Okay, we're going to go ahead and get started. Thanks for being here. This session is called Demystifying Service Mesh: Separating Hype from Reality. I'm not sure it's really even that mystical. ChatGPT just does a really good job at inventing catchy titles. In fact, there are quite a few other Demystifying sessions, even a few at this exact time slot. So apparently that's a catchy term to use. So we'll get started with the introductions. My name is Brian Redman. I'm a Product Manager on the AKS team at Azure. I've been at Microsoft for 23 blissful years, and I've worked with AKS since the early days of it. I'm a huge fan of it and proud to be working on the team with Ali.

Yeah, my name is Ali Ford, and I'm a Product Manager as well with AKS. I've been on the team for about two years, so not as long, but I've definitely enjoyed my time there. So today we're going to be talking about what a service mesh is. We're going to look at some different service mesh technology options, as well as deep dive into some of the service mesh features. Brian's going to do some demos there, which you guys will really like. And then we'll have some time for questions at the end. One quick thing I would add to remember: this is the novice track, and so we're going to approach this very much from a beginner's perspective. So if you're an expert with service mesh, no problem, hopefully you still learn something. But this is definitely focused on "I don't know what this is and I want to learn everything that I can."

All right, so starting from the basics, what is a service mesh? A service mesh is essentially a dedicated infrastructure layer for facilitating service-to-service communication between services or microservices using a proxy. We're going to see an architecture later where you'll be able to see all of that. But really we want to focus on the service-to-service part. That service-to-service communication is essentially what makes a distributed application possible. Routing that communication both within and across application clusters becomes increasingly complex as you have more and more microservices or services. And you can add a service mesh to your application to add capabilities for security, observability, and traffic management. Those are kind of the three pillars that we'll see later. And all of that without having to add anything to your application code. That part is really important, because that's essentially how people did it before we had service meshes. Before service meshes, we had to embed all the logic for common runtime operations in the application code. With all of that, and then having to do it in multiple programming languages with an increasing number of microservices as you go, this process was entirely too time-consuming, really difficult, and overall had a lot of limitations as well. So with a service mesh, we have a single framework for dealing with the traffic across those microservices, and we can do it outside of the application code. Like I mentioned, we have those three pillars here: security, observability, and then traffic management. When we're looking at security, this can be encryption protocols. There's authentication and authorization, things like who can access what and which services can talk to each other. So those are different security options. There's also observability, which essentially lets you gain insights into how the traffic is flowing. And then traffic management, which is actually controlling the traffic.
And there are options like traffic limits, shifting traffic, mirroring traffic, things like that. We'll dive deeper into some different features in all these areas in a little bit. But that's kind of the overall landscape of the value that a service mesh offers. Like I mentioned, we're going to take a look at the architecture here. So this is an example of service mesh architecture. We have the two services, service A and service B. And you can see the green area there is the mesh traffic and the flow of traffic. You can see it's going through the proxies to the services, so the services aren't directly connected; it's through those proxies. The data plane is at the pod level that we have there. And then we have the control plane at the bottom, at the cluster level, which includes istiod. In this case, it's the Istio service mesh, which is one of the technology options that you can choose from. And Brian's going to take a deeper dive into some of the other options that we do have as well.

Okay, awesome. Yeah, so some of the choices that you can make: there are, I would say, three most commonly chosen platforms. We hear a lot about Istio, probably one of the early solutions in this space. Linkerd is one of my favorites; the Buoyant company actually maintains that. That's another technology option, and HashiCorp has Consul Connect as well. In general, those three typically have mostly the same kinds of features. Obviously, they each have a lot of different things that they do. But if you're just getting into this, they probably all have the core set of features that Ali described. Some of the other technologies, like Kong Mesh, might have started as an API gateway and then became a little bit more like a service mesh; people layered in different components of this to have service mesh technology. All of these are fairly popular, and companies choose them for various different reasons. My advice would be to look closely at the requirements. Don't just use the one you hear the most about. Look into what they actually offer and how it fits with what you're doing. Because in many cases, you'll see all the different things we can do with this, and you may only use a fraction of that. And you don't want to go too far into using something that you're not really going to take advantage of.

So let's deep dive into some of these features that we're talking about. When I'm doing some demos around some of these, I'm going to use this demo application. It's just a simple pet store with a series of microservices. You see there's an order service there. We're going to do some interesting analysis there, traffic shifting and observability around it. This is actually out on GitHub. You can grab this sample and run it. It's just pure Kubernetes and runs in any kind of environment. So think about this diagram as we go. We're going to mesh it live and do all that kind of stuff with that application.

So the first thing you've got to think about when you're installing a service mesh: you install that control plane that Ali described, and then I actually need to mesh my application. I need to install these sidecars that sit alongside every service in my application. And there are a few ways to do that. I can manually do it, of course. Typically you're going to have a CLI tool that allows you to manually mesh the application.
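As a rough sketch of that manual approach with Istio's CLI (the deployment file name here is just a placeholder for one of the pet-store manifests, not something from the demo repo):

```sh
# Manually inject the sidecar into a single deployment manifest, then apply the result.
# Handy for experimenting; not the usual path for a production cluster.
istioctl kube-inject -f order-service-deployment.yaml | kubectl apply -f -
```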
More likely you're going to see labeling the namespace or some of the components of it and having some sort of auto-injection capability, so that as you start installing things, even future changes to them, they're always going to get meshed and have that sidecar there for the application. So that's typically recommended and what most companies do. You can configure that sidecar. Obviously, when you first install one of these, you just label the namespace and it gets injected. Well, maybe you want to lower the memory and CPU resources, or there are other kinds of configuration options that are exposed. Typically you can configure that, and that's something important to look into, too, in case you want to control more than just the defaults. It's also important to think about upgrade strategy. When you upgrade the components in the cluster, you're obviously upgrading things like Kubernetes over time. You're going to upgrade your service mesh control plane, and then you're going to have to upgrade the proxy, or the sidecar. These different service mesh solutions allow you to do that in different ways. You know, maybe I want to upgrade particular namespaces in a controlled way and actually decide when they get upgraded. And that can actually be done, and you'll see that when I do this demo next.

All right, so what we're going to do is just one of these auto-injections, but I want to show you a few things about how this is done. So I have a Kubernetes cluster here. I have the service mesh already installed; in this case, I used Istio. And you can see I just took a look at the istio-system namespace, and I have two pods running in it. This is the control plane. In the Istio case, almost everything is just in this istiod pod, doing all the work for me. It does the auto-injection. It does the distribution of rules and things like that to the sidecar. So I have two of them just for redundancy's sake. But if I was going to manually install, I actually can run this inject command. And you can see I just pointed it at a deployment and it will actually output the deployment that will install the sidecar. This could be handy if you just want to experiment a little bit. But again, it's not typically how it would be installed in more of a production scenario.

So the other thing we can do, of course, is auto-injection, and this is actually done with webhooks. Kubernetes supports something called a mutating webhook configuration. It just rolls off the tongue, doesn't it? But what you can do there is have match expressions, and if Kubernetes sees these particular things in what you're deploying, it will actually go and call, in this case, the inject operation on the istiod pod. And it's looking for particular labels, which I've highlighted here. This is a revision label or an Istio injection label. And actually, in this case, it's looking for two different labels. And if things are labeled properly, it'll actually call that inject command and everything gets injected. And this is what it actually looks like. So I'll add a namespace here and then I'll just label it, just like I mentioned. And then we'll take a look at what that label looks like. Again, this is where I'm talking about the upgrades. It's not just enabled or disabled; I've actually labeled it with a particular version. And this allows me to ensure that I have some level of control over that.
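Here's roughly what that labeling step can look like with Istio. The namespace name and the revision value are assumptions for this pet-store example; the exact revision string depends on how the control plane was installed.

```sh
# Create the namespace for the demo app.
kubectl create namespace pets

# Option 1: the simple on/off injection label, picked up by Istio's mutating webhook.
# kubectl label namespace pets istio-injection=enabled

# Option 2 (use one or the other): a revision label, which pins the namespace to a
# specific control-plane revision so you control when its sidecars move to a newer version.
kubectl label namespace pets istio.io/rev=asm-1-20 --overwrite

# Confirm the labels that the webhook's match expressions will look for.
kubectl get namespace pets --show-labels
```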
And so now that we have a namespace and it's labeled, we're just going to install our app. I just have everything in one big YAML file. We run a deploy of a bunch of deployments and services, and when we take a look at the cluster, we now have a mesh. And so you can see the zero out of two containers in each pod; the two is because I have my particular container and the sidecar there. And just to show you what this looks like, we're going to do a describe on one of these pods and show you where all this is in the pod configuration. So we'll take a look at the order service and the list of containers that are in it. And you can see when we go up, we see containers. The first container is the order service. This is my code and my application. You can see it's the typical version one of this particular application. But as I scroll down, now is when I see that second container in my pod. And this is the actual Istio proxy. It's the other half of the service mesh that's running on every pod in the application. All right. And when we actually go and take a look at my pods now, we can see everything's running. We've got two out of two ready. Everything is working, so we should just go test our app. I'll just port forward to the front end, the website front end of this, on port 8080, and then we'll just make sure it works. It's a recorded demo; we know it's going to work, or I would have rerecorded. I don't really like doing recorded demos, but it feels right in this case. So this is just a simplistic little pet store. We'll add a couple of items to our cart, go check out, and it will end up working, as I said.

All right. The next piece that I want to talk about in terms of service mesh features is ingress and egress. It's not necessarily always a part of a service mesh solution, but if we're going to be routing traffic in and out of the cluster to our application, we're going to want to use something like an ingress. We have a lot of choices there as well. But if I'm using a service mesh and it offers an ingress controller, it might be a great thing to take advantage of, because it ends up also being a part of the mesh to provide observability, metrics, and things like that, and it uses the same routing rule configuration that I'm using within the cluster. And so Istio offers an ingress gateway. It actually offers both an internal and an external one, and it can also help with egress traffic. Typically, these things sit behind a load balancer, regardless of where you're running it. And let's take a look at that configuration here. So if we take a look at my application, I have a set of pods and services. Nothing's exposed to the internet, so no public IPs for any of these services. We're basically going to take this storefront and expose it to the internet. You can see the gateways at the top there. Those are the Istio components that are actually acting as that gateway. And you can see I have a public IP for that external gateway. And if we take a look at my DNS setup, I just used my own DNS here and mapped it to that IP address. If we go to that name, the traffic will end up on that ingress gateway. Now, we haven't configured the gateway, so if we hit it right now, it wouldn't really do anything. So we'll jump over to VS Code and take a look at this ingress configuration. And it looks a little different from the typical default Kubernetes Ingress configuration. In the Istio case, you use a combination of a gateway and a virtual service.
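To make that next part concrete, here's a hedged sketch of the Gateway-plus-VirtualService pair being described. The hostname, namespace, service name, and gateway selector are all assumptions (the default Istio ingress gateway label is used here), and a real setup would also add TLS on the gateway.

```sh
kubectl apply -n pets -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: store-gateway
spec:
  # Selects the Istio ingress gateway pods sitting behind the load balancer.
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "store.example.com"   # hypothetical DNS name mapped to the gateway's public IP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: store-front
spec:
  hosts:
  - "store.example.com"
  gateways:
  - store-gateway           # ties these routing rules to the gateway above
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: store-front   # the storefront Kubernetes service inside the mesh
        port:
          number: 80
EOF
```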
So you can see in this case, the gateway actually maps my hostname and what port it's running on and configures that part of the equation. And then the virtual service handles matching what routing rules are being used and maps it to the actual storefront pod, so that it knows, hey, if I see this URL prefix, I'll map that to this particular pod. And then you can see it knows about the gateway, and the two are connected together in that fashion. So if we apply this YAML file, we should end up having a working application with a real DNS name. And again, spoiler alert, it works. Not shocking. So I create those two pieces, we go to the browser and hit the same website. You'll notice as well that it says not secure up in the top left; you can see there's no SSL. That's easily configured. I just did the simplistic scenario here, but it's easy to add certificates and have the SSL part of this in place.

The next piece of this is service-to-service encryption. In my opinion, the most common reason companies turn to a service mesh is that they want to be able to encrypt traffic within the cluster. It's often a compliance requirement or something like that. That's something you could obviously do yourself, but like Ali mentioned, you'd be sharing a bunch of configuration files and end up in a difficult scenario. With the service mesh, you just get it for free. That's a pretty powerful thing. It's important to note this isn't the magic security solution that takes care of everything. If I hack into your cluster and get cluster admin rights on it, I pretty much have access to everything I need to decrypt the traffic. And so when you think about security and compliance, it's a defense in depth type strategy. This is one part of a strategy for security, not all of it. That's something important to remember. Obviously, when traffic goes outside of the mesh, everything is done in plain text. And so if you're communicating there, you've got to use something else, SSL, TLS, whatever it may be, to make sure that that traffic is also encrypted. The demo for this one is Ali's going to go over to a computer over here, and she's going to hack into the cluster and put a sniffer on it and see if she can decrypt the traffic. No, we're not going to do that. You just have to trust that it works. And it does. It would be fun. Sorry.

The next biggest reason you might turn to a service mesh is around observability. There are obviously all kinds of different monitoring tools up in the expo upstairs. There are a large number of companies with tools around monitoring and observability. Service mesh digs deeper into the solution and allows me to see a lot of details about service-to-service communication. I can know how my order service communicates with my product service, and not just at the service level, even down to the API path. I can say, okay, when it calls for products, we actually see the latency versus when it maybe calls for the shopping cart. You start to get a lot more insights into what's actually happening in my application. I want to see things like, you know, bandwidth, distributed tracing, access logs. We get all of this stuff right out of the box with all of these solutions. And they all typically work with a Prometheus-type solution; that's pretty much what's going to be used as the data store everywhere. And then Grafana sits in front of it for dashboards. And I'll just show you an example of this as well, what this typically looks like. This is a Grafana dashboard.
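Circling back to the encryption piece for a moment: a minimal sketch of what turning on strict mutual TLS can look like in Istio. Placing the resource in Istio's root namespace to apply it mesh-wide is one common pattern; whether you start with STRICT or a permissive mode is a rollout decision, not something from this demo.

```sh
kubectl apply -f - <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so this applies mesh-wide
spec:
  mtls:
    mode: STRICT             # sidecars only accept mutually authenticated, encrypted traffic
EOF
```

With something like that in place, the sidecars handle the certificates and encryption between services, which is the "get it for free" part, while traffic leaving the mesh still needs its own TLS as noted above.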
So, back to that dashboard: this is the Istio workload dashboard, and it just comes out of the box with Istio. We get some basic information on how this service, or this deployment or workload, is performing from a latency standpoint. And I can get all the way down to picking pairs of services and seeing things like requests per second and failure rates. You can actually see on the left here, there's something happening somewhere in the last few hours. If I was looking at this, even if I don't know a whole lot about what this dashboard means, it looks like something happened a few hours ago. I actually ran a load test and pummeled this application with a very large amount of traffic to see what would happen. And it broke the application. And we're going to actually see why. But this kind of dashboard allows me to go in and take a look at things. I'm actually in the service dashboard here, and I've focused just on the Istio service. And I see a whole bunch of 500 errors here that occurred during that, so a whole bunch of failures occurred, and I'm wondering what's going on. And we actually can take one of these charts, expand it, and take a closer look at what's going on. So this is a little bit about response time for this particular application. And this is obviously the window where we saw the issue, and this is where that load test ran. If we take a closer look, we can see that even in the best-case scenario, our customers got very poor response times. And this is actually super useful. I can understand what's going on. If I was just monitoring the Kubernetes cluster, I might have seen an alert, maybe CPU or memory or things like that. But if I actually take a look at the workload in the cluster, I'll see that a bunch of the services had restarts around the same time. A bunch of failures occurred, out-of-memory kills and all kinds of failures. It was very fun. But what you might notice is that for the RabbitMQ component that the order service calls, there's only one pod. And it doesn't really matter how much I scale out the order service; the RabbitMQ pod, with its very small amount of memory, ends up failing and falling apart. And that's where I can actually go and now add things like auto-scaling and scale that out to make sure that doesn't happen in the future.

Another thing we can do from an observability standpoint is something called distributed tracing. Obviously, when you're working with this application, you're calling a series of services, and we're monitoring the order service and the product service and the traffic between them. From a customer standpoint, I just checked out my order in the shopping cart. I went through a whole bunch of services, and really what we need to do is be able to track the customer experience through this distributed set of microservices. And that's what distributed tracing does for us. The service mesh takes care of the hard part in connecting all of those calls into a single trace made up of spans. They're typically labeled by the service mesh, and we go to these various tools, things like Zipkin, Jaeger, and so forth. And we can actually see, okay, their experience was maybe three seconds, but the bulk of it occurred in a particular service. And now we can dig deep and understand, okay, the customer experience was slow, but it turned out it was just this one piece of it. And it's super helpful for troubleshooting and understanding what's happening in a cluster and an environment. So let's talk about traffic management.
I think it's really cool that we can do traffic management with the service mesh. It makes me think a little bit of being in a city where we've got stop lights, stop signs, one-way streets, all kinds of things that control traffic. And if we just pop into that city and say, okay, for a while these streets are going to reverse direction, it might cause a problem. Just changing the signs and controlling the traffic could actually lead to all kinds of issues. I bring this up because you really have to understand what you're doing and ensure that these rules actually fit and can consistently be applied by a team of people working on things. You're changing a bit of the behavior, or the default behavior, of an environment. And so there are a lot of good places to use this and maybe some bad places to do it. Certainly deployment strategies, things like good old blue-green deployments and canary testing; I'll show you an example of that here. I can do A/B testing or traffic mirroring, where I can actually collect all of the transactions into a separate place and do analysis separately. I can also do request timeouts, so I can gracefully handle situations where something's running a really long time. You can again build that into your code, or you can just tell the service mesh: when a timeout occurs, take this action and collect it in this place. Circuit breaking: the service mesh is really smart about load balancing, and there are all kinds of interesting things these technologies can do. I essentially can say, okay, look, I know we're just spreading traffic across these, but this one isn't responding really well, or it's not responding at all, so let's actually take it out of the mix temporarily. I heard today up at the expo that Linkerd is actually adding some cost efficiency to how it's doing routing between different components. And if you're in, say, availability zones, it's actually going to understand that a little better and maybe not send traffic across the availability zone until it has to, which sounds super cool. So there are all kinds of things like that I can do using a service mesh.

Let's take a look at what this actually looks like; this demo is kind of fun. So we take a look at the application, and you can see I actually have two versions of the order service in my cluster. So just two different deployments, labeled slightly differently. And then I have a service called order service that points to both of them; it just points to the order service label. And I'll go into the deployment here and show you what it looks like. But I'm not really using any Istio routing at this point. It's just standard Kubernetes. Each of these apps has a label for the app and the version. And so if I just point my service at the app label, it's just going to randomly hit version one and version two. All right? If we go in, I'm going to run a little test here in the bottom of my terminal window, and I'm just going to loop and curl the health endpoint on this order service, which actually returns the version of the pod. And you'll see as soon as I do this, it will just be version one, version two. And it's basically an even spread, but it's just round robin. All right? So what we want to do is actually go in and take advantage of the traffic shifting capability, in this case in Istio. And I'll show you what that configuration looks like. We'll pop over to VS Code and take a look. So Istio uses a couple different ways of doing this.
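As a preview of the configuration walked through next, here's roughly what that destination rule and weighted virtual service pair can look like. The host name, namespace, version labels, and the 80/20 split are assumptions matching the demo description, not the exact files from the repo.

```sh
kubectl apply -n pets -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: order-service
spec:
  host: order-service
  # Subsets map the existing version labels on the two deployments
  # into named groups that routing rules can reference.
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service
spec:
  hosts:
  - order-service
  http:
  - route:
    - destination:
        host: order-service
        subset: v1
      weight: 80    # canary step: keep most traffic on v1
    - destination:
        host: order-service
        subset: v2
      weight: 20    # send a small slice to v2 and watch the dashboards
EOF
```

Changing the weights (say to 20/80) and re-applying is all it takes to keep shifting, which is exactly what the next part of the demo does by hand.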
Istio uses that virtual service thing that we talked about a bit, which I'll show you. But the first thing you do is actually create what's called a destination rule, and this is where I define the subsets of traffic for my application. And this might look a little redundant: I've already labeled my app v1, and now I've made a subset that points to that label. For this simple scenario, I probably didn't need this type of thing, but that's just how Istio has implemented it, and it does allow for much more complex scenarios. There's a good reason for this; in a scenario like this one, it just might feel a bit redundant. Then I'm going to create a virtual service, and notice that in my case here, the order service sends everything to v1. And so my first example here is: let's actually make sure everything's going to the version that we want. So we apply both those configurations to the cluster, and we should see the traffic mix at the bottom be all v1. And again, we know that it will be. So we've actually controlled the traffic, and now we're using our service mesh to control the traffic. Let's actually go in and do a canary deployment. In this case, we're going to shift, say, 20% over to version 2, just add that rule to the cluster, and see if it actually does what we're hoping. And so it's a slightly different version of the same virtual service. And notice now we're referencing both subsets, and we're saying, hey, send 80% of the traffic to v1 and 20% to v2. And so when we apply this, we should actually see fairly immediately that the traffic shifts a little bit over to version 2, since we've been just hitting version 1. And you'll see that start there at the bottom. Obviously, from here, if I was doing a canary deploy, I might go 30%, 40%, and eventually completely shift it over. Here I'll just switch it to be the reverse, 20/80. And then when I apply that, obviously it'll also start switching essentially mostly over to v2.

Now, you look at this and you might think, okay, am I just manually applying these things? How am I delivering all these different configurations for how I'm shifting traffic if I was doing a canary deploy? It's unlikely I would be sitting there applying it, going to look at my Grafana dashboard, and then applying it again. Maybe someone would do it that way, but it's probably not ideal. So there's a very cool technology, something that you can learn about here at the conference, called Flagger. And Flagger essentially does what I just described, but does it in an intelligent, automated way. So in this case, I tell Flagger, hey, these are my services that I want to deploy. I drop in, basically, a CRD called Canary. I configure it with things like: here's my Prometheus instance where you can look at metrics, here's the metric you should look at, and here's how aggressive I want my canary rollout to be. Maybe I want it to sit there and run for days or hours at a time before it rolls to the next step, but basically that Canary resource in Flagger lets me control all of that. You drop that configuration into the cluster, probably via a CI/CD pipeline, GitOps, whatever technology of choice, and then Flagger will kick in and start to do that deployment for me and progress through it. We're not doing a demo of this. I've done it before, actually, at a previous KubeCon. It works very well. It works with basically the majority of the service mesh solutions we saw earlier. It's pretty cool for doing this kind of deployment.
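And as a hedged sketch of what that Flagger side can look like, here's a minimal Canary resource. The target name, namespace, port, intervals, thresholds, and metrics are all assumptions for illustration; Flagger's documentation covers the full set of options and how it generates its own routing rules for the target.

```sh
kubectl apply -f - <<'EOF'
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: order-service
  namespace: pets
spec:
  provider: istio
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service        # the deployment Flagger will progressively roll out
  service:
    port: 80
  analysis:
    interval: 1m                # how often Flagger evaluates the canary
    threshold: 5                # failed checks before it rolls back
    maxWeight: 50               # stop shifting once the canary carries 50% of traffic
    stepWeight: 10              # shift 10% at a time
    metrics:
    - name: request-success-rate
      thresholdRange:
        min: 99                 # roll back if success rate drops below 99%
      interval: 1m
    - name: request-duration
      thresholdRange:
        max: 500                # roll back if latency exceeds 500ms
      interval: 1m
EOF
```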
I don't know where we're at on timing because of my clock there, but we have a little bit of time. Going to the summary, here are some things that we want you to take away from this. First, obviously, the definition of service mesh. We went way deeper than just that, but it does make distributed systems easier by facilitating that service-to-service communication. There's a wide range of options that you can choose from. In this case, we used Istio, but there are a lot of different options that you can pick from. Focus on your requirements: make sure that you understand how complicated you want your service mesh to be, and you can go from there when deciding. Use it carefully. Brian had a lot of warnings in there, but basically, if you're setting up your security, that's not all the security you need. You need to make sure that if you're changing things, you know the impact of that. Service meshes are pretty complicated, so if you are making large changes, you can break your application as well. So just be careful when you're making changes and setting up your service mesh. Lastly, we talked about Flagger, but there's also other interesting new technology. One example is sidecar-less service mesh. The example we showed today uses sidecars, but there are also sidecar-less options; one example is Istio Ambient. That's just something to keep an eye on in the world of service mesh. Awesome. We definitely have time for questions, and we just ask you to go up to the microphone if you do have a question. Happy to take a few, though.

Could you possibly imagine that working in production? Is all of this for lower environments, or do people really do that to their live prod systems? How does that all work?

I think that, yes, companies definitely are, but it's the kind of thing that, again, we say be careful about for a good reason, because you do have to understand the implications of it. You can break the service mesh and thus break your application. Typically companies have to certify these things; at a bank, you'd have to make sure that it meets all the requirements of the bank. I definitely work with financial institutions that use this kind of technology, and I think this room kind of proves it. This is really a beginner session on service mesh, and a lot of people are here. As much as it's been around for a while, it also is still fairly new to a lot of people.

Two quick questions. Does service mesh work across multiple clusters?

That's a great question. It does depend on the technology. They're all doing something in this space. All of them have kind of multi-cluster mesh solutions. Still somewhat fairly new, but absolutely. There's a lot you can do to either mesh traffic or share the mesh configuration across them, like some of those traffic shifting rules that can understand what's going on in the other cluster, collect observability data together, and things like that. There's absolutely a ton of that, and they all do it a little bit differently.

Can you do traffic shaping across multiple clusters, at least for outstanding requests and connections across clusters?

I'm not sure. Honestly, I bet you can. I don't know if you know either. I suspect you can, but it may again depend on the technology; Istio might offer it in a very similar way, but they might not have done traffic shifting across clusters. So it's something to investigate, but I don't necessarily know. Interesting question.

Dependencies are implicitly encoded in either code or configuration in your app.
So now you insert a service mesh; how does that change my app's engagement as it tries to reach its dependencies?

Yeah, do you want to take that? Yeah, I mean, the beauty of it is that we don't impact the developer. The developer is just going to push their code, a container is going to be created, and then when it shows up in the cluster and it's meshed, we don't necessarily have to know anything about it. There are definitely no dependencies within the code or even the container. It does affect the YAML configuration, of course. But again, we're using that auto-injection, so that person doesn't even need to put anything in the manifest that we're creating. There possibly are some non-native Kubernetes things that someone could do that might get broken by something like this, but that's kind of the beauty of it. I can turn on TLS in a cluster and I don't even need to tell the development team that we're doing it. As an SRE, I can start to do all this. If you get a little crazy with traffic shifting and traffic rules, you probably could affect the architecture of the app, and thus the developer would be greatly affected. But yeah, that's kind of the beauty of it.

Just using a Kubernetes service address, for instance, either namespaced or within the same namespace, that will be implicitly routed via the mesh and the injected proxies?

Yeah, absolutely. Again, when you start adding virtual services is where you're essentially overriding that, and that's where things can change. Which is the goal.

Okay, so you actually mentioned Consul as a tool. I remember using Consul five or six years ago as a service discovery tool for a project, in conjunction with other HashiCorp services. Would you mind explaining to us the difference between service discovery and service mesh, or are they pretty much the same thing?

No, that's a great point. There's a lot of history to this. Some companies started with something different and eventually ended up there, and some might have started with service mesh. You're right, Consul was a service discovery tool. It is a service discovery tool; it's very good at that. And I think at one point they realized some part of a service mesh was already part of the work they had done, so let's add the proxy and add some of that capability, which is what they've been doing since it came out, which is at least three or four years ago. Time flies. So at this point it's Consul Connect that offers the service mesh part, but it works in conjunction with Consul's typical functionality, which is pretty broad and pretty powerful.

It's almost like a proto service mesh? I mean, you could say that. It was an API gateway and then it kind of turned into being more of a service mesh, and so each of these technologies kind of offers a little something different. Like, Istio wouldn't be called an API gateway, but it has an ingress gateway. So again, they all have little nuances, and that's why you really want to look at them closely when you're trying to decide which one. Yeah. Got it. Thanks.

Thanks for the novice session. So I have a novice question. Right. For example, when you do traffic management, all of this happens at the sidecar, but is it a centralized algorithm running somewhere, or is there no centralized entity doing this? So how do we think about that? Can it happen on a control plane? That's one. The second is: how does service mesh fit within the broader category of CNIs, eBPF? All of them have some overlap. So how do we distinguish between what can go where? Yeah.
The first part of it, I guess, is more that I'm layering in these rules; I'm trying to think of how to answer that question. Yeah. When you think of it, in some cases I do have the ability to override what the Kubernetes cluster is doing, say with the virtual services and so forth. But I think the thing that ends up being challenging is knowing what you've implemented. So a lot of them do offer visualization, so you can actually see, like, if I've set one of those canary 80/20 rules, it'll actually show it to me visually and show me that the traffic is occurring that way, and that helps you not end up affecting how it's working sort of normally. But it is very separate from the CNI; this is a different layer of the communication.

Thank you for the talk. I'm just following up on that question. For example, if load balancing is happening in the egress of a pod, so like 80/20 splits and all such traffic management decisions, is it happening in the egress or is there some central controller?

Yeah. What ends up happening is the Istio control plane configures the proxies. I don't know if it's push or pull, frankly, but when you drop those configurations in, those are Istio CRDs that we were showing, and the control plane then configures the proxies to have them understand it, and you saw it happens basically immediately. So that's why that component is there. The central configuration is the control plane, and the proxies actually implement it.

I see. So my second question was: what are the downsides of service meshes in terms of overhead in CPU and request latencies?

That gets asked every single time, and I mean, officially, yes, there is overhead, but it's super lightweight. These proxies are very efficient. I mean, the Linkerd one is written in Rust; it seems like it's barely there. But it's valid to make sure and test and ensure that, hey, I do have to account for even a slight percentage of increase in overhead. Thank you. We'll do one last question and then we can certainly chat outside after that.

So I had a question about something you said at the beginning of your talk, and that was: in a traditional legacy network you have a lot of different apps that implement a lot of legacy discovery or legacy code. So how do you, or how have you, advised people that are moving to service mesh to consider that problem space, where you may have a library that's very opinionated about how to find things, and you want to add Envoy or whatever your favorite sidecar is and say, hey, it'd be great if you used that and we didn't fight over the settings?

Some of that just comes down to, and it may not even be related to service mesh, but when I'm bringing a legacy application to be cloud native, there are a lot of things I have to consider. I might have been doing things way differently when it was a monolith on a VM, but when you come to Kubernetes, there's no sense doing it maybe that old way when the cluster does so much of that for you, and that's probably similar with Istio. So in some ways it's probably easier to do a little refactoring of the application and try to adapt it to that new infrastructure. If not, then I think I'd be careful with my service mesh, because if you're dealing with something that's maybe not typical, then the behavior might not be typical as well. Thank you. Sure. Well, thanks everybody, really appreciate having you here.