Awesome. Thank you so much for having us, and thank you to the folks joining live and anyone watching the recording afterward. I'm excited to talk about this technology; this is what we work on here at Solo.io with our customers all around the world. First, a quick introduction. I'm Christian Posta. I've been here at Solo for close to four years now, and I've been working on Istio since the very beginning of that community. Here at Solo we've been working on eBPF, Cilium, and Envoy for quite a while. I also recently published a book on Istio with my co-author Rinor, back in March. It's a real deep treatment of Istio and a lot of the lessons we've learned; it took us about three and a half years to write, so check that out. The last bit: what we do here at Solo, and why I'm excited to talk about this technology and work on these types of problems, is application networking. You can think of that as a modern take on managing APIs and service connectivity in a cloud world, which is quite a bit different from what we saw 10 or 15 years ago. What are the technologies, especially open source technologies, that can help us solve those problems in a way that better fits our GitOps and DevOps workflows? Things like Istio, Cilium, and Envoy form the foundation of what we see as the right solutions, and you'll see them in any of the products that we take to market and work on with our customers. Happy to chat about any of that offline as well. But what we're really going to talk about today is how things like Kubernetes, and modern application architectures, microservices and so on, require a different way to solve problems around connectivity. Back in May, at KubeCon EU, we announced that we're bringing Cilium, a very exciting open source Linux and container networking project, into our solution to help facilitate defense in depth, networking in layers, and solving these problems accordingly. So the question, specifically, is: how do services, applications, and clients talk to other services and APIs, whether inside a deployment in an organization or across boundaries? A lot of how we solved these problems in the past doesn't translate very well to these cloud environments, which are very dynamic and change quickly. That's the whole point of microservices: being able to change code quickly and independently, and move very fast. Security, resilience, observability, how you track applications and understand what an application is: the way we've done these things in the past doesn't hold up as well. So when we talk about application networking, we're talking about solving these connectivity problems, along with some lightweight API-management-style problems, in a way that can be applied independent of where the application is deployed, how it was written, or what language was used; maybe even pulling some of this out of the application entirely, so we can change it through policy, dynamically and quickly, to keep up with the way our applications are changing.
Add to that the fact that most organizations are looking at a multi-data-center, multi-cloud, multi-home approach to how they build their applications and services. They may have existing data centers, existing VM or physical server deployments; they may be exploring Docker and Kubernetes, or maybe they're fairly advanced and have already adopted these technologies, in a private cloud, a public cloud, or on prem. Now they need consistency in their security policies, their resilience policies, the behavior of these applications. You can't just try to figure this out when things go wrong. We need some level of consistency and understandability of the system, and how these things connect is extremely important to that. If we look at security, traffic control, API management, those have changed as well. In the past, what we did was force everything through some centralized system, whether it was internal or external traffic: stand up a lot of load balancers to represent these applications and hope that would be dynamic enough. But we've found that it's not. Centralized systems tend to cause all kinds of bottlenecks; that technology wasn't built for these highly distributed, constantly changing environments. A lot of the same problems crop up when you think about security and how trust is established on the network. When containers and applications are coming up and down, changing, auto-scaling, failing all the time, how you write networking rules and how you secure and identify what a service is becomes a very hard problem in this world. So a lot of these solutions have become outdated. New technology, some of it, maybe a majority of it, originating from the large cloud providers that have already gone through these problems, has become open source, and a lot of it is built specifically for solving these types of problems. So let's pause right there and go into a world, based on Kubernetes, where we can get a little more dynamic, and see how we can apply security or firewall rules in an environment like this. The first thing we'll do is go to our trusty terminal, since there's a lot of typing I'm going to need to do. We'll take a look at a script that's running; this is a live demo. First, we're just installing a couple of applications: a helloworld application and a sleep application. If we come over here and look at our cluster, we can see that these were installed: helloworld v1 and v2, and sleep. From the bottom pane, we're going to make a call from the sleep service to the helloworld service. If we call it a couple of times, we can see the traffic going through. Now, if I come over here, go to Deployments in the default namespace, take the helloworld v1 deployment and scale it to, say, three replicas, and give that a second, we can see that in this dynamic environment we can quickly scale up applications: we now have a bunch of helloworld pods running.
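For anyone following along, the setup in this part of the demo boils down to something like the following. This is a minimal sketch: the manifest paths, service port, and path are assumptions based on the Istio sample apps, not the demo's actual script.

```sh
# Deploy the demo apps into the default namespace (assumed manifests;
# the Istio samples repo ships equivalent helloworld and sleep examples).
kubectl apply -f samples/helloworld/helloworld.yaml
kubectl apply -f samples/sleep/sleep.yaml

# Call helloworld from the sleep pod a few times to see load balancing
# (port 5000 and /hello are the Istio sample's defaults).
kubectl exec deploy/sleep -- curl -s helloworld.default:5000/hello

# Scale one version up and watch the new replicas start receiving traffic.
kubectl scale deployment helloworld-v1 --replicas=3
kubectl get pods -l app=helloworld
```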
In this world we can still call these services, and we get load balancing, or at least simplified load balancing. Things continue to work; an environment like Kubernetes was built for this kind of dynamic system. Now, if I create a new application and put it in a different Kubernetes namespace, a new client for this demo, we get a sleep2 namespace. If we come over here and find our namespaces, we see sleep2, and there's our new client, a new sleep application running. Now if I try to call from it, you can see on the bottom screen that from the new sleep client in sleep2 we call helloworld a couple of times and traffic goes through. So we can add clients, we can scale up, we can scale down. But now we need to control and define usage policies, access or security policies, about which services are actually allowed to communicate with which other services. We're going to do that in Kubernetes using a network policy. This might not be new to some of you who are veterans in this area, but Kubernetes networking, how its service abstraction is implemented, and what we just saw can be implemented by kube-proxy using iptables, which I've dumped here. We can take a look at what some of those rules look like; a little complicated. But we can also leverage various plugins to do network policy. So let's look at what a network policy might look like. It's declarative configuration, and this is important, because we don't want various scripts that say "if this pod IP is X and that pod IP is Y, then it's okay": those pod IPs are cycling and changing all the time. We want to declare that apps that look like this can talk to apps that look like that, even as their pod IPs change. In this case we're saying, for the helloworld service, we'll allow traffic only from workloads or clients that live in a namespace labeled a certain way; in this case a `project` label with some demo value. So you can only call the helloworld service if you're a client living in a namespace with that label. Let's label the default namespace correctly, so that the sleep service in default can call the helloworld service; but the client in the sleep2 namespace should no longer be able to call helloworld. Let's try calling it from sleep2. We see here that the call doesn't succeed; the networking rule, the policy we just put in place, does not allow it. The max-time flag gives it three seconds, so the call waits three seconds and then reports that it can't connect. And if we scaled up the clients in sleep2, it wouldn't matter which pod the call came from; the policy looks at certain attributes and is dynamically aware of how to apply itself. A sketch of this policy follows below. All right, let's clean up a little before we get to the next section. So: we have a dynamic environment, and we have some ways to set up policy, but we might need to go deeper than that. That's where these plugins, these CNIs, come in and provide a lot of power. Kubernetes has been built to be very pluggable.
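Before moving on to CNIs, for reference, the namespace-label policy just demonstrated looks roughly like this. It's a sketch: the exact label key and value (`project: demo` here) and the `app: helloworld` selector are assumptions based on what was described.

```sh
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: helloworld-allow
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: helloworld          # once selected, all other ingress is denied
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: demo        # assumed label; only labeled namespaces may call
EOF

# Label the default namespace so its clients (the sleep pod) are allowed;
# sleep2 stays unlabeled, so its calls time out.
kubectl label namespace default project=demo
```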
For things like networking, Kubernetes has some assumptions, but it doesn't force you to implement those assumptions a certain way. We can plug in and plug out the various networking mechanisms we might want to use for our workloads. A particularly interesting CNI, or container networking solution, is Cilium. Cilium has been built from the core to implement networking using a technology called eBPF, which I'll get to in a second. What it does is dig into the Linux kernel and take very deep control over how a packet gets routed into the system and between containers. It determines the best paths, optimizes for certain scenarios, and lets us implement networking policy at a layer in the kernel, very programmatically. So Cilium brings in security controls and advanced networking policy that is ultimately implemented in the kernel with eBPF. With this foundation, we can do things like scalable service load balancing. What I mean by that: you saw the iptables rules I dumped earlier in the Kubernetes cluster. When you get to a certain number of nodes, pods, and services in Kubernetes, iptables, powerful and awesome as it is, starts to slow down and degrade, because iptables rules and chains are evaluated sequentially, in order, and the more services, nodes, and workloads you have, the longer those chains get. With eBPF we can do fairly sophisticated service load balancing without using iptables and netfilter at all: we go directly into the kernel, capture packets at certain points, and build up the service load balancing capability inside the kernel itself. That means we can scale to larger clusters, with lots of dynamic, constantly shifting and changing workloads. We can implement sophisticated network policy, which we'll look at in a second. And with eBPF we can capture what's happening in the network at the source, where it's happening in the kernel, and build dashboards, telemetry collection systems, and so on. So Cilium is a fairly powerful networking layer, or CNI, in the Kubernetes world that, like I mentioned, is built on eBPF. Now, eBPF itself is not all that new, but recently the kernel has made some advancements to support it. What it is is a programmable engine in the kernel that allows you to hook into certain event points. For example, in terms of networking: if a packet shows up on a network interface, can we intercept it, take a look at it, and make decisions based on it? Can we write our own programs that are safe to run in this VM in the kernel? The kernel does validation and so on before it will run them, and then basically injects them. The next slide shows that it's sort of an event-driven programming model: the Linux kernel has these hook points built in throughout, and specifically in terms of networking, we can evaluate events at those points and make decisions about packets, extremely performantly and efficiently, doing the routing and manipulation we might need without going through things like netfilter and iptables.
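To make the earlier point about sequential iptables chains concrete: on a node using kube-proxy in iptables mode, you can watch the rule count grow with the number of services, and each new connection to a Service VIP walks those chains in order. A rough illustration, assuming shell access to a node; the chain names are the standard kube-proxy ones:

```sh
# Count the NAT rules kube-proxy has programmed; this grows with
# the number of services and endpoints in the cluster.
sudo iptables -t nat -S | grep -c KUBE-

# Look at the per-service chains that get walked sequentially
# for each new connection to a Service VIP.
sudo iptables -t nat -L KUBE-SERVICES | head -20

# By contrast, Cilium's eBPF kube-proxy replacement keeps services in
# hash-table maps, so lookup cost stays roughly constant as services grow.
```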
So, Cilium implements a lot of those capabilities in eBPF. Let's take a look at Cilium real quick. We can see here, in our Kubernetes cluster, that we do have Cilium installed. We only have one node in this cluster, so you only see one agent, but the Cilium agent runs per node. This agent, you can see it's in the kube-system namespace, has the permissions to manipulate eBPF maps, which are data structures that can be shared between eBPF code and user space, as well as to install the eBPF programs themselves. First, let's check what Cilium knows about our system; let's look at its status. We can see that we don't fully replace kube-proxy here, which is why you saw the iptables rules earlier, but Cilium can completely replace kube-proxy: all of the service redirection and load balancing can be handled by Cilium. We can also see that there are a handful of eBPF maps for implementing the service load balancing and things like network policy, and we can see the cluster health: all the nodes and agents are up and running correctly. Next, let's look at the eBPF load balancing list. We have our Kubernetes services, and Cilium has identified them. Like I said, instead of leveraging iptables or even IPVS, Cilium builds a map of the services in the cluster and leverages eBPF to capture the endpoints and provide the actual routing. So now, when we make calls from the sleep service to helloworld, those packets get captured at the eBPF hook points in the kernel, and it's the eBPF programs that say: you're trying to talk to the helloworld service; I know about these backends; I know how to do the DNAT and the SNAT and route the traffic correctly, without hitting any of the other parts of Linux networking, the iptables and netfilter components. If we also look at what Cilium knows about the system, it knows about the various endpoints running in our cluster, and it knows about something called identity: how to identify which endpoints represent a specific service. If we take a look at the sleep endpoint and its output here, we see that these labels are what identify the workloads belonging to the sleep service. If I get the identities from the system, we can see the numeric identities that say: this is the sleep service, and any pods that might be scaling out or in are all part of the sleep service; they get identified by a particular number that is then used internally to implement things like network policy. So if I get the Cilium identity for the sleep service, we have our number, and we have the labels that make up the grouping of pods, or IPs, that constitute the sleep service.
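The inspection steps just walked through map to the Cilium agent's built-in CLI; roughly the following, as a sketch run from inside the agent pod (the DaemonSet name `cilium` in kube-system is the default install layout):

```sh
# Open a shell into the agent running on the node.
kubectl -n kube-system exec -it ds/cilium -- bash

# Overall agent health, including whether kube-proxy replacement is active.
cilium status

# The eBPF service load-balancing map: service VIPs and their backends.
cilium bpf lb list

# Endpoints the agent manages, with the labels that drive identity.
cilium endpoint list

# The numeric security identities derived from those labels.
cilium identity list
```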
Now let's take a look at how we might specify policy using Cilium. Cilium provides a superset of functionality on top of Kubernetes network policy, using these endpoints and identities under the covers to implement the policy. Cilium does also implement standard Kubernetes network policy, so if you just want to stick to the plain, lowest-common-denominator functionality, Cilium supports that. But for more expressive capabilities, the CiliumNetworkPolicy custom resource in Kubernetes allows you to be more expressive: specifying things by DNS, for example, and being more fine-grained than Kubernetes network policy allows. In this case we'll apply a CiliumNetworkPolicy, and if we try to call from sleep, we should see that it continues to work; but if we call from sleep2, we see the same behavior as before. Cilium is in the data path, evaluating which IPs and identities are calling which other identities, and blocking traffic that shouldn't be going through. Now, Cilium can also do a bit of layer 7 handling, and network policy around layer 7. If we look at this network policy, we can see that we can specify HTTP-type rules as well: not just which services can call which other services, but down to the level of which path on an HTTP call is allowed between those services. So if we apply this... what we notice is that we've just added a few more rules to our network policy here. I can still call things, because the /hello path is part of what we're calling here. But you'll notice under the covers that Cilium spun up a separate component. Oh, no, don't delete; one second. I ruined the automation that was supposed to help me here, and it did not help me at all. So we're going to apply the resources, I think L7, and put that back. We'll then log into our Cilium agent, and what we'll see is that the Cilium agent, because it saw an L7 policy, actually spun up an Envoy proxy. At layers 3 and 4 of the network, Cilium will use eBPF and optimize away the network routing; but when it comes to layer 7, the traffic is routed through Envoy, and Envoy helps enforce the policies specified at layer 7. And you can see that the Cilium agent runs per node, per host basically, and is shared by all the workloads on that node; a sketch of this L7 policy follows below. Let's come back to the presentation. One of the things that becomes very interesting: we were able to see that in a dynamic environment with Kubernetes, we have some mechanisms for enforcing network-level connectivity and policy. We can use things like Cilium to get a much deeper and richer take on telemetry collection, routing enforcement, even the implementation of the Kubernetes service abstraction itself in the cluster, to improve performance and give us a certain level of observability. But when we think about applications communicating with each other, APIs, and the various policy enforcement we want to make at the granularity of an actual application or workload, that's where things like the service mesh start to come into the picture.
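Before we get there, for reference, the L7 Cilium policy from the demo is along these lines. It's a sketch: the label values, port, and the /hello path are assumptions based on the demo apps.

```sh
kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: helloworld-l7
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: helloworld
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: sleep
    toPorts:
    - ports:
      - port: "5000"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/hello"    # only this path is allowed; the node's
                            # Envoy proxy rejects everything else
EOF
```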
What the service mesh does is put these policy enforcement points next to the application, so they almost become one with the application, and it does this by adding sidecar proxies. These sidecar proxies intercept the traffic, whether it's outbound from a particular service, service A in this case, or inbound to a particular service, service B in this case, and they apply various layer 7, application-layer constructs: retries, circuit breaking, request canarying, traffic splitting, these types of things. The service mesh, like I said, does this at the application layer, with a proxy deployed per service instance; it doesn't matter what the underlying network looks like or where it might be running. So we get policy enforcement happening at the highest layers and at the finest-grained scope, with each of the applications. That's where something like Istio comes into the picture. You have Kubernetes, and you have the CNI, like Cilium, providing rich container networking and network policy; but now we can get down to the level of the application instance. We can collect telemetry about each application instance, and we can provide a cryptographic identity that represents the application, not just its IPs, at the granularity of an application instance, not just the node it runs on. All right, let me do a quick demo; I want to show that. We're going to install Istio. If we go over to the other pane, you can see Istio starting to install, adding a few more services here. And the next thing we want to do, now that Istio is installed: we see that the applications running in our default namespace don't have the Istio sidecar; the proxy per workload, per instance, that I mentioned is not there right now. So we need to enable that. We'll do that by labeling the default namespace, and then we'll restart the pods. So there is some amount of planning involved; it's not something you get completely transparently, without taking some action to install. We do need to get those sidecar proxies in. If we watch the pods slowly coming up, you can see the old ones going away, and the new pods coming up with more than one container: two containers here, one that is the workload and one that is the sidecar, the service proxy that runs with the instance. Okay, so now we can see that our workloads have the sidecar running. From here we can do things like enabling mutual TLS, so that connections between services in the mesh use a mutual TLS connection with a cryptographic identity. Now let's take a look at that cryptographic identity. Is that right? Yeah, it looks right. Basically what we're going to do is go into the mesh, into our sleep application again, and use some tooling to connect to a client, and we want to see the certificates that are now being presented by the service. This helloworld service is now presenting a certificate saying: hey, I'm helloworld, you tell me who you are. But it's actually the service mesh proxy that's doing this. So if we run this...
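For reference, the steps shown up to this point reduce to roughly the following. This is a sketch: the install profile and the deployment names are assumptions.

```sh
# Install Istio (the demo profile is an assumption).
istioctl install --set profile=demo -y

# Turn on automatic sidecar injection for the default namespace, then
# restart workloads so pods come back with the istio-proxy container.
kubectl label namespace default istio-injection=enabled
kubectl rollout restart deployment sleep helloworld-v1 helloworld-v2

# Require mutual TLS for all workloads in the mesh.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when placed in the root namespace
spec:
  mtls:
    mode: STRICT
EOF
```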
Cross fingers... and it did not show up. Why not? One second. You get to see me debug it live; this was just working. All right, that looks correct, but we're not seeing the certificates. Why not? I do not know. That is odd. What I was expecting to see is that when we make the call, with mutual TLS indeed enabled, we would see the certificates and be able to prove out the cryptographic identity that says: this is the helloworld service, or this is the sleep service. But we're not getting that. We might have to just move on, but let's take a look. The mutual TLS mode is strict; we put this into the default namespace. Let's also add it to istio-system and see what happens. One second. I specified it as default... if we do this... no. All right, I'm not sure; this was just working. I must have an issue with my configuration here somewhere. So we won't be able to see the certificates, but actually, let's give the next part a try and see what happens. If we have that cryptographic identity, we can specify what traffic is allowed and not allowed based on the cryptographic identity, based on what's happening on the connection itself, not just where it originated. So here's an authorization policy: helloworld is selected, right here, and it says the helloworld service can only get traffic from the cryptographic identity represented by the sleep service; and not just sleep, but sleep in the default namespace, and only these paths and these ports can be used. So if we apply this... down below, we can see that we cannot call it. Something is really going on with my environment here; apologies, I don't know why. But... the YAML file specified default. Yeah. We should have it here, and we have it in both; let's delete the one in default. Still no. Let's do one more thing and then we'll just have to move on. Looking at the logs, I'm not seeing any issues here. Give it a second to complete; let's try that. No, I don't know. All right, well, I clearly messed something up in setting up this cluster. We will continue. Okay, so here's what we did see: we have networking in layers, we have security in layers. The CNIs, things like Cilium, can provide some amount of layer 7 capability, but not the full amount we need from something like a service mesh; and the service mesh uses proxies that get deployed with each workload instance to implement that layer 7 capability. So we can go from one proxy on a node to a proxy on every single workload instance. What is a reasonable approach, especially when you're looking at it from the perspective of: I just want to apply policy; I don't care whether the service mesh or the CNI enforces it; I just want to consistently apply my shared policies about APIs communicating with each other, and users communicating with those APIs, and so on. Then the question becomes: how do we implement the data plane to support that? How do we tie things in at the CNI if we need to? Where does eBPF come into the picture, and what trade-offs are we willing to make when we implement this data plane?
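The authorization policy described in this section would look roughly like this. It's a sketch: the port and path values are assumptions based on the demo apps; the principal format is Istio's standard SPIFFE-style identity.

```sh
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: helloworld-allow-sleep
  namespace: default
spec:
  selector:
    matchLabels:
      app: helloworld
  action: ALLOW
  rules:
  - from:
    - source:
        # The principal tied to sleep's mTLS certificate:
        # <trust-domain>/ns/<namespace>/sa/<service-account>
        principals: ["cluster.local/ns/default/sa/sleep"]
    to:
    - operation:
        ports: ["5000"]
        paths: ["/hello"]
EOF
```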
Right, so some of the things we're probably very interested in are resource overhead and usage. If you're weighing one proxy per node against potentially thousands of sidecar proxies across a cluster, you're going to be thinking about what trade-offs you can make on resource overhead. At the opposite end of the spectrum, if you try to share everything in a single proxy, you start to run into noisy-neighbor type issues, especially since, working with our users, we know that extensibility and customization are really important to the data plane in environments like this. We see that the service mesh, even at the granularity of a workload instance, gets you about 85% of the way there; but in environments where applications are already running, the wonky enterprise environments with backward-compatibility needs and so on, that last 15% needs to be implemented somehow, and it's very specific and custom to a particular application. So how you trade off resource overhead, extensibility, feature usage, feature isolation, and so on becomes a very important topic. So does how you specify your security boundaries, how you identify particular workloads, and what happens if one of those proxies serving layer 7 becomes compromised. Last, and not least by any stretch: how you actually introduce the service mesh or these capabilities, and how you upgrade, maintain, and patch them in the data plane, are all very important. So if we look at those dimensions and how we might implement a data plane to account for them: first there's the sidecar deployment, which is just about every service mesh deployment out there, where we deploy a proxy per workload instance. We get good feature isolation, but not-so-good resource overhead. In terms of those dimensions, we're trading off resource overhead for feature isolation, extensibility, tenancy, and very fine-grained security; like I tried to show you with Istio, we can get workload identification down to the actual pod. But operationally, upgrading and maintaining this, having to inject sidecars and restart workloads and all of that... actually, I think I should move that little bullet over to the left a bit more; that's not very ideal. The alternative, at the other extreme, is to share one big proxy on the node. All of the traffic flows through these proxies, and we implement service-mesh-like behavior, these enforcement policies, down at that layer. We get much better resource utilization, because now we have only one proxy, albeit a much bigger one, versus the thousands you might see across a cluster with sidecars. But then you run into tenancy problems: feature isolation and extensibility become much more disruptive. We see this all the time when people stand up shared gateways that try to do everything for all of their services and all of their clients. Security granularity and feature isolation are the things we trade off when we use a single proxy per node or per host.
Now, those might be the two most extreme usages, but they're not the only approaches. We can try to find a balance in between, and maybe use a proxy per identity, so that we're not trying to share identity, extensions, and custom configuration across all of the applications on a particular host. This model gives us somewhat better resource overhead, it gives us feature isolation, and from an upgrade standpoint we can upgrade these proxies independently of the applications; it still provides the level of security granularity we need when services are communicating with each other. There's also an approach that removes the proxies from the end nodes completely and implements these capabilities somewhere else, in a pool of proxies that align with an identity, with a service account. In this model (oh, I didn't put the last line on the slide), operationally we get much better separation; we get better resource usage; we might take some penalty in performance, although your environment may be able to account for that; and we maintain our extensibility, our tenancy, and our security granularity down to the level of a particular identity. So there are various models for how we can implement a policy enforcement layer that trades off the best of both worlds; it's not one or the other. Now, the last thing, and I don't have time for it now (I actually had a separate cluster for this demo), is that these policies can potentially conflict with each other. What we typically see people do, and this is what we did at Solo, is build an abstraction layer that focuses primarily on what policies you care about, not how they get implemented: what workflows you need, what tenancy constructs you might need for your teams, how you actually operationalize this and enable it among your teams. Then let the underlying data plane pieces, however they're implemented, be driven by configuration that's all automated. That's typically the path we've seen people go down when they're trying to keep things consistent, because you don't want the Istio policies to say "yes, you can do this" while the networking policies say "no, you can't," and then find out at runtime: why are we enabling this on the Istio side? Why don't we just get consistency across all of the networking policies? That's why it's important to treat them as just that: these are policies; I don't care how they get implemented. So, just to recap: we looked at how the world and the way we deploy applications have changed. We need to be more dynamic in the way we deal with networking in general, and application policies in general. We do have good open source solutions for that. There are various trade-offs that you should think through when you're considering one or the other, or both. And focus on the higher-level policy: to the end user, it doesn't matter where the proxies are or how many there are.
Maybe it matters to the platform owner, and to the person paying the bills, but the end user cares that the policies are implemented, and implemented consistently and correctly, across their application workloads. Again, like I said at the beginning, this is an area at Solo.io that we've been very interested in; we've built our products around it and built a lot of educational material around it. Please go check it out: we do workshops and offer certifications, all free, by the way. We're more interested in educating the market than making money on training, so all of this is free. I also mentioned the products we're working on around connecting and securing services, regardless of what the data plane looks like. So if you're interested in working on things like Cilium, Envoy, Istio, GraphQL, and the operational pieces around them, and how you actually do this in an enterprise, then please come talk to us; we're hiring, and there's a lot of opportunity to work on, so it's a good place if you're interested in those technologies. We're running up against the clock here, so I do want to say thank you to everyone who joined live, and definitely feel free to reach out if you're watching the recording at some point; I'm always happy to answer questions. Now let's take a look, as we do have some; please use the Q&A if you'd like to ask a question. Some of these are comments about the demo. The first question was: is traffic encrypted between a sidecar and the pod as well? This is about the sidecar approach. When you inject the sidecar into the application pod, there's a link between the app and the sidecar; iptables rules, or the CNI, are used to redirect traffic, so when the app tries to talk outside the pod, to a different application, it just goes through the proxy. The proxy then decides how it will route the traffic and do load balancing, and it creates a mutual TLS connection to those upstream services. So from the proxy to the other side, that is encrypted with mutual TLS. The part inside the pod, between the app and the proxy, is not encrypted. Now, those both live on the same node, the same host, and they actually exist in the same network namespace, so from outside the pod, from the node's perspective, that traffic is somewhat shielded. But to answer the question: that link is not encrypted. Someone asked: why should we implement end-user authentication using Istio, and what are the benefits? By end-user authentication I assume you mean the person, the end user, who initiated the API calls that then cause services to communicate with each other. Because there is also authentication of the services themselves: service A talks to service B, and service A might be calling service B on behalf of some user. Service A usually isn't talking to service B because it has a life of its own and is simply interested in service B; something initiated those calls.
So when you're thinking about whether a request from service A to service B should continue, it's not just based on the identities of the services, and not just on their location or what boundary they live in; you probably want to take into account things like who initiated this call, and why this user is calling, or attempting to call, a particular service. From that perspective, Istio, the service mesh, is on the data path and can see the credential material that represents the end user. A lot of the time the application needs to know this too, but if we can get some level of authentication and authorization in the network, we can offload some of the things the application has to do, which makes the applications easier to reason about and write. So Istio, or the service mesh in general, this application networking layer, having awareness of the end user is extremely useful and valuable for accurate security policy enforcement. Somebody asked if we'll be sharing the demo and slides: absolutely. I actually want to go back and figure out what happened with the Istio demo; that stuff is usually pretty rock solid, but I must have misconfigured something. But yes, we will publish the demo and the slides. So I appreciate, again, your attendance. Thank you, Linux Foundation; thank you, Solo, for sponsoring and organizing; and reach out, like I said, if you have any questions. Thanks. Thanks again for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you will join us for future webinars, and have a wonderful day.