Awesome, thank you so much for that intro. I'm really looking forward to delivering this webinar. The topic of compliance can be pretty boring to some, and zero trust is kind of a marketing buzzword these days, but some of the technologies we can use to implement a zero trust environment, and to solve a lot of the compliance challenges we see around networking, are pretty exciting. So we're going to start with an overview of the landscape: what is trust, if we're going to talk about zero trust? I know webinars like this draw a wide distribution of audience members, some people interested in the advanced stuff and some brand new and just getting into this, so we'll try to set the context for everyone and then get into it. My name is Christian Posta. I'm a global field CTO at a company called Solo.io, and what we work on is service mesh technology. I've been involved with the service mesh ecosystem for quite a long time now, working with customers and large organizations who are deploying things like microservices, containers, and Kubernetes. My background is in connecting systems through integration and messaging, and distributed systems generally. I've written a few books on the topics we'll be talking about today, including Istio in Action, which was published about a year ago. It's still very relevant and up to date, mostly because the project has become very mature and things aren't changing very much. However, one area of the technology that is undergoing innovation right now is the data plane, and Lin Sun and I recently co-authored a book, published back in October, on basically the topic we're going to discuss today, so I won't go into too much of that right now.
At Solo, like I said, we focus on connecting and securing systems deployed in the enterprise. And as some of you on this webinar know, our enterprises are not all that cut and clean and nice. There's a lot of "we do it this way because that's how we've always done it," a lot of "we're doing it this way because that's what security told us we have to do," or a whole combination of reasons. What we do is offer a solution to problems that crop up as people adopt modern platforms, certainly around service connectivity and security, and that's why we'll be talking about some of these topics today. We base our platforms and technology on open source projects like Istio, the service mesh we'll be talking about today, and things like eBPF, which we can use to implement and improve the behavior of some of the networking technologies we're using. At Solo we've been working with a large number of customers around the world, very successfully, to solve challenges around observability, security, and connectivity. We're certainly leaders in some of these open source projects, and we have a lot of insight into how to be successful with these technologies. So let's get back to the meat of the webinar: compliance, zero trust, and what technologies we might use to implement and solve for some of these challenges. If we're going to talk about zero trust, or no trust, then we should talk about what trust is. If you pull up a dictionary definition, it's fairly consistent with what we want to talk about here.
Trust is about representing yourself, or some entity, and credibly understanding that the entity is going to do what it says it's going to do, that there's credibility in that entity. Based on this trust, it might get access to certain data, certain services, certain resources, as we'll call them more generically. In organizations today, or maybe more classically, that kind of over-trusting leads to scenarios that are not very positive for those companies: you see things like security breaches of customer data and financial data, and unfortunately this happens quite a lot. A big part of how that trust in a distributed information system is created is through the classic approach of applying security at an organizational boundary, often known as perimeter-based security: if we just stand up big walls around our organization and our network, then we'll keep the bad actors out. This is obviously a simplification, but you'll see there is a path for traffic to come into an organization through various firewalls, perimeter zones like demilitarized zones facing the internet, corporate internal networks, maybe VPNs established into those corporate networks. And if you're a developer building services and APIs, you're probably familiar with some of the complexity it takes to get access to data and databases. That's for a noble reason, at least: to secure the data and align with compliance, because we want to keep bad actors from gaining access to sensitive data, whether that's customer data, internal financial data, or other sensitive information. Now, going a level deeper.
This may resonate with some folks, certainly those working at large enterprises that rely on DMZ security, that rely heavily on the firewalling appliances that most likely live in those environments, and on how the network and security have been constructed to align with compliance and regulatory oversight. You also see things like gateways at the application layer, API gateways that handle traffic coming in from the internet, or even just traffic internal to the organization. What you see here is that traffic is segmented. The DMZ can't directly talk to, say, the core databases or mainframes, but certain layers can. If you look at each individual layer, though, there is a lot of trust within it: we're assuming that if you're in one of those layers, you might have access to the other services, components, and resources within that layer. And maybe if you can somehow impersonate a resource in that layer, you can get to some of the other layers as well. So that's where we want to rethink, or at least understand, what trust we're giving to a particular part of the system, and what the blast radius is, what can happen in those scenarios. Because it's not just the outside trying to get in. There are misconfigurations. There are bad policies. There are scenarios where a perfectly good, running API is sharing too much information. And there are certainly scenarios where bad actors don't breach or somehow overcome the firewall; they're invited in, through phishing and social engineering type attacks, and then they're just there. So the question isn't just how we keep the bad actors out.
It's: once they get in, how do we limit their access and their lateral movement? The trust that's inherent in the system needs to be scoped down so that these bad actors can't take advantage of it. Now, the last couple of diagrams we've looked at consider one organization in its most simplistic form. In reality, as these organizations modernize their technology, they look for ways to move quicker and experiment; they adopt cloud services. They want to build a platform that lets them go faster, so they're using containers, cloud automation, CI/CD built around it, and so on. We see a natural evolution of the technology stack: some running on premises, probably a lot running on premises, and things moving to a public cloud, maybe multiple public clouds. So this idea of "here's the perimeter, and this is what makes up our corporate network" is obviously dissolving. The applications are being deployed onto cloud networks, and the organization doesn't own those networks; it's somebody else's infrastructure that we're renting. So now that corporate network is highly distributed, and who owns it? What is the security? If you want to install firewalls and all of that, there's a lot more that goes into making it work, and a lot more areas where it can fall short. Then throw on top of that the business functions these organizations exist for, whether that's financial companies, health insurance and other healthcare companies, or retailers handling financial transactions, and so on.
And federal agencies. These are governed by regulations, laws, and rules that they have to, and should, hold closely: guarding and securing the sensitive information that gets shared with them, whether that's customer information, financial information, or other data. There are various compliance restrictions and, like I said, regulatory oversight and audits that happen to ensure these standards are upheld. We could look at each one of them; they're not all that exciting, as I mentioned in the beginning. They all boil down to what would seem like common-sense things that are not all that common: maintaining a secure network; restricting access to sensitive information; when vulnerabilities are discovered, tracking and understanding that they exist, paying attention, and potentially working with governing bodies or vendors or partners who can help; and implementing strong networking and access controls around this data so that not just anyone can access it. What we've seen is that the implementation ends up looking something like this. But as I also mentioned, as we go to highly dynamic environments, spinning up a VM on the fly, tearing it down, spinning up containers on the fly, scheduling them across multiple machines, maybe across multiple clusters and multiple clouds, and just the fact that you're going to a public cloud, all of these things make the classic way of solving these problems by establishing boundaries and perimeters a lot less impactful, because those approaches don't take into account the inherent trust that gets placed inside those boundaries. And that's where this idea of zero trust comes into the picture. What we want to do is restrict trust down to the smallest possible scope where it's granted.
We assume that just because you're inside one of these perimeters or boundaries doesn't mean you're a good actor or that you should be trusted. In other words, we want to constantly assume things are on the public internet: everything's hostile, everything's coming to get you. When services need to communicate with each other, or when an API or an application needs to share data, we should prove, on demand, that the actor making the request is trustworthy. And we want to keep doing that. We want to scrutinize all of the access, log all of the access, and be able to go back and see exactly what has happened. Like I said, we want to eliminate trust islands; we want to scope down who we trust, and where we trust them, as far as possible. So if we come back and look at that very simplified, maybe oversimplified, diagram: we want traffic, as it comes in, to be authenticated, so we know who it is, and to be authorized, so based on who it is and what services are being called, we know what they are allowed to access. And we want to do that dynamically, in a way that holds up. You can see in this diagram that in the past we might have had an API gateway establishing some of those properties, authentication and authorization, in one spot. We don't want to just trust that because the traffic makes it to that gateway, it's all good. We don't want lateral movement. So what we want to do is continuously and dynamically enforce the same security properties on any request, on any service-to-service access, even if that means spanning multiple clusters and different workload types: VMs, containers, functions as a service.
We want to uphold these principles of zero trust regardless of where the applications are running. Now, there's quite a lot of literature on this that goes into more detail: the NIST standards around zero trust, the Google papers, and some very early papers from, I think, the Department of Defense that discussed this even 20 years ago. So there are a lot of good, I would call them academic, papers on these things, but when it comes down to how you actually implement this, there are a lot of opinions. And I would argue that until recently it's been difficult to implement this consistently and in a performant way, because the technology wasn't ready for it. If we boil it down a little more and talk in specifics, what we want to establish is that communication in the network, to any resource, is secured, regardless of what island or firewall or boundary or perimeter has been established. All communication, even if it's inside, especially if it's inside, one of those perimeters. Access is determined dynamically. It is determined per session, or, to think more concretely, for every request that's coming into a service, not just on opening a connection. Firewalls typically look at whether a layer 3/layer 4 connection can be established. We want to go deeper than that: can a request be sent, can it be received, and can it be serviced by a resource? And since we're determining this dynamically, it can change from request to request, so we have fine-grained control over where and when we're establishing trust. All access is authenticated and authorized. And this access is tracked; it's logged and audited, so we can go back and review it. Based on that, we can also dynamically control access. So, if we're seeing that a particular user in the system is requesting sensitive data.
And it's unusual, let's say, that it would request this data that many times, pick a number, say 20 times. Then, after about 10, 15, 20 requests, the system will start to notice and can automatically revoke some of the access that had already been given. If you look, at least abstractly, at how zero trust networking and these types of architectures are implemented, it's usually with some kind of coordination between a policy enforcement point, which handles traffic on behalf of a resource, and a policy engine, together with some administrator driving the policy changes that might be needed. These policy enforcement points sit in line with the request, and decisions are made in line and dynamically, based on certain policies that have been set, different parameters that again could be dynamic, and various attributes and context of the request that can be evaluated. Now, again, this is a very abstract way of looking at how you would implement this, but this is certainly an area where a service mesh, a technology that's specifically built for these types of dynamic environments, can help. There's a lot of overlap, in terms of implementation, between a service mesh and a model like this. So from here, we'll take a look at how a service mesh might fit, but we also want to see how service mesh is evolving, and how we can refine and improve the implementation of service mesh so that we don't end up with policy enforcement points scattered and littered all over the place unnecessarily. So we have to start with: what is a service mesh? I'll go through this quickly, but the idea behind the service mesh is to solve some of the challenges that come up when applications want to communicate with each other. Security is a big part of that.
When services communicate with each other, we're usually looking at APIs, service-to-service communication, and thinking along the lines of requests and messages being sent. They have to solve the networking challenges we see in service-to-service communication: service discovery, load balancing, resilience. When services are not available, can we fall back, can we retry? At the very least, we should be thinking about things like timeouts. When you start building distributed systems, you have to think quite differently than if you were building a monolithic application, because the network is such a critical player in the implementation. So we solve for things like resilience, telemetry collection, security, and so on. Especially if we have a lot of different programming languages, we need a way to solve these problems consistently, without worrying about maintaining certain libraries or frameworks, and without all the overhead of governance: making sure developers use the right languages and frameworks the correct way, patching CVEs, fixing bugs, and so on. Across an organization, that becomes a large amount of investment. So with the service mesh, we implement those networking concerns, connectivity, security, reliability, and so on, as an agent that lives with the application instance. This agent gets deployed with the application instance and handles these things on its behalf. And if you're paying attention, you'll notice that this agent becomes, or can become, an enforcement point for certain networking and security policies.
So in the service mesh we have these little agents. They run with the applications, they act as proxies, they act as policy enforcement points, and they are all remotely controlled and configured by a component we call the control plane. The control plane connects to these various agents and dynamically gives them the policies they're supposed to be enforcing. The agents can also reach back out to elements of the control plane to more dynamically determine what a policy should be, and policies are usually: should this request continue, or should it not? That determination is based on who's calling the service, on behalf of what user, and other attributes like location, time, or claims the user might have. So you can start to see that the service mesh has a lot of the elements and the architecture for implementing the zero trust networking principles we're after. Here's maybe a more simplified example, where one application wants to talk to another. The request from the app on the left side, when it talks over the network, will be forced through this agent or proxy, which then does things like service discovery, load balancing, timeouts, retries, and telemetry collection. It can also do things like originating TLS, or expecting that the transport will require mutual TLS. It can coordinate with the control plane, get the correct certificates, and start to encrypt and secure the channel for service-to-service communication. On the right-hand side, the agent or proxy for that app can do the same thing. It can expect mutual TLS, it can enforce policies about whether the service on the left can even talk to this service, and it can evaluate the requests coming through and dynamically determine whether they're allowed.
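As a concrete sketch of what "requiring mutual TLS" looks like in Istio, a mesh-wide PeerAuthentication resource placed in the Istio root namespace enforces it for every workload (this assumes the default root namespace, istio-system):

```shell
# Require mutual TLS for every workload in the mesh by applying a
# mesh-wide PeerAuthentication in the Istio root namespace.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT   # plaintext connections are rejected
EOF
```

With this in place, the enforcement point on the receiving side rejects any connection that does not present a valid mesh-issued certificate.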
And fundamentally, the service mesh can also establish workload identity for each of these services, so we know without a doubt that the service on the left, let's call it foo, is the foo service, and the service on the right, called bar, is that service. With that identity, we can write networking policies based on identity, not on IP addresses and network location, but on actual workload identity. Then we get an encrypted tunnel between these services, so we can't have man-in-the-middle attacks, we can't have eavesdroppers trying to capture the traffic to replay it. And we can write fine-grained authorization policies about how that traffic is allowed to continue. I was describing a service mesh generically, but at Solo, at least, we are heavily invested in Istio and the Istio community. We see it as the most dominant and most mature service mesh. That observation comes from, first of all, a period of neutrality: from about 2017 to 2019, we were sitting and watching what was happening in the service mesh space, because there were a lot of different options and it wasn't clear which would take off. By around 2019, through a lot of the work we were doing in the community, we saw that it was pretty clear Istio was becoming very well adopted. A lot of those thorny edge cases and use cases we would see in the enterprise were being softened, and we were able to make it fit. Now it's very capable of handling advanced use cases, multi-cluster, VMs, all of that. And we saw last fall, in 2022, that Istio officially made it into the CNCF. It has a regular release cadence, a lot of contributions from the community, very diverse involvement, and continues to see massive adoption.
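To make that identity-based policy concrete, here's a sketch of an Istio AuthorizationPolicy that allows only foo's workload identity to call bar; the namespace and service account names are assumptions for this example:

```shell
# Allow only the foo workload's identity to call bar; once an ALLOW
# policy matches a workload, all other identities are denied by default.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-foo-to-bar
  namespace: default
spec:
  selector:
    matchLabels:
      app: bar              # policy attaches to the bar workload
  action: ALLOW
  rules:
  - from:
    - source:
        # SPIFFE-style principal derived from foo's service account,
        # not from an IP address or network location
        principals: ["cluster.local/ns/default/sa/foo"]
EOF
```

Note that the rule matches on the caller's mTLS-verified principal, which is exactly the workload identity the mesh established, rather than anything about where the traffic came from on the network.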
We talked a little bit about the model that service mesh presents, and we can see that it has the components, the pieces in the right places, to implement a zero trust networking architecture. However, what we're going to talk about for the rest of this webinar is: can this be improved? Are there drawbacks to sending a bunch of different agents and proxies out there, and can that be improved? The first thing I'll point you to is a blog we wrote, a year and a half ago now, that represented where we at Solo were looking in terms of service mesh: some of the innovation that was possible, and the challenges we were seeing our customers run into. Service mesh adoption is primarily led by some of these security and compliance requirements, but once you start operationalizing the mesh, what are the friction points? What are the issues people run into? So we wrote this blog, and it outlined four different areas our customer base and user base was very interested in: how we can get better resource overhead; how we can isolate feature usage and configuration between different teams a little better; security; and ease of operations and upgrading. These were the variables we were looking through, and we asked ourselves, for some of the challenges that crop up, is there a different model where we can keep the good parts of the service mesh and alleviate some of the challenging parts? So we looked at different models. Maybe sidecars aren't the only approach; maybe we could use something like a shared agent or shared proxy. That will get you much better resource overhead, and potentially less upgrade impact; it's easier to upgrade if there's only one.
But we saw these as almost two extremes: one that is extremely fine-grained and gets great feature isolation and good security granularity, but is a little more difficult to upgrade; and the other, where the benefits and the pros and cons are flipped. Is there something in the middle, an approach we can take that gives us the best of both worlds? This is what we talked about in that blog a year and a half ago. We were actually working on this; we were actually building it at the time. And through our work in the community, Solo is a prominent leader in the Istio community, a lot of us here at Solo were part of the founding of the Istio community, and we have positions on the Technical Oversight Committee, the Steering Committee, and the various working groups, as well as a dedicated engineering team working on Istio directly. So we saw and worked with our partners, understood what was happening, and came across the fact that Google was working on something very similar. So we ended up collaborating together, proving out whether this even makes any sense. And then in September 2022, we announced that we would be contributing this work, the beginnings of this work, to the Istio open source community. What it is, is an approach to running Istio in a sidecar-less mode that allows you to take advantage of the capabilities of the service mesh, including the properties around zero trust that we want, and do it in a way that simplifies operations. Things like upgrades, extremely important; things like onboarding applications into the mesh, extremely important.
Some of the side benefits are things like reducing costs: we now run fewer proxies, we have to think a little less about allocating resources for those proxies, and in certain areas we can improve performance of the mesh, especially if you're just leveraging the mesh for workload identity and networking policy, because we don't have to take on the full capabilities of a layer 7 proxy, which is what the Istio data plane does today in the sidecar approach. So there are a handful of what I would call ancillary benefits, but also a handful of extremely beneficial operational improvements we can get out of a model like this. We saw in previous diagrams that the service mesh is implemented in terms of an agent, or proxy, that gets co-located with the application instances. In the Istio ambient data plane, we take those proxies out of the application. We don't take them out of the request path; they're still in the request path. But what we end up doing is using slightly lower-level networking control to force traffic through what we're calling the secure overlay layer. This layer is made up of agents that work closely with the CNI to implement the zero trust networking behaviors we want from the service mesh. These components, the ztunnels in this diagram, take on the responsibility of assigning workload identity to the workloads on the node, originating and terminating TLS for mutual TLS, and enforcing networking policies based on those identities. Now, a service mesh needs to be able to do more than just establish or not establish connections; we need to be able to understand what's in the request, what tokens or headers or claims are part of it.
So we need layer 7 understanding of what's happening in the service mesh as well, and what we've done is separate that out. If you look at a request path and you don't need any layer 7 introspection, the traffic can stay in the secure overlay layer. That is quite a bit faster than having to go through any layer 7 proxies, the sidecar approach included, and we still get the zero trust properties we need. If you want to include more sophisticated layer 7 policies, then that traffic will first go through the secure overlay layer and then make it to a workload-specific layer 7 proxy, or a waypoint proxy, as we call it here. That waypoint handles any sophisticated layer 7-aware policies that need to apply to a particular workload. In this case, service A is talking to service B, and maybe we're checking tokens, checking claims or other opaque tokens, verifying that they're there and that they haven't been tampered with, and then making decisions about what APIs and endpoints can be reached on service B. So ambient mesh does bring a lot of optimizations, a lot of benefits, to the Istio project. We released it in September, and actually just this last Friday it moved into the Istio mainline of the code, so it will be in the next release. It's well on its way, and certainly here on the Solo side we're working on making it production-ready very soon. It comes with a lot of benefits. We don't need to alter our deployments to inject sidecars or agents, and you sidestep some of the challenges that come up when you do have to do that. The folks on the webinar who are familiar with injecting a sidecar in Kubernetes know there are some challenges there.
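Deploying a waypoint for a workload has been done with an experimental istioctl subcommand; the exact flags have shifted across early ambient releases, so treat this as an illustrative sketch rather than the definitive syntax, and the service account name is an assumption:

```shell
# Deploy a waypoint proxy so layer 7 policies can be enforced for
# workloads running as the "bar" service account in "default"
# (experimental command; flags vary by Istio release).
istioctl x waypoint apply -n default --service-account bar
```

Once the waypoint is in place, layer 7-aware authorization rules, matching on HTTP methods, paths, or request claims, are enforced there, while identity and mTLS remain the job of the secure overlay layer.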
There are race conditions between the containers; Job resources don't play very well with sidecar containers; and if you have your own init containers, there are some collisions possible there. If you don't need to inject sidecars, you alleviate some of those cases. Things like upgrading and patching of the service mesh infrastructure itself are now done out of band of the application. There was always a little friction when you took a piece of infrastructure and jammed it in with the application, because platforms and infrastructure need to be serviced, maintained, and patched, and they probably do that on a different cadence than the applications, so they probably need to be decoupled. Here in the ambient data plane for Istio, upgrading and patching are transparent to the application, and we can do that with much more minimal oversight and governance, and less focus on coordinating which apps can be restarted and so on. In other areas, we maintain a lot of the foundations of zero trust network security that we have with the sidecar-based service mesh, and we do see areas of improved performance, especially when you're adopting the service mesh just for security. So that is where I'll end the presentation. I'll do a quick demo here, and then I'll come back and we should have a few minutes for questions. Okay, so the first thing you'll see is that we have a Kubernetes cluster running some workloads. Keeping this about as basic as you can get it: hello world. We have a sleep application, and we have a couple of hello world applications deployed on different nodes in the cluster; we can see there are at least two different nodes here. And if we go into the sleep application, we can call hello world on port 5000.
And we see we get responses, and those responses are load balanced across those two different instances of Hello World. On the bottom pane, you can see that we have our default namespace and some of the kube-specific cluster namespaces, but we don't have Istio installed here. If we go into default, we can see that that's where our sleep and Hello World workloads are. Now we're going to install Istio, and we're going to set the profile equal to ambient, because we want to enable the ambient data plane for Istio. So here we're going to give that a few moments to install. What this is going to do is install the Istio pieces you're probably familiar with: the control plane will be installed, and I believe it'll install an ingress gateway — yes, it looks like it is — along with the components that are necessary to run in ambient mode. So if we give that a second and come back here, we should now see istio-system among the namespaces listed, so Istio has been installed. Clicking into it, we can see istiod, which is our control plane component, the ingress gateway, and a few components that make up the secure overlay layer, including the ztunnel. Now, before we make a few calls, let's go into the default namespace and take a look at the sleep app. The application here is running on the node ambient-worker. Okay, so if we come back and look at our diagrams here, requests that go through the service mesh will first go through the secure overlay layer, which is the ztunnel component, and then eventually to their destination. So what I'm going to do is take a look at the ztunnel component that's running on this node, ambient-worker. If we come here to the istio-system namespace, we find ztunnel, and we'll find the ztunnel instance that's running on the ambient-worker node.
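The install step in the demo boils down to a single command. A minimal sketch, assuming an `istioctl` build that ships the ambient profile (available in experimental builds around the time of this talk and in mainline releases since):

```shell
# Install Istio with the ambient data plane profile.
istioctl install --set profile=ambient -y

# Verify the install: istiod, the ingress gateway, the ztunnel DaemonSet,
# and the CNI agent should all come up in istio-system.
kubectl get pods -n istio-system
```

The ztunnel runs as a per-node DaemonSet, which is why the demo later picks out the specific ztunnel pod on the ambient-worker node.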
We'll take a look at its logs and full-screen that. And if we make some calls here, we'll see we're not seeing any traffic go through, because we haven't told Istio that these applications should be part of the mesh. In the sidecar approach, what you do is install the sidecar, but we don't want to install the sidecar. We want to include those workloads in the mesh some other, more dynamic way, and the way we're going to do that is by labeling our workloads with a particular label that says, hey, you're part of the ambient service mesh. If we do that and come back, then when we call Hello World we should see access logs going through the ztunnel component on the bottom pane here. We should also see that the traffic is starting to use mutual TLS and is verifying things like SPIFFE IDs and workload identities. So by calling Hello World, we should see access logging down on the bottom that confirms that indeed our traffic is going through the ztunnel now, and we can see from what the logs look like that we are going through the components that enforce mutual TLS. I'm running a little bit short on time — I could pull up tcpdump or Wireshark to show you that it really is mutual TLS, but I can point you to another demo that does that. Okay, so that's good: we get mutual TLS, and we can enforce traffic policies now because things are going through the service mesh. So what if we wanted to take advantage of some of the layer seven capabilities that a service mesh brings into the picture? Maybe we want to test how our security posture changes or behaves when things start to fail — not the golden path, the expected path, but when things start to become chaotic. How does our security posture behave there?
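The "labeling" step described above is a one-liner against the namespace; no pod restarts or sidecar injection are involved. A sketch, assuming the demo's workloads live in `default`:

```shell
# Enroll every workload in the default namespace into the ambient mesh.
# ztunnel starts intercepting and mTLS-ing their traffic dynamically.
kubectl label namespace default istio.io/dataplane-mode=ambient

# To take the workloads back out of the mesh later, remove the label:
#   kubectl label namespace default istio.io/dataplane-mode-
```

Because enrollment is just a label, joining and leaving the mesh never touches the application deployments themselves, which is the point the demo makes at the end.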
In Istio and the service mesh, what we can do are layer seven policies like checking for JWTs, injecting faults, controlling traffic routing, and so on. And in ambient mesh, the way we do this is by deploying the waypoint proxy. So let's do that first. I'll deploy the waypoint proxy, and that proxy is this right here, because we want to end up seeing traffic go through the waypoint proxy and have layer seven capabilities applied. And if we come here, we should see a waypoint proxy has now been deployed that represents Hello World; it's been running for 23 seconds. Now what we want to do is create a layer seven policy that injects a delay whenever we call the Hello World service, and the delay is set to five seconds. So if I add this, come back to my applications, and call Hello World — cross our fingers — we should see traffic go through ztunnel and see the traffic delayed by five seconds, because it's now going through the waypoint proxy. One, two, three, four, five. Right. And so now we get the full behavior of fault injection and layer seven that we expect from a service mesh. But again, if you come back and look at my workloads, I don't have any sidecar proxies running; in fact, these workloads have been running for quite a long time. The only thing we've deployed is a single proxy that represents Hello World, and it still gives us the layer seven granularity that we need for enforcing certain networking or security policies. Now if we decide that we don't want the service mesh anymore and we want to remove it, then we will remove the data plane label, and then we will uninstall Istio using the normal approach to uninstalling Istio.
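The waypoint deployment and the five-second delay policy from the demo can be sketched as follows. The `istioctl` subcommand for waypoints has moved around between releases (it lived under `experimental` in early ambient builds), so treat the first command as version-dependent; the VirtualService is standard Istio fault injection:

```shell
# Deploy a waypoint proxy fronting the helloworld workloads
# (subcommand location varies by istioctl version).
istioctl experimental waypoint apply --service-account helloworld

# Layer seven policy: delay 100% of calls to helloworld by 5 seconds.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld
  namespace: default
spec:
  hosts:
  - helloworld
  http:
  - fault:
      delay:
        percentage:
          value: 100
        fixedDelay: 5s
    route:
    - destination:
        host: helloworld
EOF
```

Traffic still transits ztunnel for mTLS first, then hops to the waypoint, which is where the delay (and any other layer seven rule) is applied — matching the "one, two, three, four, five" pause seen in the demo.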
And then if we come back here, we'll see the istio-system namespace should get removed — I mean, I didn't delete it directly; I think I have to run the uninstall — and we'll see the istio-system namespace go away. Our application workloads are still up and running, like they've been running, and we can still call Hello World. Now we don't take advantage of any of the service mesh capabilities, but the applications haven't been touched. So that's the end of the demo. Let me leave you with a few resources. We've written a white paper that goes into more detail about how this all comes together, specifically around Istio ambient mesh. We've built some other content around how gateway and mesh technology combined can provide a fairly sophisticated set of zero trust controls. And the last bit I didn't go into is the products here at solo, but we actually tie this in deeper into the CNI, into the lower-layer network. So you can think of solo as an enterprise service mesh that is built specifically to be more secure than what people see in the community or from any of our competitors. Take a look at academy.solo.io as a free resource for getting hands-on with this technology, and obviously the blogs, the books, all the good stuff — and then join us in the Istio community as well if you'd like to get involved. So that's all I have for today. I know we've got a couple minutes left. Somebody asked about the slide deck: yes, I'll make these available to the organizers here, and I'll also tweet them from my Twitter account, which should be shown here. So I see a question here: can the ambient mesh be integrated with a workload identity provider like SPIRE?
And the answer is yes. Actually, we've done a lot of work with our customers to integrate SPIRE for better workload attestation and identity issuance, and Istio ambient is no different. The way the secure overlay layer gets workload identity works similarly enough to how the sidecars do it, so being able to plug SPIRE in is fairly straightforward and something that we're working on. Next question: can ambient mesh function across clusters similar to a sidecar mesh? This is basically asking, I believe, about multi-cluster support — routing, failover, that kind of stuff. And the answer is yes; that is something we are actively working on right now, and I think we do have some working POCs around that. When we announced Istio ambient mesh back in September, we knew that there were a couple gaps between what the ambient approach could do and what the sidecar approach can do, and certainly here at solo we've been working on closing those gaps. We're actually very close to being there on multi-cluster, cross-cluster, failover, etc. Most of those use cases are very core and near and dear to what we do here at solo, so yes, ambient mesh will definitely be supporting that. Thank you for your question. Another question that I get — and I'm happy to take any others, or we can do it offline too — is: if you're already running sidecars, can you introduce ambient? And along those lines, going forward, what will be the default for Istio? The answer to the first part of that question is definitely yes: the ambient approach and the sidecar approach are interoperable with each other, and this is a path that we will continue to document and demonstrate for transitioning from a sidecar to a sidecar-less approach.
Longer term, ambient will likely be the default data plane in Istio, and we would reserve the sidecar approach — which is not going away — for use cases where you need a little bit more control, dedicated resources, and policy enforcement. Usually those would be edge cases, but we certainly don't expect the sidecar approach to go away. So I know that's my time, and I don't want to overstay my welcome here, but certainly reach out to me. I'll post the slides and make them available to the organizers, and I appreciate you all for joining me on this webinar. Thank you so much, Christian, for your time today, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars, and have a wonderful day.