Alright everybody, good to go. Thumbs up. Everybody's still alive, still breathing, still running a little bit on fumes from the coffee. I know I am. I had a Celsius about an hour and a half ago and I'm beginning to wonder about the wisdom of that. I know this is the last talk of the day. We often like to talk about how we're going to make Istio boring. I'm going to try to keep making Istio boring as interesting as possible, keeping you guys awake, but also not keeping you from beer, your after activities and everything else that we know, after a very long day at KubeCon, starts to look extremely appealing, like getting out of this huge building. I'm Louis Ryan. I'm the CTO at Solo and this is... I'm John Howard. I'm a software engineer at Google. Alright, so we're going to talk Istio Past, Present and Future, which basically means we're going to talk about some things that have gone on recently, things that are going on right now, and the roadmap. Hopefully that's pretty straightforward. So let's keep it moving along. Yep, the big news from taking a look back over the past year is our CNCF graduation. So we... Yep. About time. Yes, it's been a very long journey, I think, would be one way to describe it, of getting into the CNCF and then going through the graduation process. So you know, we've been around at the CNCF conference all week long around all these other great projects. So we're pretty excited as a project to be in the select few that are considered graduated. It's a long time coming. It's like Homer's Odyssey and the Iliad rolled into one. Okay, yes. So obviously, you know, getting into the CNCF, it's a big deal. It didn't change a line of code inside Istio, but it has a big impact on the project, right, on how you all think about Istio, about how people who want to take a dependency on the project to use it in production or build a business around it or anything like that, right. 
The stamp of being a CNCF project, a graduated project, just strengthens the community around the open source, and we're just really happy to see this happen. Absolutely. So, you know, Istio has been around for, I think we said, six and a half years now. It's used in production by tons of companies. I've been talking to all sorts of folks this past week and they're all telling me about all the cool ways they're deploying Istio in their production environments. But over the past year, we've really been looking at: you know, we've had some success. We've seen a lot of adoption, but can we do better? We don't want to stagnate where we are. Can we meet more users' needs? Can we expand our reach? Can we do it at a lower cost, etc.? So we've really been taking a look at the broader ecosystem, where we fit in it, and trying to answer this question. Be boring, but be better. Those two things are not incompatible. Very true. So our first thought when we went down this journey was: okay, let's go rewrite some stuff and make some improvements, optimize things a bit. Really, that just makes us, at most, incrementally better, right? Maybe we want to be a lot better, right? We need more than just a few optimizations here and there. We need to rethink things from the ground up on where we want service mesh to go. Yeah. And as software engineers, our first instinct when we want better software is: let's go rewrite it. You know, if you've been doing this for a long time, getting past your initial knee-jerk reaction and thinking a little bit more systemically is what you want to do. And so we're looking for bigger improvements than we would get from just incremental things like, yeah, let's go rewrite this in Rust or that thing in WebAssembly or something like that. We want to do a lot better. Yep. So, you know, one obvious place to look: of course, I'm sure you're all familiar, Istio runs a sidecar model. 
There are a lot of proxies running everywhere, right? Maybe we can share these. Maybe that's one way we make these order-of-magnitude improvements, right? Sharing is good. Sharing is caring. We love to share. So we say, oh, let's get rid of the sidecar. Let's share Envoys. Let's put them all in one place. We use a lot of Envoys. Everything's great, right? Yeah. So when you think about sharing, right, like we're talking about critical infrastructure, sharing means some things, right? Sharing means tenancy. And so, right, when you build critical systems, that should give you pause. As we're all aware, complexity is a problem in our business, right? If you're building infrastructure, complexity is something you have to fight really hard against. Complexity causes outages, causes CVEs, causes you to lose your job and your business to disappear. And complexity is the intersection of people and code, right? Code is complexity and people are complexity. And when the two intersect, if there's a lot of both, bad things can happen. So you want to keep complexity down wherever you can, right? And we've seen this systemically in the industry. And Envoy has a lot of code. How much code does Envoy have, John? Too much. But that code serves a purpose, right? That code is important. It's doing useful things. There's just a lot of it. So how do we find ways to share Envoy without increasing our complexity dynamic? We can either reduce the amount of code in Envoy, which is pretty hard, because there are certain things Envoy has to do to process HTTP or to be a load balancer. Or we could look at reducing the number of people involved in the system, right? And be less multi-tenant. And that was our focus in designing ambient mesh, right? So how do we go about sharing Envoy while keeping it single-tenant? And this is the whole design philosophy. 
We want to find a way to put Envoys in the network so they can be shared, right? You can only really share things that are in the network. We want to keep them single-tenant, so then you have to make sure that, for the things they communicate with, you have a very strong understanding of who's communicating on either side of the Envoy. And maybe we can go back to the other one. Sorry, I kind of set that up wrong. And so that means we have to establish trust. We have to establish trust with the Envoy. And, well, we've been talking about doing that for a long time. In Istio, we actually have a way to do that, and that's called mTLS. So we speak mTLS to the Envoy, and the Envoy speaks mTLS to the other side in ambient mesh. And so we know who those parties are, and so we can reason about tenancy. And that allows us to have single-tenant Envoys in the network and allows us to share them. So we've been talking about Ambient for a few minutes here, and for many, many hours in past talks and past KubeCons and blog posts and whatnot. There are a lot of great things that Ambient can bring to the table, lots of things to discuss and improve. I just want to highlight one of them, which is particularly sensitive in these times, which is some of the cost savings that Ambient can bring us. So this was a simulation. I guess we had about 100 namespaces, 1,200 deployments, 800 services or so. And we were sending some load through this and testing: how does this compare between sidecars, Ambient using just the base L4, and full Ambient? So up at the top we have the baseline application. We created this simulated application, which has these rough sizing ergonomics, and it has a certain call depth, and it has all these... So we tried to make it look like a realistic production system. 
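To ground how you actually opt in to that shared, single-tenant data plane: enrollment in ambient mode is a single label on a namespace. A minimal sketch, where the `demo` namespace name is just an invented example:

```yaml
# Hypothetical namespace; the istio.io/dataplane-mode label is what tells
# Istio to redirect this namespace's pod traffic through the node-local ztunnel.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
  labels:
    istio.io/dataplane-mode: ambient
```

No pod restarts are needed for this step; the per-node proxy picks the traffic up transparently, which is exactly the sharing story being described here.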
And if you look at production systems running Kubernetes, we see we're hitting about 30 percent utilization of the allocated CPU in the cluster, which is, for those of you who run Kubernetes in production, not unrealistic. There's quite wide variability in the typical utilization of CPU in a cluster, and often we're over-allocating CPU to ensure that we have redundancy against traffic spikes and other situations. And in this case we were using 17 e2-standard-8 instances to do this. So that gives us a monthly bill using... I've forgotten exactly the billing model, but about $3,500 a month to have that many E2s. Hopefully everybody's familiar with this stuff. John, you want to talk about the sidecar one? Yeah, with sidecars we're putting an Envoy proxy alongside every single pod. So in this case we had, I believe, 1,200 pods, so that means 1,200 Envoy proxies. And keep in mind, even when a proxy is only using a very small amount of resources in that pod, we still have to reserve those resources, so they still contribute to the cost. And it's very hard to know ahead of time how much CPU and memory an application is going to take. And in fact, it changes often. Even if I have low CPU usage sometimes, I may have spikes of requests that I need to handle quickly. And so if I only give a tiny fraction of CPU, the performance tanks. So we have this large problem of resource overhead with sidecars, and so we end up going up to $5,600 a month in this simulation. That's a little bit short of double. Yeah, we added about 12 E2 instances. And you'll notice that the utilization of CPU, obviously this is of the total number of CPUs, has barely changed. And this is one of the problems we run into with sidecars. We, again, have to allocate and reserve CPU just like applications do, and it's not fungible. It has to be reserved, it's tied to the lifecycle of the deployment, and it can't scale up and down independently of the deployment. This is one of the fundamental limitations of the sidecar model. 
And so we have a pretty sizable cost increase, about a 50% cost increase. We didn't do some of the tuning things that in particular would improve the memory allocation, but this is typical of: I've got a Kubernetes cluster, I've got an application, and I just turn Istio on. This is what will happen. So this is obviously set up to discuss the difference we see with Ambient. Maybe, John, you want to talk about L4? Yeah, so one of the nice benefits of Ambient is you kind of have more choices, right? For a lot of users, they just need mTLS everywhere, and then maybe later on, for only some applications, or maybe just slowly over time, they start to digest the full service mesh. So we have these two layers. One is just L4, just the ztunnel layer, and then there's both the ztunnel and the full waypoint proxy. And we can see that in the ztunnel-only case, we have no additional cost and an increased percent of utilization, right? And even once we add the waypoint proxy, we only add one extra machine, you know, about 200 bucks a month. So the overhead is substantially lower than what we see with sidecars. Yeah, so that goal, right, let's do a lot better? That would only have been achievable by sharing things, right? The ztunnel part of Ambient allows us to have a very, very resource-efficient mTLS-only implementation. It does something simple. It does it very efficiently. And because it's per node, right, we don't have the same allocation problem. You can see on L4, right, the CPU utilization went up. There was no change in reservation requirements, right? And we didn't have to increase the number of VMs to accommodate it, right? And that's what we expect to see anywhere, right? If you have a cluster that's running at 98% utilization, one, congratulations, you're a complete unicorn, you don't exist. And two, ztunnel will still fit in it. 
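For reference, the waypoint itself is declared through the Gateway API. A rough sketch of a per-namespace waypoint follows; the names are illustrative, and the exact API version and conventions vary by Istio release (`istioctl` can also generate this for you), so treat it as a sketch rather than a canonical manifest:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: waypoint              # conventional name; anything works
  namespace: demo             # L7 processing is enabled only for this namespace
spec:
  gatewayClassName: istio-waypoint   # Istio's waypoint GatewayClass
  listeners:
  - name: mesh
    port: 15008               # the HBONE port that ztunnel tunnels traffic over
    protocol: HBONE
```

This is how the "one extra machine" granularity above works: a waypoint is just another deployment, scoped to the namespace that wants L7.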
But then the other thing, right: this is like two orders of magnitude cost improvement over the stock sidecar model. When we turn waypoints on, and you can turn waypoints on not across the whole fleet, you can turn them on for a single namespace, if you only want L7 for that namespace and you don't have to turn it on for the other ones, we still have an order of magnitude cost reduction. And those are cost reductions you can only get by fundamentally changing the architecture of the system, not by going and rewriting a bunch of code. And, you know, this is obviously the big thing going on in Ambient and in Istio right now, and, you know, the proof is kind of in the pudding, right? These are somewhat economically constrained times, right? Some of you had to argue with your managers to get budget to come to this thing. Hopefully some of this will allow you to come to the next KubeCon by cutting your bill. So Ambient is really great, really exciting, but it's not the only thing that we've been working on over the past year. One of the other big areas is our collaboration with the Kubernetes community on the Gateway API. If you haven't heard of the Gateway API, it's a new API out of SIG Network to redefine traffic routing and management in Kubernetes APIs. And this is really the only way I can visually show an API: it's this canonical diagram from the docs. Thanks to whoever made this. Literally you have to wear a hard hat to use this API. It's part of the requirements. I blame Rob for that one. Yeah, so, you know, the Gateway API is really kind of the next step in the evolution of APIs for networking. And there are really two main benefits that I see it bringing to the table. First, we've learned a lot over the years with the Istio APIs on what people get hung up on, what's kind of hard to understand, what functionality people need, and how to better model that. 
We've taken all those learnings and brought them to the Kubernetes community, as have some other projects as well. And so there are a lot of improvements we can make when we do something the next time and learn from our mistakes. The other thing is that, of course, Istio has these Gateway and VirtualService routing APIs, but they're mostly isolated to just Istio. The Gateway API is a huge project that has, I think, probably 20 implementations now across the ecosystem. Everyone's kind of aligning on this. There's tons of documentation, videos, and learning amongst them. So there's a really big ecosystem around this project, which is really great. Yeah, so for all of you practitioners and users, developing skills in these APIs is going to translate to different infrastructure, different contexts. Standards matter: Kubernetes itself has effectively become a de facto standard. Developing a skill set in that is going to help you develop your career, and it works to everybody's benefit. And if you have skills in other projects using these things and you come to Istio, you'll already know what to do. So that's pretty great. So the Gateway API, I believe, last week just went GA in Kubernetes, so that's a huge milestone for us. And I'm really excited about that. But another thing that Istio has been working on driving, with a few other members, is adding mesh support for this. The original API was designed just for ingress use cases, so we're really excited to be able to use the same API across ingress and mesh and be deeply involved in the community for that. So in the past few months, that's been merged into the core API. So we now have a stable GA ingress API, and mesh is experimental and rapidly approaching graduation as well. Yeah, so if you notice up at the top it says the word GAMMA, that stands for Gateway API for Mesh Management and Administration. Probably the best way to think about it is just: the Gateway API, for mesh. 
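To make the mesh usage concrete: the GAMMA pattern is an HTTPRoute whose parent is a Service rather than a Gateway. A sketch with invented service names, doing a 90/10 traffic split between two backends (the exact `apiVersion` depends on your Gateway API release):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: reviews-split
  namespace: demo
spec:
  parentRefs:
  - group: ""                 # core API group: attach to a Service (the GAMMA pattern)
    kind: Service
    name: reviews
    port: 9080
  rules:
  - backendRefs:
    - name: reviews-v1        # hypothetical backend services
      port: 9080
      weight: 90
    - name: reviews-v2
      port: 9080
      weight: 10
```

The same HTTPRoute kind is used for ingress; only the parent changes, which is the "same API across ingress and mesh" point above.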
Okay, so we're obviously looking back and looking forward. If we look at where the community is today, we have about 600 active contributors doing stuff with Istio in the repos, driving the project forward. That's pretty amazing, right? It's 120 companies, 600 people, and over 200 people have made a contribution for the first time this year. So hopefully some of you newbies are here. Thank you. This is awesome. You know, it really helps drive the project forward. There's a lot of momentum behind Istio, and it continues to grow, and it's just really exciting to see. Even for somebody as old and grizzled as me, who's been working in this space for a long time. Maybe, John, you want to talk about this? Yeah, I mean, we talked about 120 companies, which is amazing. Love to see it. We just want to highlight one as well: Microsoft has recently invested heavily in Istio. So we've been really excited to see, largely as part of our CNCF graduation, the growing ecosystem of people all aligning on Istio as their service mesh solution. Yeah, and I know for users, right, seeing the big clouds get behind it gives you a little bit more warm and fuzzies about choosing a particular open source project. Microsoft was quite a large company the last time I looked. So yeah, we're really happy to have them, and, you know, we are happy to have anybody come and contribute to the community. Everybody is welcome. And, you know, if you're looking to get started, come and talk to us. There are plenty of maintainers here, contributors, TOC members, they're floating around, just come find somebody. Or go to the Istio booth, and, you know, folks like Vassila and John and Justin here, who've been gamely managing it, and Mitch down the back; just come find somebody. You know, on the stats side, obviously we have the contributor numbers; just, you know, as a point of comparison. 
We are the third most active project in the CNCF by PRs, which is pretty amazing to think about when I think about all the other really awesome projects in the CNCF. So first, kudos to Kubernetes, of course, and OpenTelemetry for kicking our ass. But yeah, we feel pretty privileged to be in that kind of rarefied air, but we're going to catch up with Kubernetes someday. And that's never going to happen. Just a quick recap of some other things we've been working on. You know, we've covered a lot of the highlights, but there's a bunch of work going on behind the scenes, especially around taking features we already have and stabilizing them, whether that means improving the documentation, the testing, or just committing to keeping the APIs stable and tested. So just a quick highlight of a few. External authorization has gone to beta. Helm install has gone to beta. Canary upgrades and revision tags, WorkloadGroup, distroless images: these have all gone to beta. We're starting to see newer projects such as dual-stack support moving to experimental. We've been working a lot on enhancing the security of our product, done a formal audit. We've been continually expanding our fuzz testing with some help from the CNCF fuzzing team, and we've expanded our reach a bit with some ARM support. So this is just a quick snippet, but tons of great work going on across the community. Yeah, so progressing APIs: boring, for the win, but really important for people in production. These are all critical features, and we're just grinding away, trying to make them production ready. And to us, beta means they're ready to be run in production. Most of these features are widely used in production already, so we just continue to grind away on that. So we've talked a lot about the past, where Istio has been, but we also promised not to bore you too much, so I want to make sure to talk about the fun stuff, the future. 
And as you've been walking around, I'm sure you've seen AI is kind of a hot topic, and we've been looking really closely at this space. John and I have been talking about this for months. I know. We're super excited to announce that Istio's future is doing more of the same stuff. We are proud to announce: no AI. So we have a clear vision for where we want to take service mesh. We know what the pain points are for our users, and we'd love to hear more about that; we'll get to that later. But we also like to stay in our lane. That is true. So we're continuing to execute on that vision. We're not getting sidetracked. We're keeping Istio boring. But we will talk a bit about what that actually means. So like I said, the big one is: keep Istio boring. And I don't mean make sure you guys fall asleep during our talks next year at KubeCon. What I do mean is make sure that you're not woken up at 3 a.m. because you have an outage, right? Yes. That's the kind of boring I like, because otherwise people yell at me on the phone. And we want to bring mesh to more users with less cost and compromise. That's one of the key things we're looking for with Ambient. And that cost, I mean, we showed a dollar value on that cost, but a lot of it can also be the cost of complexity, the cost of maintenance. There are all sorts of costs associated. And we're looking to lower all of those so that more users can adopt mesh with less friction. One of the big things with Ambient: obviously, the rise of the platform team has been kind of a common theme over, say, the last four or five years. The platform teams are going to be the owners of the infrastructure. And with sidecars, the ownership model was a little blurry. And now we have platform teams that are stamping out or managing big Kubernetes clusters for their teams. But when they want to upgrade some infrastructure with Istio, we're going, well, we have to go and restart your application. 
And if you're running a Cassandra database, that might make you a little twitchy. So, you know, taking some of that pain away from the operational folks is a big part of just, you know, the ease of bringing in mesh. And then the other part is bringing mesh to more users: there are a lot of VMs out there, and there's a lot of other compute still. How do we find a way to bring that on board and lower the kind of UX pain for those users to work with the Kubernetes environment? Everybody's trying to get stuff onto Kubernetes, but the things that are not on Kubernetes still need to talk to the things that are, and vice versa. And Istio is a networking product; we help people connect and secure and manage and load balance and, you know, authorize and do zero trust. So we have to address that community as well. And by having Istio become simpler, that will help with that problem too. Right. And so this is the kind of long-term future. Kubernetes first, Kubernetes is our best friend, but we will always try to do things to bring other environments into the fold. We covered the less-cost part. Yeah. So some other areas we're looking into, always improving, and we've been on this path for quite a while, but keeping with the same theme: deeper integrations with existing standards and other CNCF projects. So, you know, it's easy to go demo Istio on some kind cluster and try it out. But when you're deploying it into an existing production environment, you already have a lot of infrastructure, right? We want to make sure that, one, we don't just break in the presence of that, but even more, that we deeply integrate with it, so that we can improve that product and that product can improve Istio. Some of these things are things like OpenTelemetry and Prometheus, so we can integrate with these metrics backends and providers. Argo CD and Flux, so we can have, you know, managed rollouts, that sort of thing. 
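As one small sketch of what that kind of integration looks like on the Istio side, the Telemetry API can point the mesh's access logging at a provider. Note the `otel` provider name here is an assumption; it would have to be defined as an extension provider in your mesh config first:

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system   # root namespace, so this applies mesh-wide
spec:
  accessLogging:
  - providers:
    - name: otel            # assumed: an extensionProvider named "otel" in MeshConfig
```

The same API shape covers tracing and metrics providers, which is what makes swapping backends (Prometheus, an OTLP collector, a vendor APM) a config change rather than a re-architecture.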
We want to have more reference architectures for how users can take Istio, stick it onto these systems that they're already using, and just kind of one-click and go, so they don't have to, you know, think about this from the ground up themselves. Yeah. I mean, hopefully you saw it: Mitch gave a talk earlier this week about integrating Istio and Ambient with Argo. That's a great example, right, of helping on the operational side for platform teams. Obviously, OpenTelemetry has gained a lot of momentum, and most of the major APM vendors now support ingestion via OpenTelemetry. So whether you're using Prometheus or some other tool, adopting OpenTelemetry consistently for logs, tracing, and metrics will make integrations with those solutions possible, right, because you have lots of different reasons to put this stuff into lots of different places. And then something like SPIRE, which helps integrate with the underlying identity infrastructure of many, many platforms and provides abstraction around that, which helps bootstrap identity into Istio, which we can then use to do the magic of zero trust. These are all great examples of working with these things for broader ecosystem effect. So in line with keeping things boring, one of the other efforts we've been undertaking is our enhancements and stability push. So, you know, there are a lot of features across Istio that are experimental, alpha, beta, and of course many that are also stable. We want to move more users to using more stable features. And we're doing that in a lot of different ways, right? We're taking features that are in these experimental, alpha, and beta stages and promoting them up by adding testing, improving documentation, whatever we need to do. 
And we're also looking at features that have, you know, been experimental because they were added four years ago for some obscure use case no one cares about anymore, and pruning those back to keep kind of the core set of functionality. So there's been a lot of work on this. Shout out to Whitney, who's been leading this effort, and many, many others. So we're really looking forward to pushing more users towards a clearly stable, supported API surface. Yeah, I mean, this is a big deal. This is core to the boring agenda. I'm sure Drake would agree. Okay. So obviously, you know, the last part of this talk: the future of Istio is in the hands of the people in this room. I want to thank them. You know, that's the community, the contributors, you know, users, people who complain about it. Where's Darren Shepherd? I love his tweets. He keeps us honest. I'm going to track him down someday at this thing and just shake his hand. Everybody brings value. And, you know, hopefully you've seen the Istio community be responsive to that. You know, Ambient in a big way is that: okay, we can't make changes like that often, because we can't churn you that quickly. But we have been listening. We have been trying to systemically address the real feedback that we've been getting over the years and make substantial, meaningful change. It takes time. So we appreciate your patience. And for those who have been contributing and giving us feedback, thank you. And with that, I think, you know, if you don't consider yourself in those groups, then we would love to have you. And here are some steps on how you can get involved. It's quite easy. We have Slack channels. We did, I think, a contrib session on how to get involved in development. You know, we're always looking for more people to get involved in documentation, or even just being a user and giving us feedback, right? In that case, you want to join the complainers group. That's fine as well. We are doing an Istio survey. 
We would love to hear your feedback, positive or negative; it's super valuable for us to understand what we should work on next, what we're doing well, and what we're not. Yes, please, feedback. The more feedback, the better. And I guess with that, we have time for a few questions, or we can set you free and you can go to beer and dinner. Five minutes, if I'm reading the hand signals correctly. Yes. Five minutes' worth of questions. Let's go. Thank you for the talk. Quick question. What features do you sacrifice using ambient service mesh with the shared proxies? Or is the plan, or the current implementation, to support all the current features? So if you look at ambient mode with the waypoint, the feature differential is very marginal. We will support almost all of the existing API surface. The primary difference with waypoints is that the things that were happening as routing on the client side have now moved into a middle box. So that will change the load balancing dynamics a little bit, but probably for the better. It's not a semantic change. It's kind of a runtime behavioral change in load balancing, but probably better in aggregate from what you would see. Istio doesn't have egress authorization, so there's no change there. That's something that may have been client-side, but doesn't really exist in Istio, so that's not there. We keep the same zero trust semantics. So there really is not a lot of change in terms of what the APIs mean, in terms of their semantics. It's very little, right, John? Yeah, I mean, with the exception of the Sidecar API, but that one's kind of a given. The Sidecar API is the thing you use to try not to consume too much memory, so that's probably the API you're happy not to use anymore. Thanks. Hi, Jung Seok from Google. I have a question about the benchmark of the ambient mesh data plane. Can you go back to the kind of comparison? Yeah. Yeah. Oh, yeah. 
Just to clarify, my understanding is that there are two modes for the ambient mesh, and in the L4-only mode, you're running just the Envoy proxy as a TCP proxy in that case? No. So Istio has two things now: ztunnel, which is the Rust-based proxy implementation that we run per node, and it only does mTLS. It has no L7 features at all. So the features you get by enabling that are just mTLS, L4 telemetry, and L4 authorization policy. Envoy is not used at all in that mode. You don't get Envoy until you turn on the waypoint feature, and then you get all of the L7 stuff. And this was a really important consideration in terms of adoption cycles for platform teams. A lot of people, their first goal is to just get mTLS on, and so the ergonomics of that are tightly aligned with that goal. You just get that part. You know, I often talk about the adoption-cycle problem we used to have: you have to eat the whole burrito. You get all the features in one unit of deployment, and you have to absorb it all upfront. Now you get a more gradual infrastructure adoption path that matches your feature adoption path. So yeah, no, no Envoy on that path. Oh, I see. And for the L7 case, that is running Envoy as an HTTP CONNECT proxy, right? It's running as a full HTTP forward and reverse proxy, connected to by the ztunnel, with all the magic we do with that. I see. Yeah, because I've actually been working on implementing CONNECT-UDP in Envoy. So I heard that, I think, there was a plan for actually using CONNECT-UDP in the ambient mesh. You're stealing our roadmap talk. Yeah, that's still the plan. Yes, we don't yet have a concrete timeline for doing UDP with this, but it is clearly something we have designed for. But don't make me commit to a timeline, John. You don't want to commit to a timeline either? No, okay. Yeah, thank you. Thanks. 
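To illustrate what "L4 authorization policy" means there: here's a sketch of a policy that can be enforced purely from the mTLS peer identity, with no HTTP parsing, which is why ztunnel can handle it without a waypoint. The namespace, labels, and service account names are invented for the example:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend
  namespace: demo
spec:
  selector:
    matchLabels:
      app: backend            # which workloads this policy protects
  action: ALLOW
  rules:
  - from:
    - source:
        # mTLS peer identity (SPIFFE-style principal): an L4 attribute,
        # so it's enforceable at the ztunnel layer
        principals: ["cluster.local/ns/demo/sa/frontend"]
```

A rule matching HTTP methods or paths, by contrast, would need a waypoint, since only the waypoint parses L7.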
For some of the newer features like ambient mode or the Gateway API, is there a migration path from running everything with a sidecar to ambient, or is that only really possible with fresh installs? Yes and no. So for now we've been focusing on the greenfield cases, because we have to focus our efforts to get things out, but we've designed it all so that it will be interoperable. And so you can gradually migrate on a workload-by-workload basis, or at least a namespace-by-namespace basis. So that's something we will definitely have before we go GA. Also, you mentioned the Gateway API as well. There's some recent tooling that's been developed; it's called ingress2gateway, but it really should be called anything-to-Gateway, because it's also going to expand to Istio support, converting VirtualServices. So there's some tooling to help there. So in both cases, yes, we're absolutely hoping to help users migrate towards these new things, and not have to tear down your entire universe and rebuild it from the ground up in the new world. We're very cognizant of what it means to upgrade our existing users. That sidecar-to-ambient migration case is just not as production-ready, but it has had a lot of time, effort, and thought put into it. You can test it. So, just so you understand, there's a lot of stuff already in Istio to do it, but we don't consider it at the same level of stability as some of the other moving parts right now. But there was a demo of it given the other day at IstioCon. I wouldn't suggest people put it in production. Don't do that. But it shows you that we are committed to it, and 100% we are going to be doing that. Okay, thank you. Also, I didn't get asked this here, but I get asked it a lot. Just to be clear, we're not removing the old Istio APIs or the sidecar model. These are additional options we're giving users, and we hope that they're better and improved and that you'll want to adopt them. But you're not forced to, right? 
It's a carrot, not a stick. Forcing is not boring, it's scary. With the sidecar model, you could at least discover the identity of the application or workload calling you through an Envoy header. In the beginning, in ambient mesh, when I looked, that identity was lost. Is that still the case? Or are there other ways to see who is calling you through Istio? So the reason is: if you just have ztunnels enabled, right, ztunnels don't touch L7, so they're not going to rewrite headers for you. When the waypoint is enabled, absolutely we can include the identities. And when you look at the telemetry, all the identities are included. So clearly we're seeing all that information. If there's a bug where we're not propagating it in a way that was expected before, right, that can be addressed. Okay, thank you. We're also looking into some ways that maybe we can do the same even for just the ztunnel case, but it's very much a thought experiment at this point. Yes, even for L4 we can start to provide some of that facility; it's not nailed down yet, but expect to see that. One more question. Kind of similar to the Kubernetes efforts around versioning and an LTS model: is Istio considering something similar, where there's an LTS release, and instead of having to go through four upgrades, I can just go to the next LTS? It is a constant source of discussion. We tend to trail Kubernetes by about two years in adoption of these models, because we'd rather inherit the discussion than repeat it ourselves. Kubernetes moved to a three-times-a-year release model before they started having the discussion on LTS. I don't have a crystal ball. We haven't agreed on anything yet, but I think it makes sense to keep tracking what has been working for Kubernetes, and if something didn't work for Kubernetes, don't do that. We don't need to reinvent the wheel. 
We've been pretty heads-down on Ambient, so I think you'll start to see, once Ambient becomes stable, that this will gain a lot more traction, because the other obviously big aspect of being boring is that upgrades should be more boring, or less frequent. So yes, that's always been a heady topic of conversation. Go talk to Mitch about that. See that guy waving his hand over there? He wants to hear everything you have to say. Yeah, another area that we've been working on very, very heavily is making upgrades much more seamless so that they're not so scary, right? That's a big part of Ambient, right? Thank you. Thanks. All right. Thank you, everyone. Thanks, everyone.