So, Idit has asked to sub out for Yuval. It is my intention that we shall switch her back in at some point, otherwise we have a very manly panel and no one wants that. So, let's start with Victor here. For everyone who hasn't already heard from them, I think everyone here but Louis has had a chance to have a chat today, some earlier in the morning than others. So if you don't mind, please just give a quick introduction to yourself, two sentences, and we'll pass the mic down the panel as we go.

Alright, it's working. I'm Victor. I work as a developer at Vakia.com, doing all things around cloud connectivity, API, Kubernetes, cloud native, all this type of stuff. I fit this in one sentence. Beautiful. It depends where you put your commas. Yep. Hey, I'm William Morgan, one of the creators of Linkerd. You probably remember me from the very boring keynote this morning. Hey, I'm Yuval Kohavi. I'm the chief architect at Solo.io. Hi. Hi, I'm Louis Ryan. I guess one of the early founders of Istio, and I work at Google. Hey, I'm Thomas Graf, CTO and co-founder of Isovalent, creator of Cilium and long-time kernel maintainer, developer, whatever.

Alright, so can we bring the mic down to William, please, because I want to ask first of all, that's you, about the, for people with a shorter memory in the ecosystem space here: Linkerd started out as a single proxy that ran per node, and then there was a sort of a skunkworks project, perhaps if I characterize that right, called Conduit, with the idea of moving to a sidecar model, and that eventually became Linkerd version two, which is what we're all used to today. Could you recount, please, William, a little bit of the history behind why you made that first move and then why you eventually made the migration to a sidecar model?

Sure. Yeah. So, you know, Linkerd was the very first service mesh, and the way we started it out was with some Scala technology that we had imported from Twitter. So we were on the JVM, and the JVM was awesome at a lot of things, but it was not very awesome at being small and tiny. So, you know, the recommendation for Linkerd 1.x was for you to run it on a per-host basis. You could actually run it in sidecars, but it was like 150 megs. So, you know, if you had a giant application, it was kind of okay. If you had a tiny application, it kind of sucked. And we had a lot of problems from our early adopters who were adding it to their Kubernetes stack on a per-host model. Operational problems, primarily. You know, things around upgrades and maintenance, and, you know, when something went wrong, trying to figure out where that was. So eventually, for that and other reasons, like Craig mentioned, we ended up rewriting everything, and we now have a data plane that, you know, is designed to be in a sidecar, written in Rust, and we have a control plane in Go that increasingly has some Rust in it. I wrote a longer version of this in an InfoQ article. So if you Google for, like, service mesh lessons learned or something, you'll find it in there. But that's kind of the basic history.

And then, Louis, perhaps, if you wouldn't mind giving a sort of a rundown on the secret internal Google thing that we promised not to talk about, that Istio was in some way sort of based on, the idea of putting the sidecar next to the workload and why that was important. Oh, we talked about it publicly. Sure. Yeah, it's fantastic. Don't name it. Yeah, well, no, we talked about it, called Envelope and ESF and whatever.
But yeah, I mean, we, you know, at Google, we have a very large fleet of stuff and we needed a common set of services that would abstract some of the interfaces in front of those applications, and trying to roll out libraries and updates throughout that fleet would just be massively, massively expensive. And so, you know, the idea of creating, you know, value as a sidecar service that sits in front of your application as an abstraction layer to provide, you know, support for cross-cutting concerns, whether it was quotas and rate limiting, or whether it was content transformation or protocol transformation, or, you know, if you wanted to introduce a new security mechanism into the transport layer, right? So, you know, we had a lot of those use cases and, you know, at Google scale, the management of that became extraordinarily important. You know, so you could kind of consider it as Google's legacy problem, right? And, you know, Google and the software in production tends to update quite regularly and is rebuilt very regularly, but we still consider this to be a kind of legacy workload and, you know, framework management problem. And that's why we introduced sidecars to solve that problem.

Right. So, so as not to mischaracterize what Thomas was talking about before: there's currently no intention to get rid of the layer seven proxy. There is a goal that perhaps that functionality could one day be moved into the kernel. Everything seems to run quicker there. I want to dig into the idea of the Cilium service mesh saying, well, we'll move some of that down. We might still have to run one proxy per node versus running one proxy per sidecar. There are some tenancy concerns that were brought up when this discussion first happened. So it's probably better for you to summarize them than me, Thomas, if you would. So, the idea of what you gain by doing this, and what things do you think you lose if we're moving to a model where all of the layer seven processing is still done by a proxy, Envoy in this case, but it's now done by one per node rather than one per sidecar?

Yes. So I think what's very important is that we don't necessarily need to do everything in the kernel and eBPF to gain some benefits. A good example is we can do in-kernel HTTP visibility in pure eBPF and the gains are massive, but we cannot do layer seven retries or load balancing yet. But that does not mean that we shouldn't do the visibility part; getting OpenTelemetry metrics and traces almost for free is amazing. So to the question of the proxy: we love Envoy. Cilium has been integrating with Envoy forever. Matt, I love you. You've built a great proxy. We've been using Envoy to leverage and enforce layer seven policies for years and it's been in production for years. The multi-tenancy aspect is very interesting, because the discussion we're having right now is very similar to the one when containers were replacing virtual machines. One of the immediate concerns was: what about multi-tenancy? What if my apps share the same operating system? Who controls memory? Who controls CPU? Who controls access to all of these resources? And we were required to build multi-tenancy into the operating system, into Linux. That's what we call containers today. And we got a lot out of it. We actually got better control out of it because now we can do fair queuing.
We can actually share memory and do best effort and so on. So I think that's actually the benefit. So just because the sidecar model was once the right approach doesn't mean that we shouldn't question it and we shouldn't look at potentially introducing a multi-tenant proxy. Also, it's not necessarily one proxy per node. It could be one proxy per namespace, one proxy per service account or some other granularity that makes sense.

The kernel has a way for people to run multi-tenant things. It's called user space. It's called processes. At which point do we get to the idea of: Envoy is written in C++, the kernel is written in C, why don't we just make Envoy a kernel module?

I think what we proposed is actually very close to this. I think running Envoy as part of the kernel, you'd probably face some pushback. So when you say we have this concept of multi-tenancy, user space, yes, that's one of the two answers. The other answer is cgroups and namespaces, and that's what containers are. What we propose is essentially this. Envoy actually has a great design and it's very close to what the kernel does. It's multi-threaded and it's very siloed. The kernel has the capability to run individual threads of an application in separate cgroups. So we can run part of a single Envoy instance in the cgroup of the pod, and the CPU gets automatically accounted for in the cgroup of the pod. That's amazing, because, exactly like I mentioned before, with layer seven, CPU is the bottleneck. Like, HTTP processing is very CPU-intensive. So you definitely don't want all of the CPU of the node to be used up by a single proxy.

And Louis, perhaps you might like to contrast some of the things in Envoy that you think don't necessarily work so well in that multi-threaded model. Yeah, I mean, you know, Thomas, Tim and I had a bit of a back and forth on Twitter about this. It took a long time for all that to happen in the kernel. It's a lot of engineering work. And there are a lot of ancillary concerns that come in, because it's not just this process or this request or this thing that I have to deal with. It's the whole configuration space, the shared memory space. It's a lot of complexity for Envoy to take on that doesn't exist today. So, you know, reasonable people can disagree on that front. I'm a little skeptical about when and if that would be viable. Like Yuval mentioned, you know, there's what holds today and there's what might hold two years from now. So, you know, right now I don't think it's practical. And, you know, I think what we do in user space, with good acceleration from eBPF to get data into user space as efficiently as possible, you know, gets us, or should get us, very close to that anyway in terms of efficiency. I'm not going to say sidecars don't have problems, right? We're all aware of the total cost of ownership issues, like lifecycle maintenance, but you still have to maintain one running on the node and there's granularity issues. So, right now I'm just a little skeptical of, you know, that being the solution. You know, I'm happy to be proven wrong, but right now that's not where I would be, you know, placing my bets, but Thomas is free to spend his money how he likes. So, yeah, that's where we are, but we'll obviously, you know, keep talking and see how this develops, but right now that's where I'm at.
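For reference, the kernel facility Thomas mentions above, running individual threads of a single process under different cgroups, is the cgroup v2 "threaded" mode. Below is a minimal sketch, not anything taken from Cilium or Envoy, assuming the pod's cgroup has already been set up as a threaded domain and that you already know the thread ID of one Envoy worker; the paths and thread ID are placeholders.

```c
/* Minimal sketch of cgroup v2 threaded cgroups: charge one thread of a
 * process (e.g. an Envoy worker) to a specific cgroup. Assumes the parent
 * cgroup is already a threaded domain; paths and the TID are placeholders. */
#include <stdio.h>

static int write_file(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    int rc = (fputs(value, f) >= 0) ? 0 : -1;
    if (fclose(f) != 0) rc = -1;
    return rc;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <cgroup-dir> <thread-id>\n", argv[0]);
        return 1;
    }
    const char *cg  = argv[1]; /* e.g. /sys/fs/cgroup/<pod>/envoy-worker-0 (hypothetical) */
    const char *tid = argv[2]; /* thread id of one Envoy worker thread */
    char path[4096];

    /* Mark the child cgroup as "threaded" so it can hold individual threads. */
    snprintf(path, sizeof(path), "%s/cgroup.type", cg);
    if (write_file(path, "threaded") < 0) return 1;

    /* Move just this one thread; its CPU time is now accounted to this cgroup. */
    snprintf(path, sizeof(path), "%s/cgroup.threads", cg);
    if (write_file(path, tid) < 0) return 1;

    printf("thread %s now accounted under %s\n", tid, cg);
    return 0;
}
```

The same two writes can be done by hand with echo; the point is only that per-thread CPU accounting is a facility the kernel already provides.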
Now, not everyone on the stage is spending their dollars on Envoy in general. I know that there is work happening to support Rust in the Linux kernel. I know that the Linkerd proxy is written in Rust. Is there any synergy between these? Is there any chance that you'd consider in-kernel options for the Linkerd proxy?

Yeah, that's a good question. You know, to a certain extent, right, Linkerd doesn't use Envoy, so I don't really have a direct horse in this race, although it's interesting to listen in. I think for me, you know, would we consider, you know, running stuff in the kernel? I mean, I guess, you know, I guess we consider everything that the... Would you consider expanding the Linkerd proxy to become its own operating system? Well, yeah, I think we should import the kernel into the proxy and, you know, it's a lot simpler though.

No, I think for me, the question is always, you know, what's the actual user benefit that we're getting? And I happen to be someone who loves the sidecar model. I think it's a really elegant model. It has some implementation issues in Kubernetes, especially when it comes to, like, ordering and things like that. There's stuff that has to be fixed. There's annoying aspects of it, but I think as a model, I actually love it. So if someone were to come to me and say, I want a sidecar-free service mesh, I'm like, why? Like, you know, you're prescribing an implementation. What problem are you actually trying to solve? Is it that you want to reduce complexity? Okay, well, then why don't you say that? I want a simpler service mesh. Oh, is it taking too much memory? Okay, why don't you say that? I want a smaller service mesh, right? So I think from my perspective, we've tried very hard to make the Linkerd proxy an implementation detail. It's not something you have to think about. We don't even give it a good name. It's got, like, a terrible name and it's not meant to be consumed by anyone outside of Linkerd. And I try very hard to have users have that mindset too. You know, it's not something that you're directly manipulating or tuning except in extreme cases. So I actually don't really care. Like, I could follow that same theme. Yes, we could put stuff in the kernel or we could put it in outer space. And, you know, what I care about the most is what's the operational, you know, kind of impact of all that. And when the user is maintaining their service mesh and they're operating it and they have to upgrade it, or, like, there's a problem and they have to trace it down, you know, trace it to its root cause, like, what does that actually entail? And so far, the sidecar model has been beautiful for that. I think, you know, in my opinion, it's been a really nice way of doing that. And it ties that functionality to the kind of, your mental model of your application anyway, right? Like, you want to change something in one service? Well, you change it on that service, you know. And the further we get away from that, the harder it is for me personally to think that you would maintain that same kind of operational simplicity. But I'm like a babe in the woods when it comes to these discussions. So, you know, I'm happy to learn.

If anyone has any questions that they'd like to pose to the panel, please do stand in front of the microphone over there. While you make your way there, I'd like to bring Vik into the discussion and say that Kuma, Kong's mesh play, is based on Envoy, but not based on Istio. You have the benefit of having seen some of this play out over time.
You have the benefit perhaps of the project having come up in a world where eBPF was perhaps a nascent possibility. What design decisions did you make, looking at Linkerd, looking at Istio, the meshes that were out there at the time, that sort of relate to this space and deciding how to arrange your proxies, for example?

So, Kuma is a relatively new project compared to Istio and even Linkerd. And definitely we learned a lot of things designing Kuma, and the idea around Kuma has always been developer experience. I share the same sentiment as William here. We really want people to have a simpler mesh with the benefits of the service mesh capabilities that they know, because people already have some experience with Envoy. People have experience understanding it and getting a lot of things out of it. There are plenty of protocols supported, including some of the L4 protocols. People are running more and more workloads like MongoDB and Kafka in the mesh. And one of the things that no one has actually mentioned yet, and it's probably a very unpopular thing, is running your production workloads on Windows. And... Speak for yourself? Exactly. So, running the same experience for developers in the mesh regardless of the operating system that you're running in production, and the flexibility that sidecar capabilities give us, is something that we really get benefits from. Plus, we're not trying to abuse CRDs that much, and we want you to use the CRDs only for the things that are really important to configure. So that's one thing. Like, Kuma tries to be developer-friendly and puts a lot of thought into, you know, how we can get a simpler mesh.

Okay, well, we're gonna take an audience question now. I think it's a question for Thomas. In the case of layer seven traffic, besides memory footprint, what is the performance advantage of option two versus option one that you were showing before?

Yeah, I will publish the slides so you can see the specific differences. In terms of visibility, it makes a massive difference. I don't have the exact numbers in my head right now, but it was in the single-digit percentage overhead for in-kernel HTTP visibility, and the latency was 2x, 3x, 4x bigger for a proxy. For the visibility case, we also measured the Cilium Envoy filter against the Istio Envoy filter. Maybe that's an unfair comparison, because the Cilium Envoy filter is massively simpler compared to the Istio Envoy filter. From that perspective, there is, I think, a feature imbalance there. I think the gain is definitely, as soon as we can go in-kernel, we're talking almost no overhead, which is the appeal, right? And I think the other appeal is that we can provide this type of visibility for protocols, enterprise protocols, that a proxy does not support. A proxy is typically very limited to TCP, and enterprises obviously speak a variety of other network protocols as well.

And sorry, follow-up question. In the case of, for example, you were saying that it's harder for layer seven, like retries and routing and so on. Do you have any estimation?
So I think for things like retries, circuit breaking, whenever it is about connection splicing or replaying traffic, I think the combination of eBPF and Envoy will be the answer, where, as I think Louis said correctly, it's about leveraging eBPF to inject Envoy better and quicker and faster and not require this very expensive network-based injection of the sidecar proxy. And what Louis said was actually very, very accurate. I think a couple of years ago, the complexity of solving what's needed to make this happen would have been very hard. What changed is eBPF, because we can now integrate Envoy with the kernel without kernel changes. And that's a massive, massive difference which makes this feasible and approachable. Thank you very much.

All right, we have another question and then we'll talk about something fun. I won't tell you what it is yet. Okay, I think I should have asked this at eBPF Day yesterday, but how would you compare Calico eBPF with Cilium? Maybe not strictly a service mesh question, but with the combination of using Envoy with Cilium versus... Oh, I see, okay. I'll allow you 15 seconds to answer. 15 seconds. So Cilium has native Envoy integration, Calico does not, and there are a couple of features missing in the Calico eBPF data path. That's the short answer. Okay, thank you.

All right, so in the web browser, we have the JavaScript runtime, and through a sequence of events, we decided that we could basically re-implement a Turing-complete machine, dot dot dot, WebAssembly. So we now have a mechanism for running Doom, Quake, whatever you want in the web browser, or probably doing some actual real work as well. The Google team working on Envoy especially led a lot of the work to add support for WebAssembly into the Envoy proxy, allowing arbitrary code to be run, taking the safety model of the WebAssembly sandbox and putting that inside the context of the proxy. So put that aside for a second. We have the kernel, and if you're going to ask a question, I'm gonna demand that you come and ask it from the stage, please. Put that aside for a second and say we now have these points in the kernel where, as I understand, and Thomas and I spoke about this on a podcast back in January, there are certain extension points. You can say, send me a message when this thing happens, and there is a certain set of things that you can say. Why don't we get to a point where we can run a WebAssembly-like thing, if not actually WebAssembly, in the kernel, and we can implement a Turing-complete thing, to what Yuval was saying, where we're able to arbitrarily hook anything, and we can rewrite Envoy in JavaScript and run it in the kernel and get all these benefits and not have to worry about the arbitrary split between we can do certain things on packets but we can't do them on streams?

I think that discussion is actually happening exactly with Rust and not with eBPF, but there are people out there that want exactly this. Like, eBPF has been specifically designed to not be able to crash your kernel, and a big part of this is you have to run to completion. You can loop, but the loop needs to be bounded. It means that whatever program you can run as an eBPF program needs to be safe, needs to be guaranteed to complete, which is why eBPF on its own is not enough, and why the combination of Envoy and eBPF makes sense. Essentially, when we get to the level of complexity where it's not possible in eBPF, we go to Envoy. For the full Turing-complete version of this discussion, the upstream consensus is currently leaning towards just enabling Rust in the Linux kernel, but that's probably a couple of years out.
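As a rough illustration of the run-to-completion constraint Thomas describes, here is a minimal eBPF sketch with an explicitly bounded loop; the verifier only accepts loops it can prove terminate, so the upper bound is mandatory. The program type and byte-counting logic are placeholders for illustration, not anything from Cilium.

```c
// Minimal sketch of a verifier-friendly eBPF program: the loop has a hard
// upper bound, so the kernel can prove the program runs to completion.
// Build (roughly): clang -O2 -g -target bpf -c bounded.bpf.c -o bounded.bpf.o
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define MAX_SCAN 64 /* hard bound: an unbounded loop would be rejected */

SEC("socket")
int count_nonzero_bytes(struct __sk_buff *skb)
{
    __u32 count = 0;

    for (int i = 0; i < MAX_SCAN; i++) {
        __u8 byte = 0;
        /* Helper-mediated read of packet data; fails past the end. */
        if (bpf_skb_load_bytes(skb, i, &byte, sizeof(byte)) < 0)
            break;
        if (byte != 0)
            count++;
    }

    bpf_printk("non-zero bytes in first %d: %u", MAX_SCAN, count);
    return skb->len; /* socket filter: keep the whole packet */
}

char LICENSE[] SEC("license") = "GPL";
```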
So I might just pass it down to Yuval, if we can, but my understanding of the Rust support in the Linux kernel is basically to allow you to write parts of the kernel in Rust, not necessarily to arbitrarily inject Rust into the kernel at runtime. If that's not correct, please tell me. Okay, so that's fine, but again, that comes down to installing a kernel module, recompiling your own kernel perhaps. So that's not necessarily as simple as uploading a program like we might expect today. Yuval, perhaps, is there a way that you see being able to safely run arbitrary things, that maybe don't run to completion, being a possibility in the 2025 kind of vision you have?

There is a way, definitely. You just want to guarantee the... Closer, please. Sorry, oh, closer, yeah. So you want to guarantee that a certain program doesn't bring the whole kernel to a halt, right? So you need to find a way, for example with WebAssembly, to help it terminate while still running it at native speeds, right? Which means that you'll have to instrument the WebAssembly so the program itself will stop. WebAssembly is structured in such a way that it's actually not horribly hard to do. WebAssembly does have infinite loops, but loops are basically, I think, the only way you can jump backwards, right? So if we instrument those loops and add checks, and for people familiar with blockchains and Web3, we can add the gas concept into WebAssembly. We could potentially provide a budget for a WebAssembly program to run, and once it exceeds this budget, stop it, return an error, have some semantics around what happens in case of it running out of gas, right? And to do that is not that hard conceptually. You have to instrument the WebAssembly program and inject opcodes in the cases where it can recurse and it can loop, but those cases are pretty limited as far as WebAssembly goes. I've seen some papers around it on the internet, I believe some of them from the HCO community; you're probably more familiar with those than me. So it's definitely possible. I don't know if anybody's working on this or not, but that would be the concept.
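As a toy sketch of the gas idea Yuval outlines, and not real WebAssembly tooling, the instrumentation would amount to decrementing a budget at every point where the program can jump backwards and trapping when the budget runs out; something like this in spirit:

```c
// Toy illustration of gas metering: a counter is charged at every loop
// back-edge, so an over-long (or infinite) loop returns an error instead of
// running forever. Names and costs here are made up for illustration.
#include <stdio.h>

#define OUT_OF_GAS (-1)

static long gas; /* remaining execution budget */

/* What an injected check at each backward branch would amount to. */
static int charge(long cost)
{
    gas -= cost;
    return gas >= 0 ? 0 : OUT_OF_GAS;
}

/* Stand-in for an instrumented guest function with a data-dependent loop. */
static int guest_sum(long n, long *result)
{
    long sum = 0;
    for (long i = 0; i < n; i++) {
        if (charge(1) == OUT_OF_GAS) /* injected before the loop back-edge */
            return OUT_OF_GAS;
        sum += i;
    }
    *result = sum;
    return 0;
}

int main(void)
{
    long out = 0;

    gas = 1000; /* generous budget: the small loop completes */
    printf("small loop: %s\n", guest_sum(100, &out) == 0 ? "ok" : "out of gas");

    gas = 1000; /* same budget, much longer loop: trapped instead of hanging */
    printf("long loop:  %s\n", guest_sum(1000000, &out) == 0 ? "ok" : "out of gas");
    return 0;
}
```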
Yeah, so I'm interested, obviously, in the fact that a lot of people are moving stuff out of the kernel. A lot of things, network processing and so on, packet processing, are being offloaded to specific hardware where there are user-space programs that are able to access them. A lot of the conversation that we're having here is effectively, we need to move things back into the kernel in order to get things to be sped up. Is this the right direction? Is there a way, like we talked before about Kubernetes needing to support running sidecars, is there a way that we as a community can petition the Linux developers, and Thomas, your friends in that community, to solve this problem in such a way where we don't have to think about it so much as moving things into the kernel, but just making the things we're running in our sidecar model run quicker?

I think what you're asking is, can we get like a unified approach? Yeah, I'd like a pony, please. Yeah. Like everything. I know there's different approaches, right? There's DPDK, the Data Plane Development Kit, that helps you kind of bring things directly from the network card to user mode. There are actually all sorts of models. There's VPP and there's eBPF. You know, it's kind of all in flux right now. Eventually, I believe, there will be consolidation. I'm not too much of an expert with the other stuff, so if anybody else wants to comment on that, please. Thomas, if you would, sir.

I think this is an interesting topic. We see a massive shift back from user-space processing into the kernel, and the reason is containers. Virtual machines were essentially machines, and it didn't matter whether it was the kernel or the user space doing whatever processing was required. With containers, applications directly interface with the kernel, and packet data needs to go through the kernel. There's no additional operating system running, as in the virtual machine model. And this is what makes eBPF so interesting, because it's directly integrated into the kernel. That's also the difference between eBPF and other languages like WebAssembly. eBPF is specifically for the Linux, and now Windows, kernel, and its main value is that it can interact with the kernel, with the operating system. So it can take shortcuts, it can do processing. It does not have to, for example, we talk about VPP or other DPDK-based applications: yes, they do processing in user space, but then in order to deliver that data into the application, you either have to change the applications, which most users are not willing to do, or you have to go back through the kernel. And because of the rise of containers, we're seeing more and more processing essentially go back into the kernel with eBPF.

Yeah, I mean, I think your pony is gonna be a hybrid, right? It's gonna be... Like a zedonk or something? Yes, a mule, I don't know. Any Foundation fans in the audience? Right, and it's the degree to which some functionality will live in eBPF. Like, I think eBPF has done an excellent job in accelerating some of these integration points and providing lower-level hooks for certain types of things. Right, like maybe you'd see something like DPDK, right, used for things like middleboxes, where you never have to go back into the kernel. But certainly if you're going back into the application space, you're gonna go back through the operating system, right, because that's what all the applications are targeting. When you look at the functionality that you provide, like from, you know, L3, L4, L7, there's gonna be a sweet spot in terms of management and updateability and maintainability and platform coverage, right? That's what's gonna determine it. And, you know, eBPF is certainly moving the needle somewhat towards the kernel, but, you know, there's disagreement about how far that can go before, you know, you're gonna hit limits and, you know, people are gonna have issues with maintenance cycles or other types of tenancy issues.

All right, you had a question? If you don't, you can either switch out for Yuval or you can switch your mic. Sorry, it will. So, my quick question is about debuggability. Like, one of the things that is anyway not easy with Envoy is basically how to figure out where the problem is. Now, think about it: if we're taking a lot of that functionality into the kernel, how are we going to do that? How easy is it to debug a problem in the kernel?

Yeah, so while the mic comes back up, I'll let William keep gloating about the relative lack of debuggability of one's own work. Yeah, I don't want to take it off. Sorry, that was a short question. Yeah, no. Yeah, I mean, I feel like I'm just gonna sound like a broken record.
Like, the stuff that I think is really important... it's exciting to have these conversations, but what's really important is, like, what's the effect on the end user? And, you know, what's the operational burden that we're asking them to take on now?

Perhaps I can sort of twist the question a little bit, if you would. In a group like this, we sort of represent a percentage of people who are end users and care about it, but then we also have a percentage of people who are building the various technologies out. So setting the user part aside for a second: there are benefits that Thomas talked about, especially just shortening the data path between two different processes using eBPF. Is that something for you, and then for Vik in terms of Kuma, is that something that is a win for you to build into the application such that the users don't need to see it? And if so, do you see Linkerd 2.0, whatever the next one is, supporting this out of the box, or is there something that's stopping it being a quick win in that regard?

Yeah, so without really knowing the specifics, I would say operability has to come first, and subject to those constraints, then yes, more performance is better. I know there's some work you've done, like, has your team said, hey, we can get a 3% speed-up if we do this? Is it as simple as just enabling it for everyone? Yeah, so that, I don't know. That I don't know.

Okay, Vik, do you have a... I think, as someone mentioned, the hybrid mode is the win. So that's something that we're looking into implementing in Kuma, for example, replacing the way we're currently handling the network traffic right now, through iptables, and we're looking to use this eBPF functionality to potentially replace that. And again, to make it also invisible for people; if they want to use the alternative models, it would just be configuration which allows them to, for compatibility reasons. So I don't believe that everything would just be replaced with one thing. And operability and user experience is the first thing, rather than performance. I might sound like the clueless person, but can't we just have bigger machines, or spend a little bit more money on the cloud, that type of thing? I'm a cloud vendor, so yes, absolutely, please. Yeah, in the past, there were numbers that every developer needs to know. Like, there's a chart of how the performance goes: throughput goes down and latency grows as you go from the processor to the network to distributed things. Now it's just, like, one credit card swipe and you have a bigger machine to calculate your stuff. After that, you can kill this machine and just pay for the task. It's a practical choice. Just, yeah, bill all your VMs to Vik, he'll sort it out for you.

Just quickly to follow up on his question specifically, maybe Thomas, maybe Louis: if everything else can be held the same, like in the case of eBPF, the more we move in, we want to be able to tell when things go wrong. Is there a concern that a user who might be debugging an application isn't necessarily able to get access, because they're no longer dealing with a process that's inside their own container, their own namespace? Perhaps some of this might run in a cgroup that they control, but do I have the same visibility into the kernel with the model we're talking about here as I had in the past, and am I able to debug my own application? Absolutely, and I would actually turn it around.
It's actually an opportunity to provide even better visibility. Monitoring, performance troubleshooting, observability has been a main driver of eBPF. eBPF has been primarily used for Linux performance profiling and monitoring. We have a lot of experience building a networking layer with eBPF, and we have built massive observability and troubleshooting capabilities. And I would actually turn it around and say it's an opportunity to provide better visibility at the lower levels, which projects like Kuma can then leverage to provide a great end-user experience. I'm a kernel developer. I'm not especially good with UX, but I'm really good at providing the low-level visibility and introspection that is required for troubleshooting. Because when running at scale, such as Cilium clusters are running, observability, monitoring and metrics are absolutely essential. And that goes all the way into the service mesh, of course.

All right, Niran has a question. One, one, okay, cool. Sorry, did I miss something? Sorry. I'll just have you, Yuval. Maybe just a follow-up question: I know Cilium is a data plane implemented in eBPF. If you have a problem there... today, when I do iptables, I can add iptables logs everywhere until I figure out which rule is my problem. How would you go about that type of debugging with eBPF? Kprintf? Take a kernel system? No. No, there's tooling, like CLI tooling, observability dashboards, everything. Similar, and usually it's actually at a higher level, taking the Kubernetes metadata into account. Very similar. It's usually not a dump of 100,000 iptables rules, which is sometimes scary and nobody likes. It's usually, I think, more abstracted. So no user has to read eBPF bytecode. You don't even need to understand eBPF programs. It's an implementation detail that gives opportunity. Do you need to be able to spell eBPF?

Well, I think we're probably talking about the wrong thing. Like, we're trying to serve the application developer and maintainer community, right? And that's mostly L7 stuff, and they want higher-level observability tools to go and find their stuff. If you're looking in logs, we're probably not helping you and we're not doing a good job and you should probably fire us. You should be looking in the tooling, and the integrations in the tooling for the protocol and the application type and all those types of things. If we're talking about syslog dumps, then, yeah, we should just stop. So maybe we should take the next question.

All right, so final question, please. I'm a bit stuffed. There we go. Yeah, we can hear you now. The microphone. So I'm really interested in your opinion about the adoption of service mesh, because we're here talking about eBPF and kernel, not kernel. In my opinion, performance is not the inhibitor for adoption of service mesh. So what is the inhibitor to wider adoption of service mesh, in your opinion? Like, what do we need to do to make service mesh more widely adopted?

That is a great question. I'd love it if we could end by going down the line with an answer to it from everybody. Yeah, so I think actually, we've done a variety of surveys when we launched Cilium service mesh and we asked, what do you want us to do? What is your motivation? The main ask was: please, no sidecars. Why? Complexity. It's not performance, right? It's great to show benchmarks, and yes, performance is always better. I think what William said is 100% correct: same complexity, same values overall, and performance is better.
But our main motivation to get rid of sidecars is actually not necessarily performance, but getting to a simpler model. I think that service mesh should be as boring as TCP is today. We don't think about TCP today. I think the sidecar, the data path of the service mesh, should be as transparent, as simple, as TCP is today.

I would agree with Thomas, although he and I have slightly different opinions about maybe how to go about it. But I think we generally would agree with that point. Market confusion is probably not helping, just being honest about it. And getting to a standard API that most people here could agree is a good API. I think there's an opportunity in this space right now. I think the Kubernetes Gateway APIs and that specification are a good set of APIs for traffic management, and they are applicable to the service mesh use case. So it's my intention to kind of foster that. I think that's a good thing for the community, and I think the establishment of that under the umbrella of Kubernetes would actually be helpful here.

We're running a little into the wrap-up session, so we're just going to get another 20 seconds each from you all on that, if we can, and then we'll wrap up.

These are all good points, I definitely agree. I would add that, in addition to that, what we're seeing with our customers is they need to see the value, right? They need to get something out of service mesh, and especially big customers with complex environments. We want to simplify that and we want to help them get trust, right? Part of it is enabling service mesh in their environments, in their setups, VMs, multi-cluster support and so on, so that they can show the value of a service mesh to the management in the org.

Yeah, so what's blocking service mesh adoption? CNCF released a microsurvey this very morning, so I would encourage you to check it out. It's their service mesh microsurvey that asks people exactly that question, and I'm not gonna tell you what the answer is. You're gonna have to go look at it. Just start looking at the graphs. I'll just say, again, I agree that complexity is a big issue; whether that's a real issue or a perceived issue, I think, is a little blurrier these days. I agree 0% that sidecars are the fundamental source of the complexity. I think the sidecar model, again, is a beautiful, elegant model. There's tooling that can help. There's some busted parts of it that kind of suck, but those are not fatal flaws. I think the model is a really nice model, so make better sidecars. Go back home and tell your people that the extra cost, whether it be in CPU usage or cycles or whatever, is offset by the benefit you get in terms of the observability and the business value savings and things that you gain out of this. And the beauty, perhaps.

It's a very hard spot to be in, because so many good opinions were shared. So I think what we can do better is just to alleviate the confusion. Developers hate magic. They love to use magic. They love to use technology that looks like magic, but they hate when they need to deal with it, especially when they need to debug something at 4 a.m. in the morning. So it's my personal responsibility to provide more knowledge around these things: what they should put in application code and what they should use from infrastructure. So that's probably my final thought. Yeah, just alleviate confusion, make it less magic, and allow people to use this technology.
All right, well, our panelists are gonna be outside momentarily. We're just going to give them a warm round of applause now to thank them very much for joining us. And as they gently filter off the stage in that direction, Lin will come up and give some closing thoughts. Exit, stage left.

Thank you so much to our panelists, and Craig, good job on moderating this. So I'd like to take some time to thank everyone from the program committee. Please stand up if you're in the program committee. I believe Victor, you are, and Edith and Craig. So thank you so much for making sure we have wonderful programs. And thank you to all of our sponsors, and thank you, most of all, for attending ServiceMeshCon; without you... You know, this has been, I guess, the best conference I've ever had since COVID. So really, really exciting to be on the stage and also talking to everyone.

You know, honestly, to me, as somebody sitting in the conference, the biggest takeaway I have is, I think we are pretty confused as a market right now, listening to the debate of eBPF, service mesh, sidecars and all the projects we are seeing in the ecosystem, with, you know, Kuma, Istio, Linkerd and Consul Connect, and now Cilium service mesh. You know, what I really hope is, next time when we get together at ServiceMeshCon, I really hope, you know, we can bring some clarity to our users, you know, so that we can be less confused about the market. We can be less confused about the architecture of service mesh, where there would be a little bit more agreement among some of our industry leaders.

With that, I want to thank you again. I believe you all have the ticket for the drinks. I'm actually not sure where the drinks will be, but I think it's somewhere outside. So enjoy KubeCon tomorrow and enjoy the drinks this evening. And if you haven't taken in any of the sponsors' events, I believe there are some events. I think we have a cocktail tomorrow. So feel free to join us and also, you know, enjoy interacting with each other at the conference. Thank you so much.