Good evening, folks. We're going to go ahead and get started. I wanted to welcome you to Service Mesh Battle Scars. We're going to be talking about technology, timing, and trade-offs tonight. I want to thank you all for being here this late. Y'all could be off drinking somewhere, so thank you for being here. As a reward, we want to try to make this exciting, and hopefully you can learn something as well. I'm super excited to be here with my esteemed panelists, and I'm going to go ahead and let them introduce themselves, starting with Thomas. First of all, this was the best intro ever to a panel. This is crazy good. Hello, everybody. I'm Thomas, CTO and co-founder of Isovalent, creators of Cilium, the CNI, recently graduated. I also wrote a sidecarless blog post a while ago that triggered a lot of the conversations that we'll talk about today. Yeah, I'm John Howard. I'm a software engineer at Google, and I've been working on Istio for a long time, about five years now. Hi, my name is Lin Sun. I'm the head of open source at a small company, Solo.io. I've been working on Istio a little bit longer than John, as one of the founding members, but I'm not as productive as he is. I'm Flynn. I actually work in marketing. Am I supposed to be on this stage? I'm a technical evangelist for Linkerd at Buoyant. Before that, I was the original author of Emissary Ingress. I'm one of the GAMMA co-leads over with the Gateway API people, and I'm pretty sure I've been working in engineering possibly longer than some of y'all have been alive, actually. So, yeah. And last, I am Keith Mattix, an engineering lead at Microsoft. I am also way too excited to be doing this panel. This is all fake. I'm tired. But we're going to have a good time anyway. Yeah, as a disclaimer, I did want to say, we have multiple service meshes represented here. I, disclaimer, am a maintainer of the Istio service mesh, but I promise to remain impartial throughout the contents of this panel.
Microsoft uses all three of these technologies in production, so take that for what it's worth. I plan to. You say that like it wouldn't happen anyway. So, why are we here? Why are we having this talk about technology, timing, and trade-offs? Well, as you can see, these are three CNCF-graduated projects, three CNCF-graduated technologies. First of all, let's get a round of applause for these three incredible technologies, powering innovation and productivity in organizations across the world. Love what the CNCF does. And so, we're going to have a conversation and discussion this evening about the various differences between these service mesh projects. My role here tonight is merely to be your chief instigating officer: to ask these questions, maybe lob a few grenades, figuratively, here and there, and help these folks have a lively conversation. Panelists should feel free to respond to the answers of other panelists. With each question, I'm going to kind of target one person in particular, and other folks should feel free to respond. If things are going too slowly or too boring, I may call on people individually. We'll see. And lastly, be kind, ish. No, seriously, be nice, but not too nice. And yeah, that's how we're going to go. Are you all ready to hear about service mesh battle scars tonight? All right. You're having way too much fun with this, dude. It's like five thirty, we're all done. I need to bring the energy anyway. We got this. All right. I need you to hype up my future talks as well. Yeah, we can. You can make a living doing this. Well, talk to me after, all right? We're going to go ahead and get started with our first question. I'm going to direct this to Flynn, representing Linkerd. In 30 seconds or less, describe your service mesh's philosophy. What do you optimize for? These could be things like speed, a diverse feature set, usability, or other. Flynn, take it away.
Be a service mesh that gives you everything you actually need for security, reliability, and observability while focusing on operational simplicity, so that it will actually work if you're a four-person startup or if you're a 50,000-person organization. How many seconds was that? Interesting. Thomas, what do you have to say about that? Describe Cilium's service mesh. Yeah, with Cilium, we're really trying to make service mesh invisible. So most of you may have heard about Cilium as a CNI, just that networking layer, boring, never visible. And we really see service mesh functionality as just part of the connectivity layer. So customers and users have been asking us, give us service mesh as well. And that's what we've been building into Cilium. We then introduced sidecarless service mesh, without running a sidecar proxy. And I think what's very unique about Cilium is that we're in a position where we can do as much as possible directly in eBPF in the kernel, with no proxy whatsoever. That's kind of the superpower we have with Cilium service mesh, and that gives scale and performance and lower latency. We're going to get to some eBPF later in this panel. But to wrap up this question, John or Lin, one of you two, you want to talk about Istio's philosophy? Yeah, I would say, I mean, if we talk about service mesh users, there are all these different things people want. Observability, security, routing. And it's funny, if you talk to any individual person, there's usually just one part they really care about. And so I feel like Istio's philosophy is to serve the most users and give them the most value, right? We can't actually just say we just want to do telemetry, because that only targets a very small set of people. So Istio has a very diverse range of contributors, right? Microsoft, Google, a bunch of startups, tons and tons of other companies.
And each one's coming with their own priorities that we balance as a project to give the most value to the most people. So that's kind of where I view Istio's philosophy. Yeah, I'm not going to spend 30 seconds; it's too famous a project for that. So I'm going to spend 10 seconds. I think Istio's philosophy is: simple stuff should be simple, and complex stuff should be possible. Love that. Love that. So if you're new to service mesh, if it's maybe your first time hearing about some of these... Did you really just say Istio and simple in the same sentence? I'm supposed to be channeling William up here. What do you mean? Come on. I mean, being simple doesn't mean not having functionality, right? It means exposing that in reasonable ways. So it is simple to use Istio. But there is a lot of functionality that it offers. If you go dive into the docs and use every feature, absolutely, it's complicated. That's crazy. But it's quite as simple as Linkerd these days, I would say, to do the basic functionality. Yeah, I don't believe you. The claws are already coming out. Love to see it. Well, I don't know. I think we're going to have to try that one later. That'd be fun. Well, now that you have been acquainted with some of these service meshes, let's move on to our next question. Secure transport is arguably the biggest selling point of a service mesh, but the how, be it mTLS, WireGuard, or IPsec, and the where, sidecars, node proxies, CNI, you know, these are all different ways to provide that secure transport feature to users. What approach has your mesh taken to provide this critical functionality to your users? And what have you learned from that approach? And we're going to start with John. Oh, me first, great. Yeah, I mean, in Istio, we use mutual TLS as the primary secure transport, right? Why do we do that? Security should be boring, right? You don't want to go to a conference and get excited about some crazy new security thing, right?
mTLS has been used by everyone in the world practically for a minute. Well, sorry, not mTLS, but TLS, you know, my mom uses TLS every day when she goes and browses applications, right? It's the standard secure transport. It's had decades of experience on how to do it and how not to do it. And so it's kind of the obvious choice for users. The best part about that is your mom doesn't even know she's using TLS? And it still just works. I would love if that was the same with service mesh as well, so. Yeah, I would agree with you on that. And you guys use standard, normal, plain old, boring mTLS? Yes, yeah, just normal mutual TLS. I think quite similar to your solution, but I'll let you say that. Oh, so yeah, Flynn, go ahead. Linkerd is technically the first service mesh, I believe, depending on public commit history, and has been around the block for a while. What is your approach to secure transport? What have you learned throughout your time in the ecosystem? Yeah, it's mTLS. There are really, really, really good reasons why you should not go and implement your own crypto. I mean, really good reasons for that. So we just use mTLS, and it works, and it's great, and it's boring, which is wonderful. There are things about identity that we could start talking about, where I don't know if that's part of your secure transport question, or if you have a separate identity thing. One of the other things that Linkerd does is deliberately separate identity from anything having to do with the network. So that also plays into things and makes it very easy for us to adapt to different topologies and things like that. But ultimately, it's mTLS and boringness. mTLS and boringness. So, Thomas, I know that Cilium's got a little bit of a different approach to secure transport. Talk to us about some of that. We can take boring to a whole other level, right? We're using IPsec as the most common form of just secure transport, which can secure all sorts of network traffic.
We also support WireGuard as a newer form of encryption that can secure and encrypt all of the traffic. The downside of that is it is not FIPS compliant, which is why a lot of you will be looking at IPsec for encryption. And then most recently, for the mutual authentication part, we're using an mTLS-based handshake. Interesting. So can you dig a little bit deeper into that for us? Talk to us about that mTLS handshake. Absolutely. So I think the main thing is, because there are lots of service meshes out there, we didn't feel like we'd have to do again what already exists out there. The feedback that we have been getting is that mTLS is actually great as a handshake protocol. It's incredibly fast, because usage across the internet has optimized the speed of the handshake tremendously. Feedback was also that, hey, this is actually pretty limiting, because it's typically TCP only. And in a legacy environment, you may be running all sorts of network protocols. So we have decoupled the authentication and the encryption part. So we're not building our own crypto. Like, definitely not. We're using IPsec and WireGuard as the encryption layer, and we're using mTLS to perform the mutual authentication handshake. So we are authenticating using SPIFFE IDs via the mTLS handshake, and we secure all the transport using IPsec or WireGuard. Yeah, can I add something here? So recently I've actually been studying mutual TLS, the TLS protocol, myself. I was gonna say, hey, Lin, didn't you go write a blog post about this already? Right, because honestly, as users, like you and I, we actually don't understand that TLS actually has, like, two protocols, right? The handshake, what Thomas was just referring to, is one part of the protocol. The other important part of the TLS protocol, which most of you may not know, is the record protocol, right?
So in order to have the full end-to-end security, to claim to be TLS compliant, not only do you need to have the handshake, but it's also important to reuse that encryption key that you established from your handshake on the actual connection, when you secure your application communication through the record protocol. So I just wanna make sure, that's really important to be TLS compliant. Yeah, I'm not actually disagreeing at all. We are working on using the secret that's negotiated in the TLS handshake for the encryption. On the encryption part, and you will know this really well, with TLS offload it is absolutely very common that the actual encryption is not happening where the actual handshake is happening. I don't really see a difference between whether we're offloading the TLS encryption part to hardware or we're offloading that to symmetric encryption in the kernel. To me that's not an actual difference from a security standpoint. That said, if you really want to use the TLS connection for the transport, you can totally do that with Cilium as well. We've been doing TLS origination and termination for many years. It's simply not where we have been seeing the value we could add. And I'm actually not saying that one model is strictly better than the other one. We're offering choice. I think if you want to use the approach we are developing, we're welcoming everybody to do so, but we're also not out here saying that's the only valid approach out there. I think every solution, every approach has trade-offs, right? What were you gonna say? Yeah, I was just gonna say I'm fortunate that I'm up here and I don't have to say that I think that TLS is secure, because millions and millions of people have been studying the security of TLS in the world. So I don't have to worry about whether it's secure, because we have an entire ecosystem of the entire world around proving and making it secure. So it could be that it's secure, I believe you, but how do we know, right? Are there audits?
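[Editor's aside] Lin's point that TLS is really two protocols, a handshake plus a record protocol that must reuse the negotiated keys, can be made concrete with Go's standard crypto/tls. This is purely an illustrative sketch, not code from any of these meshes: it generates a throwaway self-signed certificate, requires client certificates on the server side (mutual TLS), and exchanges bytes that the record layer encrypts with keys derived from that handshake.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"io"
	"math/big"
	"time"
)

// selfSigned builds a throwaway certificate that acts as its own CA,
// so both peers can present and verify the same identity.
// (Error handling elided; these calls do not fail in practice.)
func selfSigned() (tls.Certificate, *x509.CertPool) {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "demo"},
		DNSNames:              []string{"localhost"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(time.Hour),
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	leaf, _ := x509.ParseCertificate(der)
	pool := x509.NewCertPool()
	pool.AddCert(leaf)
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key, Leaf: leaf}, pool
}

// pingPong runs a one-shot mTLS exchange: the handshake authenticates
// both sides, then the record protocol encrypts the application bytes
// with keys derived from that handshake.
func pingPong() string {
	cert, pool := selfSigned()

	// Handshake policy: the server refuses clients without a valid cert.
	ln, err := tls.Listen("tcp", "127.0.0.1:0", &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientAuth:   tls.RequireAndVerifyClientCert,
		ClientCAs:    pool,
	})
	if err != nil {
		panic(err)
	}
	go func() {
		c, err := ln.Accept()
		if err != nil {
			return
		}
		defer c.Close()
		buf := make([]byte, 5)
		io.ReadFull(c, buf) // record protocol: bytes arrive decrypted
		c.Write([]byte("pong!"))
	}()

	c, err := tls.Dial("tcp", ln.Addr().String(), &tls.Config{
		Certificates: []tls.Certificate{cert}, // client identity for mutual auth
		RootCAs:      pool,
		ServerName:   "localhost",
	})
	if err != nil {
		panic(err)
	}
	defer c.Close()
	c.Write([]byte("ping!")) // encrypted by the record protocol on the wire
	buf := make([]byte, 5)
	if _, err := io.ReadFull(c, buf); err != nil {
		panic(err)
	}
	return string(buf)
}

func main() {
	fmt.Println(pingPong()) // prints: pong!
}
```

Note how one tls.Config drives both halves: the handshake authenticates the peers, and every later Read and Write runs through the record layer with the negotiated keys, which is exactly the coupling being debated above.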
What is the extent of this being deployed in production, tested against adversaries, right? And the great thing about that is I don't even have to say that, because that's exactly what I was just gonna say. Well, we're not introducing any new form of crypto. We're betting on IPsec, which has been in use for many, many, many years. We're not actually using some modified version of TLS. We're simply making a different split: instead of offloading the encryption part to hardware, we're offloading that into the kernel. That also exists as kTLS. If you're using OpenSSL for your SSL/TLS implementation, the encryption part gets implemented and offloaded into the kernel. I'm really not understanding the argument here. I think you're trying to somehow picture this in a way that we're inventing some new way of doing crypto. That's not true. Simply not accurate. I think we're saying that because we've seen stuff coming out of Isovalent implying that you're inventing new stuff. And so that's what we're getting at. If you're actually saying that what you're doing is using exactly the same mechanisms that people have used for TLS offload in the kernel, that's different from what I think I've heard from Isovalent. And that's really interesting, actually, because this is going to sound incredibly snarky, but sincerely, that's something I did not know. And that is very interesting. So what I said is we're using the same concept as offloading. And as soon as you start talking about the difference between concepts and implementations in cryptography, then, yeah, I'm going to start asking lots and lots of questions. Yeah, and we would welcome everybody. And let me be clear: we want to welcome everybody to have a very, very close look, right? I think if you want to use the existing way of also sending data over the TLS connection, go and use it. You can also do that with Cilium.
We're offering another choice, where we would welcome everybody and even encourage everybody to have a very, very close look and actually work with us to validate the model, because it clearly has benefits. But we're also all very interested in it actually being as secure or more secure than what currently exists. Two-minute warning on this question, then we'll keep moving. Continue. That is the reason why we're pushing back on it, though, right? It's not just going, hey, we don't like Thomas, let's pick on him. I mean, OK, picking on him is fun. But no, you're saying on the one hand, oh, yeah, we're not doing anything new. But at the same time, you're also saying, no, wait, we're doing things that are new, or at least that's how it comes across. And so we end up pushing back on that, going, in crypto, new is scary. It's the antithesis of boring. And so, yeah, you're going to get a lot of pushback on that. That's great. I mean, I remember being on a panel much like this in Valencia, being one of the only ones talking about sidecarless. A couple of months later, we had several service meshes launching sidecarless. We were out there talking eBPF. A couple of months later, everybody's pushing eBPF. So yeah, push back on me. But I think what I would encourage is: look at the model and give us feedback. We think it's actually a super valuable idea. But we'd also want everybody to have a very close look. To be fair, the sidecarless we have is a totally different architecture. We're going to get to sidecarless, I promise. We're going to get to sidecarless. Any last responses on this question before we move forward? We've got about 45 seconds. Well, I guess I just want to echo what John and Flynn were saying. How many of you actually use a browser and shop at eBay or Amazon? You trust TLS. For security, you want to pick what's best, what's been tested, to protect your credit card.
Those are the tools you want to use to be compliant, not only on the initial handshake, but actually on the transport, on the actual communication as well, reusing that encryption key. It's super important. So Thomas, I want to give you the chance to say the last word on this question. No, I actually agree. I would absolutely agree with Lin. TLS is a fantastic protocol. That's why we use it for the handshake. I also completely agree with the point that the secret negotiated in the handshake should be used for encryption, which is what we're working on right now. We simply believe that limiting the transport to the TCP connection, to the TLS connection, is limiting. And to answer that, we've created a variant where we can encrypt SCTP, UDP, multicast, and so on, to meet enterprise requirements out there. We're also not the only ones doing this sort of split. You can Google ALTS, and you will find a paper on how Google does this internally on Borg. It's not exactly the same as how we do it, but it's a very similar concept of splitting the handshake and encryption parts. So we didn't even invent this idea. It's not a completely new idea, but I think it's an interesting idea for the cloud-native space. You think I'm an expert at everything that goes on at Google? I think the critical part of that was your point that, oh, yeah, you can Google it. You can find this paper. It's not actually the same as what we're doing, but you can find out about the concept. Well, the point is we're not the first ones to split the authentication and encryption parts. That's my only statement. Borg is literally doing this for a very similar reason to what we're doing. It's not exactly the same, so we're not doing ALTS. We're also not making exactly the same assumptions, but it's not a crazy idea to split authentication and transport. That's my only point. That is true, but it is quite different. And at the time when Google introduced that, TLS was quite a bit different.
From Google's standpoint, I work on Google Cloud. We are pushing mutual TLS to our customers for a good reason, not ALTS, because we believe it's the best for our customers. Yup. So, gotta cut this question quick for time, but hey, I love open source, don't you? What a great conversation, what a great ability there is to have this conversation in the open, debating different ideas. This is what open source is all about. This is fantastic. So to close out, I think I heard you say that Cilium, for the record, is working on authentication and transport within TLS. Did I hear that correctly? Yes, absolutely. So you can already use TLS to secure your connections today. For many years: TLS origination, termination. We're simply introducing a new form which has benefits, and that's what we've been talking about here as well. And you can, of course, do IPsec and WireGuard for the entire encryption. Supported for many years. So it's not only the new way that you can use; there are many established ways that we offer as well. Yeah, I thought you said earlier that it was just using the keys negotiated in the handshake. Is it standard TLS? Because I would love to see something like that in Cilium. Can you repeat that? Yeah, you said in response to Keith that you were working on doing, I forget the exact wording used, but mutual TLS, basically. But I had thought earlier that this was still doing WireGuard or IPsec, but using keys negotiated. So: we have standard TLS origination and TLS termination, standard Envoy. Exactly that, nothing specific, nothing custom. That's just doing normal TLS, and you have to go through the proxy. You have encryption with WireGuard and IPsec. No authentication on the service level. No SPIFFE, no SPIRE. Simply IPsec authentication with IKE and so on. And now the new method we added is mutual authentication with an mTLS handshake, SPIFFE and SPIRE for the certificates.
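[Editor's aside] For readers new to the terms flying around here: a SPIFFE ID is just a structured URI, spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;, and that is the identity attested during the mTLS handshake being discussed. A toy check in Go; the parseSPIFFE helper is invented for illustration (real deployments follow the SPIFFE specification and use SPIRE or the go-spiffe library, not this):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseSPIFFE does a toy validation of a SPIFFE ID of the form
// spiffe://<trust-domain>/<workload path>. Hypothetical helper,
// for illustration only; not from any mesh's codebase.
func parseSPIFFE(id string) (trustDomain, path string, err error) {
	u, err := url.Parse(id)
	if err != nil {
		return "", "", err
	}
	switch {
	case u.Scheme != "spiffe":
		return "", "", fmt.Errorf("scheme must be spiffe, got %q", u.Scheme)
	case u.Host == "":
		return "", "", fmt.Errorf("missing trust domain")
	case u.User != nil || u.Port() != "":
		return "", "", fmt.Errorf("userinfo and port are not allowed")
	}
	return u.Host, strings.TrimPrefix(u.Path, "/"), nil
}

func main() {
	td, path, err := parseSPIFFE("spiffe://cluster.local/ns/default/sa/bookinfo")
	fmt.Println(td, path, err) // cluster.local ns/default/sa/bookinfo <nil>
}
```

In a mesh, the trust domain typically maps to the cluster or organization, and the path encodes things like namespace and service account, which is what authorization policy then matches on.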
And then what we're working on right now is using the secret from that handshake to encrypt with IPsec. Thanks. We do have to move on for the sake of time, but again, thank you to the panelists for giving us such a great discussion. Love to be a part of this. So, next question. All right: eBPF is a Linux core technology that boasts increased performance and observability for production systems. Each project represented here has at the very least evaluated eBPF in a service mesh context. What kind of use cases, trade-offs, and successes has your mesh found with eBPF? And of course, I'm starting with Thomas. First of all, if you afterwards want to learn more about eBPF, we're actually launching the eBPF documentary at KubeCon tomorrow, which will give you, like, the founding story of eBPF all the way back to 2014. And I think this question is really easy for Cilium, because almost everything that Cilium does is done using eBPF. Cilium was essentially created by several of, like, the eBPF founders or creators, kernel developers who have been involved with eBPF early on. In general, all things we can do in eBPF, we do in eBPF, and then for certain things like rate limiting, layer seven rate limiting, retries, layer seven load balancing, we're using Envoy. Yeah, I can chime in on the Istio side. So I guess a year or two ago, we added eBPF support for sidecars to shortcut the connectivity between the Envoy sidecar and the application pod, and we've done a bunch of performance testing. And what we found out is that's about a five to ten percent performance improvement, which is not great, but it's better than nothing. And that's an ecosystem project in Istio; it's called Merbridge. Most recently, we also added eBPF support for the traffic redirection between the ztunnel, the zero-trust tunnel, and the application pod, also using eBPF, for ambient.
So we've done a lot of work with eBPF in the Istio community, but the acceleration we've seen is about five to ten percent improvement on latency. Yeah, like I said, Istio's very diverse. I have a much different perspective. Well, fairly similar, but if we consider, like, the sidecar case, we've looked a lot at eBPF, right? And we use iptables, if you're not familiar, in our sidecars. There's lots of documentation and blogs about how iptables is super slow and it scales terribly, right? But we use iptables differently than Kubernetes does, right? We have a static set of rules. There are like five rules. The issues with scaling iptables that Cilium solves, and does a great job solving, I think don't actually apply to Istio. So while you might be able to shave off a little bit of performance with eBPF, you'd give up the visibility of just using the standard Linux kernel networking tools. I mean, of course, eBPF is part of the Linux kernel, but it's not something that, like, netstat understands, right? It's not something all these tools, tcpdump, developers, even just being aware of these tools and whatnot. So for us, it's just not really worth the trade-offs, because we wouldn't use it for that much, right? Flynn, I'm thinking and remembering some blog posts of some nature about Linkerd and eBPF. Why don't you illuminate us? Yeah, Linkerd thinks eBPF is a load of crap. All right, I'm being facetious. A better way to put that is Linkerd tends to see eBPF as a thing that is brilliant at layers two and three, maybe as high as layer four, not so much at layer seven. And basically everything that Linkerd sees people do, a huge amount of it is at layer seven, not down at four and below. And this sharply limits the utility of what eBPF can really do for us. If you take Linkerd and you run it on top of Cilium CNI, it works delightfully. And we're quite happy with that. We think Cilium CNI works out really nicely.
I mean, to the extent that any of the CNIs work really nicely, given the nature of CNI in Kubernetes, where there are a lot of edge cases and things like that. But yeah, Cilium CNI is great. eBPF will become really interesting to us, I think, at the point that it becomes capable of doing things like HTTP/2, which is a very, very long way away, if it's ever possible for eBPF to do something like that. Because a bunch of the protocols that you see at layer seven are vastly more complex than I think we want to see the eBPF verifier allowing. So that's our actual take. Can I ask, why is that? Like, why do you say that? You've presumably looked at the protocol details of HTTP/2. We've written an HTTP parser, an HTTP/2 parser, in eBPF, and it gives amazing benefits if you can, without a proxy, get OpenTelemetry data, essentially zero overhead. So that's a fantastic story. And every time we demo that and show that, it's like, great, why do I not want that? That's great. So I'm asking you, why would we not want that? That's the question. Yeah, I'm gonna have to take a look at that, because my guess is that it has a number of sharp limitations that the higher-level things operating in user space don't have. Or you're just making random accusations. I'm stating my expectations and telling you that I need to look at the code. I don't think you can, because, correct me if I'm wrong, it's part of Cilium Enterprise, right? No, it's not. Do you have an HTTP parser in Cilium open source? Absolutely. But I'd be very interested in looking at that because... I as well, yeah. I actually looked for it and didn't find it, so... I'm glad to hear that. For those of you who are not familiar with eBPF, on that front: eBPF is taking user code and running it in the kernel, which is historically an incredibly dangerous prospect.
So eBPF has a verifier that works very hard to try to decide whether the code you're handing it is going to work, or whether it's going to, I don't know, rip your kernel apart and melt it down. The verifier tends to be very conservative. If it can't figure out that something is known to be good, it says, nope, not gonna do this. So it does not allow things like unbounded loops, for example. It has a really interesting time allowing things like, you know, state that is long-lived across an entire connection, things like that. And these are things that you end up needing to do, or things that end up being very convenient to do, I should say, for working with a lot of the layer seven protocols. So, A, yeah, I'd love to see the code. That would be deeply fascinating, and I will absolutely take a look at it. B, well, yeah, based on my experience with looking at this stuff, I really wonder what corners have to be cut to make that work. If the answer is really none of them, then I'm gonna be really surprised, and I'll stand up in Paris and say, hey, I've been surprised, but Isovalent, they did this awesome thing. Yeah, is that what you call it? Can I just add something quickly? I think I agree with what Thomas was saying, right? eBPF is not great with retries, timeouts, some of the traffic functions that the ecosystem is still relying on Envoy to solve, but for layer seven, for telemetry, I think eBPF could potentially be very, very useful here. I remember there was a project called, I think it's called Pixie, that does eBPF-based observability, which is a great project. I've seen some demos around it on their website, so I haven't really poked at the code as you'd wish to, but at least it shows that with eBPF there's very good potential for telemetry out there. There are ways I can imagine, so first off, yeah, totally willing to be proven wrong, but yeah, there are certainly ways I can imagine it being easier to fit in with telemetry than some of the other functionality as well.
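[Editor's aside] To make the verifier discussion concrete: eBPF programs must prove to the kernel's verifier that every loop terminates and every memory access stays in bounds, which pushes parsers into a fixed-bound style. Here is that style rendered in Go, purely as an illustration; real eBPF parsers are written in restricted C and checked by the kernel verifier, and this parseMethod helper is invented for the example:

```go
package main

import "fmt"

const maxMethodLen = 8 // longest common HTTP method ("OPTIONS" = 7) plus slack

// parseMethod extracts the HTTP method from a request line the way a
// verifier-friendly parser must: fixed iteration bound, every index
// checked, no allocation beyond the result.
func parseMethod(buf []byte) (string, bool) {
	for i := 0; i < maxMethodLen && i < len(buf); i++ { // bounded loop
		if buf[i] == ' ' {
			return string(buf[:i]), true
		}
		if buf[i] < 'A' || buf[i] > 'Z' {
			return "", false // methods are uppercase ASCII
		}
	}
	return "", false // no space within the bound: give up, don't keep looping
}

func main() {
	m, ok := parseMethod([]byte("GET /healthz HTTP/1.1\r\n"))
	fmt.Println(m, ok) // GET true
}
```

The fixed iteration cap is the point: a parser like this can always be proven to terminate, which is why simple L7 telemetry is plausible in eBPF, while full protocol handling with long-lived per-connection state is much harder.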
I do have to cut us off; we are running quickly out of time. Okay, fine, go ahead. 30 seconds. I honestly actually agree a lot with Thomas that even if, and I don't know what the limits are, even if it's limited, to be able to give some telemetry, some HTTP knowledge at the node level with lower cost, it's kind of compelling. That would be really cool. I think then having the option to upgrade that to a full L7, Envoy or Linkerd, whatever, is powerful as well, but I've started recently to think more about how we need to have low cost, and then a little bit of cost, and a little more cost as you build on features. Not, like, I have nothing or I have the full service mesh and I have to consume everything. So having something that's kind of a step in between can be quite nice. I don't know how good it is, but it sounds like it could be something that's quite powerful. We tend to just try to make it so the whole thing is low enough cost that you can use it all the time. That's fair. I just find it super funny that, like, as one of the creators of eBPF, I'm getting told what's possible and not possible in the technology. Yeah, but I dealt with BPF before eBPF was around, so. Okay, last question, because of time constraints. So: sidecar, sidecarless, ambient. There are many, many terms, and it's so hard to keep them all straight. Can you clear things up? This is gonna go to Lin. So can you clear things up for the audience and tell us what these architectures are? What are your meshes, your projects, currently employing, and what are the strengths and weaknesses? Any of those are places you can dive. Yeah, so how many of you were at KubeCon 2017 in Austin? Remember, it snowed, right? So back then, Linkerd had 1.0, and I think it's called Conduit, what you guys launched at that show. So, Conduit was the name of Linkerd 2 before it was Linkerd 2; Linkerd 1 was a totally different thing.
Okay, but I was referring to the architecture without a sidecar for service mesh, right? So Linkerd had that evolution. Well, Istio, I was also giving a talk at that KubeCon, talking about Istio with the sidecar architecture. So just to give credit: you guys were running a node proxy in 2017 in Linkerd. And Istio was running sidecars in 2017, which was also when Istio was launched, right? And I believe in 2021, Cilium came back to say, we also have a sidecarless service mesh, right? Without sidecars, using a node proxy. So in 2022, in the Istio community, we started this new data plane mode called ambient. What's really unique about ambient is we kind of slice the layer seven processing layer, all the functions we talk about, traffic shifting, rich authorization policy, layer seven observability, into a separate layer from the secure overlay layer. So what we are proposing with ambient is you can run a low-level proxy for your layer four functionality that can do mutual TLS and simple authorization policy. But for layer seven, you have your dedicated layer seven proxy, based on the tenancy scope that you feel comfortable with, whether it's namespace or whether it's service account. So that's kind of the whole evolution of service mesh, if I can summarize, from 2017. Lin giving us some great history on some of these terms. John, you wanna give us your... Oh, sure... Go ahead, Thomas. You can go first. Okay, well, I was gonna say that, you know, we're all from different projects and stuff, so it's easy to talk about the differences. But I actually think that we can work together quite well. Like in ambient, it's not just sidecars or no sidecars, right? There's a node component that is required to do things securely so that we can get traffic to what we call the waypoint proxies, which are really just a general load balancer that can implement any functionality that an HTTP load balancer can do.
We have retries, telemetry, timeouts, whatever. But we still need to get the traffic there encrypted and maintain the properties of our secure network. That can layer very well with a lower-level project like Cilium or another CNI. Like, I would love for Cilium to be that layer and integrate with Istio waypoints, for example. I really don't wanna be writing eBPF code or doing all this low-level networking stuff that you guys have spent years getting right. It's not fun, it's really hard. You guys already do a good job at it. I would love it if the mesh could focus on the higher-level stuff on top, the HTTP functionality, and more seamlessly integrate so you don't have to choose one or the other. Yeah, I think we're super interested in that. In fact, we have had an Istio integration for many years, where for Cilium layer seven policies, we can enforce them in the Istio sidecar. It's been used as well. So yes, we're definitely super open to that. Sounds like some collaboration coming. Check in in Paris, see what happens. Flynn, so as Linkerd, as Lin mentioned, y'all were some of the first people to go with a node proxy, and now you're doing sidecars. Have you re-evaluated potentially going back? What is Linkerd's perspective on these different architectures and topologies? So, Linkerd 1.0 was a node proxy, and since we have two and a half minutes left, I'll cut that description really short and say Linkerd 2 does not use node proxies because over time, talking to people who were using it, we realized that there are a lot of operational hassles with that. One of the really nice things about sidecars is that since they're coupled with the pods, all of the operational things that people are used to doing with pods just work, and you don't have things like, you know, the node proxy goes down and takes some random section of your pods with it, and suddenly you can no longer talk to them.
If a sidecar goes down, it interrupts communications with one pod; you restart the pod and you're good to go again. So, operational simplicity is a reason why we stuck with sidecars. I'm also gonna point out, I was the original author of Emissary Ingress, and Emissary Ingress is based on Envoy, so I know lots and lots about the experience of working with Envoy, and the idea of running thousands of Envoys in a cluster scares the bejesus out of me, because Envoy is many, many things, but it is not known for being lightweight in terms of resource consumption. So, one of the other things about Linkerd 2 that I think is worth pointing out, and Linkerd 2 is the thing that everybody here is now thinking of as Linkerd, to be clear, is that deliberately designing the proxy to be small and lightweight makes it much more reasonable to stick with sidecars. There are a lot of ways that I personally tend to feel like a lot of the push towards the sidecarless stuff is really about trying to deal with the resource load of running lots and lots of Envoys. John and I in particular have had lots of discussions about how that is not the entire story, but I feel like it's still part of the story, at least. Yeah, so we're right about out of time. Thomas, do you want to take us home and talk to us about what caused Cilium to look into sidecarless and node proxies? What kind of things are you seeing around operational cost or complexity? Yeah, so first of all, I think many Cilium users are using mesh features, and have been using mesh features, without any proxy at all, whether it's mesh features like multi-cluster routing or encryption, such as encrypting all of the traffic. We've been able to do that without any proxies, so the question of whether sidecar or per-node proxy actually never popped up.
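Flynn's failure-domain argument can be put in toy-model form: when one proxy instance crashes, how many pods lose their traffic? The pods-per-node number below is purely illustrative, not from the panel.

```python
# Toy model of the failure-domain (blast radius) argument from the panel:
# a crashed per-node proxy breaks traffic for every pod on its node,
# while a crashed sidecar affects only the one pod it is coupled to.

PODS_PER_NODE = 30  # illustrative assumption

def blast_radius(architecture: str) -> int:
    """Pods whose traffic breaks when one proxy instance crashes."""
    if architecture == "node-proxy":
        return PODS_PER_NODE  # every pod sharing that node's proxy
    if architecture == "sidecar":
        return 1              # just the pod the sidecar lives in
    raise ValueError(f"unknown architecture: {architecture}")

print(blast_radius("node-proxy"))  # 30
print(blast_radius("sidecar"))     # 1
```

The trade-off the panel circles around is exactly this: the sidecar buys a small, pod-scoped failure domain at the cost of running one proxy per pod, which is why proxy resource footprint matters so much to the choice.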
But we clearly also had Cilium users that were using Istio, for example, very happy from a functionality and feature perspective, but then said, well, it's really hard to actually manage thousands and thousands of sidecars, the life cycle of that and the overhead of that and so on. I think that's also what the Istio team has been hearing a little bit and what led to ambient mesh, right? So I think we've been seeing a similar signal: the concept is great, but maybe there should be an alternative, and maybe you should have a choice, and you can run a sidecar or a per-node proxy or offload a lot of it into the kernel. So I think my opinion doesn't really matter. I think what really matters is the opinion of all of you, so give us feedback: what's the model that actually gives you what you want and gives you the operational experience and the operational behavior you want? Tell us. We shouldn't be out here preaching what is right; you should be telling us what you all want to use. That's a fantastic thing to end on, because like we said earlier, even though there are different projects represented here, we are all in the CNCF, all graduated projects, and one thing we can all agree on is the importance of user feedback. So with that in mind, we've got some user surveys for y'all from Istio, Linkerd, and Cilium. I'm sure all of us would love to get some feedback from y'all about what works and what doesn't, so take a moment and get a picture of those QR codes. And the last thing I want to say is thank you again for being here at this late hour after a long conference, we really appreciate it, and you, the users, are what make our projects go. So thank you for all the things that you do, and I want to give a big round of applause, please, to these amazing panelists. It's been a fantastic evening. Thank you. All right, have a great night, everybody.