Hello, hello. Mr. Bell. Josh, were you on last time? If I recall, you were giving us really good feedback about SMP. If that's the case, Josh, you clearly didn't learn your lesson, because here you are again, much to my amazement. We'll have to harass Josh later. OK. Hey, welcome, everybody. I've put a link to the meeting minutes in the chat. If you haven't recorded your name in the agenda under attendees, please do. The business reason for that is so that I can, at least for my part, go back in history and see if I'm harassing the right person. Anyway, Ken, I hear you're having internet issues in your area? Good afternoon, Kenny. It sounds like it. I am. Very good. OK, well, listen, no time for a bunch of corny jokes from me like usual. There's a full agenda today, with a number of different topics to go through, so let's dive in. Ken Owens is here with me co-chairing the CNCF SIG Network. For those that might not be familiar, this time slot is used for both SIG Network and the Service Mesh Working Group and its initiatives; there are about four. SIG Network's topics seem to come in fits and spurts, so instead of creating a separate time, we've been using this slot for the Service Mesh Working Group as well. Those topics are somewhat light today, which works out well because we've got a couple of presentations. Without further ado, given the time: an update on GetNighthawk. Abhishek, do you want to brief us on what's transpired since two weeks ago? Oh, yes, sure. Hello, everybody. I'll be giving the update on Nighthawk. Let me share my screen. I hope my doc is visible. I'll share my whole screen so that it is. Cool.
So last time we discussed publishing build artifacts of the individual binaries as well as Docker images, and setting up a CI action to do so. We've come up with a prototype, which can be found in this repository, GetNighthawk, in which I define a set of custom GitHub Actions that do the job. Right now it publishes Ubuntu binaries, the different binaries of the Nighthawk project itself, like the client, server, and test-server binaries. I've done just Ubuntu for now, for testing purposes; it builds fine and publishes these binaries as part of the release artifacts. But I do have a couple of questions for the Nighthawk maintainers, if anyone is on. The first thing I want to know is whether there's a Dockerfile which... actually, before that, is there anyone here from the Nighthawk project? Sunku from Intel said he has a conflict today, so he's not here. And Otto of Red Hat was positive on the progress, but I don't see him on the call either. All right, in that case I'll save it for a private conversation with them. The next point I wanted to discuss is how we're going to automate this process. Right now we trigger the action manually, passing in a couple of parameters: which version to release, which architecture, et cetera. So how we plan to automate this is the next discussion point for the project, I suppose. Does anyone have input on this? Abhishek, for my part, I missed the question. What I was asking is: right now we have the workflow set up for publishing all these binary artifacts and images of the Nighthawk builds, but currently we trigger them manually with input parameters, which version of Nighthawk to release and all those details. So how are we planning to automate this? Got it. Yeah, that's a great question.
So one option is that we define a trigger in the Nighthawk project's repository itself. Is that a better way? I just want some input from you all. I think that's a great suggestion; we should raise an issue in the Nighthawk repo. Sure, yeah, let me take a note. OK, while you're taking that note: the other update on the project was with respect to logo selection. In the March 4th meeting minutes there's a link to a set of logos that were drafted and made available for people to vote on, and I'm pasting that into this week's meeting minutes. I haven't done an explicit tally, but by the looks of it, draft logo number eight is winning out. There aren't many other Nighthawk representatives on today's call, but anyone who's on is welcome to vote on which logo they think befits the project. Please do; maybe just make a comment in the meeting minutes here. What's your favorite? What do you think is most befitting? Abhishek, any other updates? No, I think that's pretty much it. I mainly wanted input, but since we're missing the Nighthawk maintainers I'll save my questions for later. All right, two other items we've been tracking in the Service Mesh Working Group. One is Service Mesh Performance, which is congealing, and it's an appropriate time to propose it for the sandbox. Ken, Sunku, and myself, and all the rest of you are welcome as well, have drafted this sandbox proposal. Fairly short and sweet; these are the questions that come from the sandbox proposal form. So jump in and assist; otherwise this will probably be submitted tomorrow. Along with that, Meshery, as the canonical implementation of SMP and as the SMI conformance tool, will probably be submitted alongside. That's been a long time coming, and it's probably most appropriate to submit it alongside Service Mesh Performance.
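One way to wire up the in-repo trigger being discussed, purely a sketch since the actual GetNighthawk workflow layout and build scripts may differ, is to fire the release workflow on a version tag push instead of a manual `workflow_dispatch`, deriving the version from the tag name. The `publish-release-artifacts` name and the `ci/do_ci.sh` step here are illustrative placeholders, not the project's confirmed files:

```yaml
# Hypothetical release workflow: runs when a tag like v1.2.3 is pushed,
# instead of being triggered manually with input parameters.
name: publish-release-artifacts
on:
  push:
    tags:
      - 'v*'
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Derive version from the tag
        run: echo "VERSION=${GITHUB_REF#refs/tags/}" >> "$GITHUB_ENV"
      - name: Build and publish binaries
        # Placeholder for the project's existing build/publish steps,
        # now parameterized by the tag-derived $VERSION.
        run: ./ci/do_ci.sh release "$VERSION"
```

Cutting a release then becomes just pushing a tag, with no manual parameters to fill in.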
And there isn't a draft proposal for that yet, so it needs to come together pretty quickly. OK, so for Service Mesh Working Group topics, I think that's the end of it. Any comments or questions? All right, SIG Network: just a quick reminder that Emissary-ingress, the project formerly known as Ambassador, is still out for public review, last time I checked anyway. So they'll certainly appreciate your feedback and your support. Give it a plus one, or a minus one if that's appropriate. For the remainder of our topics, two presentations. Submariner is up first; they're thinking in and around a donation. The next presentation is from Linkerd, which is currently at the incubation stage, is headed toward graduation, and wants to get some eyeballs and review on that. So with that, over to the Submariner team. Welcome, folks. Who do we hand off to? Hey, Lee. I can start it off and then I'll hand off to some different team members as we go. Great, so we have our slides there. Miguel is going to be giving most of the presentation; he's one of our Red Hatters working on Submariner. And Saki is a very active user who's helping us a lot to figure things out. We have five Submariner people on the call, so if you have any questions, we should be able to answer them. I just wanted to tee up by mentioning that we're planning on donating Submariner to the CNCF, so we would love feedback throughout the presentation, or asynchronously later if you think of something else. In the interest of time, I'll go ahead and hand off to Miguel, and then Stephen for a demo, and then we'll come back for questions. OK, sorry, I was trying to unmute; I'm not very used to this. OK, so Nani, do you want to start with the Submariner donation slide? Yeah, I mentioned it briefly, but I'll say a few more words, I guess.
We've been working for quite a long time to prepare Submariner to be donated to the CNCF: preparing our developer infrastructure to scale, making the user experience nicer, and getting all our ducks in a row on intellectual property and licensing and all that. We think we're in quite good shape there. We have a document linked in both the slide deck and the agenda, and I've put it in chat. It's the same one Lee was just showing; it copies the questions from the Google form along with our answers. You're welcome to comment there, too. This is one of our last stops, we hope, before submitting the donation. OK, so let me explain what Submariner is for the people on the call. The idea of Submariner is enabling direct network connectivity, layer-3 IP packets, between the pods and services of Kubernetes clusters. It works by exchanging a set of custom resources in a Kubernetes datastore; I'll explain a little more about that. You have the link to the website if you want more details on the architecture, how it works, and how it can be installed. We have quick starts for different clouds, different Kubernetes flavors, and different network plugins. You can deploy Submariner in different ways: we have an operator, we have Helm charts, and we have a command-line tool that really helps with onboarding clusters into a cluster set, and with looking into the details of how the connectivity is working, or troubleshooting if there are issues. Common use cases for this type of connectivity are application availability, disaster recovery, and data-residency guidelines, where your data needs to live in specific locations, plus many other use cases. It's really similar to service meshes, but more simplified in terms of how packets are handled.
The idea is that packets from the pods, when they talk to each other or to services in other clusters, are always handled in the Linux kernel. They don't go through any userland application that has to process the packet, with different latency behavior compared to the kernel. So the idea is that you maximize throughput and minimize latency and jitter. This is a simplified picture: if you have two clusters, your pods will be able to talk to each other, and they'll be able to discover each other via the standard APIs that have been defined in the Kubernetes Multi-Cluster SIG. And we'll be working on network policies, because that becomes increasingly important; we have a plan and a proof of concept for that part, but it will be developed in upcoming versions of Submariner. I think I've briefly explained the benefits of Submariner, and covered most of what we have here. The idea is that we try to be as agnostic as we can to the flavor of Kubernetes, which can be hard, because we try to do everything in the kernel, and also agnostic to the network plugin you're using. Again, for some network plugins we need to develop specific integrations, as we already had to do for some of them. You can deploy services across multiple clusters, load-balance between them, and discover them using the standard APIs defined in the Kubernetes Multi-Cluster SIG. The traffic between clusters is encrypted. We have an architecture that allows different, what we call, cable drivers: IPsec by default, but we also have one for WireGuard. And in some cases it can be desirable not to have encryption, maybe because your clusters are private and you need to maximize throughput; for those cases we're working on providing an unencrypted cable driver. So, the high-level architecture of Submariner is this.
We use what we call a broker to exchange information about the participating clusters, about the services that have been exported to other clusters, and about endpoints, which carry the information about how to reach a specific cluster. Each cluster needs at least one gateway, which is just one of your Kubernetes nodes that you mark with a Submariner gateway label; that node becomes your gateway. You can have multiple ones. Currently we do active/passive failover, with between three and ten seconds of failover time. The gateways become the connectivity point for other clusters. Please feel free to stop me if you have any questions, or if something doesn't make sense. Quick question, if I could; I probably missed it, I was busy chatting. The broker: is it a masterless broker, or is it headless? Or is this a single broker? Yeah, currently it's a single broker. The idea is that the broker is expected to be highly available in the current design. So yes, it becomes a single point of failure, that's correct, which is why it's expected to be deployed highly available. The only thing you need from the broker, at least today, is the ability to connect to its Kubernetes API; that's used to exchange custom resources, like clusters, endpoints, and exported and imported services. In the future we want to enhance this design to support multiple brokers so you can fail over: even in the case where your broker is supposed to be highly available, if something goes wrong, all the clusters can move to a different broker. Also, we've made the design so that even if the broker goes down, connectivity survives; everything from the broker is replicated on all the participating clusters. So if the broker goes down, they still have all the information they need to maintain connectivity and to know about the services that were exported on the other clusters.
So they won't be able to learn about new services, but they can keep working with what they have. There's some level of resiliency here, though we want to improve it. It's a good question. Also, on the slide you say you label an individual node to be a gateway engine. Does that dedicate the node to only being a gateway, or can I run other workloads and application pods on that node? You can decide. You can have a dedicated node; it's a standard Kubernetes node, so it can be dedicated if you configure taints and tolerations to allow nothing but the Submariner workloads, or you can use any regular node. OK, thanks. Yeah. And you can also have multiple gateways labeled, which allows fast failover if your gateway fails for whatever reason. That's actually recommended, and ideally you'd have them in different availability zones. OK. I think we've described this. Oh, no, OK. So yeah, there's no impact to intra-cluster traffic: any intra-cluster traffic is not handled by Submariner and follows its normal path. Traffic destined for other clusters goes through the gateway, and the idea is that we always preserve the source IP. Things get complicated if your clusters have overlapping CIDRs, for example for pods or services, so we have a special mode, which we need to iterate on in a new version but which is working already, that we call GlobalNet. The idea of GlobalNet is that we run a sort of supercluster IPAM that assigns IPs from a supercluster IP address space, so workloads can communicate with other clusters, be recognized, and have their own IP address. That works, but we've had a lot of feedback and we want to improve it. So that's the data plane. Then we have the service discovery part, which is very important.
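The gateway labeling described above is done with an ordinary node label; a minimal sketch, where the node names are placeholders for your own:

```shell
# Mark a node as a Submariner gateway ("node-a1" is a placeholder name).
kubectl label node node-a1 submariner.io/gateway=true

# Label a second node, ideally in another availability zone,
# so active/passive failover has somewhere to go.
kubectl label node node-a2 submariner.io/gateway=true
```

Removing the label (`kubectl label node node-a1 submariner.io/gateway-`) takes the node back out of the gateway pool.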
You need to be able to reach the services on other clusters, and you need a way of discovering the IPs of those services or pods. For that we use the Multi-Cluster Services API, which is now in alpha from the Kubernetes Multi-Cluster SIG. We have the concept of a cluster set: a cluster set is a group of clusters with a high degree of mutual trust, normally administered by the same people. And we assume that namespaces with the same name in different clusters belong to the same project; that's a base assumption of this multi-cluster services API. It means that if you export a service in a namespace in one cluster, and you export the same service in the same namespace in a different cluster, it's considered the same service: you can reach either cluster A or cluster B and it shouldn't matter. That's a foundation of the multi-cluster services API. In this API we have two core objects: the ServiceExport and the ServiceImport. A ServiceExport is something you create to declare, "OK, I want to export my service." When you do that, the service becomes available in the other clusters in this format. There are additional formats for headless services and StatefulSets, because there you need to address individual pods, but this is the simplest one. The ServiceImport is something you'll find in your cluster when another cluster has exported a service and your cluster has discovered it. It's part of the replication I explained at the start: if the broker goes down, you'll still have the ServiceImport, and you'll still be able to resolve and connect. This is the Lighthouse architecture we use: it's basically CoreDNS with a plugin that uses those ServiceImports to resolve the DNS requests.
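Per the MCS API described above, exporting a service is just a matter of creating a ServiceExport whose name and namespace match an existing Service. A minimal sketch, assuming a Service called `nginx-demo` already exists in the `default` namespace:

```shell
# Export an existing Service to the cluster set (MCS API, alpha).
cat <<EOF | kubectl apply -f -
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx-demo        # must match the Service's name
  namespace: default      # must match the Service's namespace
EOF

# Other clusters in the set should then see a corresponding ServiceImport:
kubectl get serviceimports -n default
```

Submariner's CLI can also create the ServiceExport for you (`subctl export service`), which is handy when you don't want to hand-write the YAML.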
So you need to introduce a hop in your kube-dns or existing CoreDNS to forward the clusterset.local service queries to our Lighthouse service; we handle that automatically in the operator. And this is our command-line install tool. This is basically the minimal install for two clusters: you deploy a broker in one cluster, and then you can join two clusters; you could even join the broker cluster itself if you wanted. You join those clusters with a generated file that allows subctl to create credentials for the new cluster, connect it to the broker, and deploy Submariner. So, we wanted to explain how we think Submariner fits into the multi-cluster ecosystem. First, we try to be network-plugin agnostic, so you can have one cluster with one network plugin and another cluster with a different one. So far we test with Flannel, with OpenShift SDN on the Red Hat side, and with OVN-Kubernetes, and we know some people are also using it with Calico. Those are the ones that have been tried so far; the GKE network plugin also works. Second, we've been working with the Kubernetes Multi-Cluster SIG to define those APIs. They're being implemented at Google, and we started implementing them for Submariner, trying to make something very agnostic; hopefully more people will implement this API. Another point is that existing service meshes could run on top of Submariner: the idea is that we provide the IP connectivity, and the service mesh wouldn't need to create any endpoints or connectivity itself, it can just use ours. And now it's time for a small demo; I'll stop sharing. Yeah, so I'll be doing the demo, I'm just looking for the right window. Right, so somebody was asking in the chat how the route agent is set up in the DaemonSets.
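The minimal two-cluster install described above looks roughly like this with subctl; a sketch, with the kubeconfig paths and cluster IDs made up for illustration:

```shell
# On the broker cluster: deploy the broker.
# This generates a broker-info.subm file with connection credentials.
subctl deploy-broker --kubeconfig kubeconfig-broker

# Join each participating cluster to the broker using the generated file.
subctl join broker-info.subm --kubeconfig kubeconfig-cluster-a --clusterid cluster-a
subctl join broker-info.subm --kubeconfig kubeconfig-cluster-b --clusterid cluster-b

# Verify the inter-cluster connections came up.
subctl show connections
```

Joining deploys the Submariner operator and data-plane components into each cluster and registers it with the broker.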
So in the demo we've got, let me show you, three clusters set up: one running on GKE and two running OpenShift on AWS. Miguel mentioned that the broker uses a number of CRDs to do its work, and basically that's all it is. There's no Submariner code running in the broker cluster; it's just data storage. The first CRD is Clusters, which lists all the clusters that have been joined into the cluster set. Once OpenShift loads the information, we'll be able to see. There, we have three: like I said, GKE, OCP-A, and OCP-B. And on those clusters we've set up, well, Miguel set up, Rocket.Chat instances. So if you want to go and play, I'll paste the links. Yeah, let me do it, unless you have them super ready. Right, OK, go for it. While Miguel's pasting the links in, I can continue showing more of the CRDs. That was Clusters, which gives the list of clusters; there's not all that much information in there. Then the different clusters connect using Endpoints, shown here, which give the actual IP addresses used to connect to the other clusters, the backend that's being used, and the subnets that are managed. That's really how the connectivity appears from the administrator's point of view. For services, they use the MCS CRDs. You'll actually see our own legacy ones in the list as well, but we've migrated over to the multi-cluster ones; see here, ServiceExport in two versions, Lighthouse and multicluster.x-k8s.io. This isn't the exporting cluster, so we won't see anything in ServiceExport; what we will see in ServiceImport is a MongoDB service that's been imported here. There we have it: the Rocket.Chat Mongo service, in the default namespace.
And this, like Miguel said, means that from all of the clusters that have imported the service, which is all of the clusters in the cluster set, you can look up rocketchat-mongo.default.svc.clusterset.local and you'll get one of the service instances accessible across the cluster set. I can demonstrate that quickly; I have a netshoot pod handy. Here, because this cluster isn't running the service itself, we'll get a service in one of the other clusters, but we do prefer the local cluster if we can. So if the service is running on multiple clusters in the cluster set and you query the service from a... oh, it's a headless service, this one, yeah. Okay, so I'll set up another one just so we can illustrate. I'll start nginx, give me a second. So like I was saying, we prefer the local cluster: if you have a service set up on multiple clusters, you'll get the local one back. If you query from a cluster that hasn't got the service at all locally, then you'll get... Stephen, if you have a different window, we don't see it; or is it just stuck? No, I'm just applying the nginx demo right now, so this should appear. We'll go back to the CRDs. So if the service is available across multiple clusters, you'll get it back in round-robin fashion. And this is a bit different from, for instance, the same MCS API on GKE, which relies on clusterset IPs, where the same IP address leads to different instances; we rely on DNS round-robin. That's perhaps the most significant difference between MCS API implementations, as far as I'm aware. Right, so we have nginx-demo available now. And if I go back to my pod... So you created that service on cluster B and you exported it, so it's now imported here? That's right, yeah.
So now if I do dig nginx-demo.default.svc.clusterset.local... right, this time it finds it, and it can even talk to it there. I should have tried that before deploying, so it was obvious I wasn't cheating, but this one here is on a different cluster and has been exported. I can also show it running on the same cluster, so I'll just get nginx set up here on cluster A as well. We just need to wait for it to come up. That's it: now we have it on the local cluster as well, and it's going to prefer that one always. So that's the deployment perspective, for administrator purposes. We also publish a number of metrics, and if you're running on OpenShift these get set up automatically. For example, we track the amount of traffic that goes over the various connections. Here I'm on OCP cluster A, connected to OCP cluster B and the GKE cluster. Because the MongoDB database is on GKE, I'd expect most of the traffic to be between OCP-A and GKE and not much to B, and that's what we see here. The blue line goes to GKE; you can see that in the labels here, remote cluster GKE. And there's very little going to B at all. We have a number of other metrics, like the number of gateways that are connected. That's just one here; this is the number of gateways set up on the local cluster. Also the number of connections, along with their state; here we're showing two, to the two different clusters. Then there's latency, which is an interesting one: we track the latency to each of the clusters a cluster is connected to. As you'd expect, the other AWS cluster has very low latency, and the GKE one, being further away, has higher latency, but they're both pretty stable.
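The lookup from the demo can be reproduced from any pod in a joined cluster. A sketch using the demo's nginx-demo service and a throwaway netshoot debug pod; the service and namespace names are the demo's, so substitute your own:

```shell
# Run a temporary debug pod with network tooling.
kubectl run tmp-netshoot --rm -it --image=nicolaka/netshoot -- bash

# Inside the pod: resolve the exported service across the cluster set.
dig +short nginx-demo.default.svc.clusterset.local

# And talk to it over the Submariner tunnel.
curl http://nginx-demo.default.svc.clusterset.local
```

If the service also runs locally, the local instance is preferred; otherwise Lighthouse answers with a remote cluster's instance, round-robin when several are available.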
We also track, if GlobalNet is enabled, which isn't the case here, some GlobalNet metrics: there's a big pool of addresses in use, and you obviously need to pay attention to how much of it is actually consumed, so we keep track of that and export it as a metric. And we also track the number of... ah, no, that's in the next version; we'll have service discovery metrics as well, but not yet. So I think that's about it for what I had to demonstrate. Perfect timing. We also have quite a few good questions in chat that I was just scrolling through. I think Lee has a list of three or four good ones queued up, and if we can't get to them all, my curiosity will wait. But pick one or two if you care to. Right, yeah. I like the question about incorporating it into Kubernetes; that's piqued my interest. I'm wondering what it would take, but that's probably a bigger discussion. Very good. And not a suggestive question per se, but... yeah, go ahead. Well, you take it, Miguel; I don't know if I can see all the questions. There was one about brownfield for GlobalNet: can we step into a brownfield deployment and set up GlobalNet for overlapping-CIDR support? Yes. By brownfield I take it you mean a bunch of clusters that already have overlapping CIDRs. If you know that from the start, you can set up Submariner with GlobalNet from the beginning and it will work fine. What we don't support yet is setting up Submariner without GlobalNet and then trying to join a cluster that has an overlapping CIDR; that won't work. We can't add GlobalNet post facto once Submariner's been set up, but it's easy enough to just redeploy Submariner when that happens. Good. There's a bunch of... this is great. Fantastic presentation, guys.
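As the answer above notes, GlobalNet has to be enabled from the start; the switch lives on the broker deploy. A sketch, with made-up kubeconfig names:

```shell
# Enable GlobalNet at broker-deploy time so overlapping pod/service
# CIDRs across clusters can be supported.
subctl deploy-broker --kubeconfig kubeconfig-broker --globalnet

# Clusters joined afterwards get non-overlapping global IPs allocated
# from the GlobalNet address pool.
subctl join broker-info.subm --kubeconfig kubeconfig-cluster-a --clusterid cluster-a
```

Retrofitting GlobalNet onto an existing non-GlobalNet deployment isn't supported, which is why the speakers suggest redeploying Submariner in that case.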
I've got questions that will last us the next hour. Hey, it's really fun sitting on this side of the table, pelting you with questions. No, no, that's great; questions give us information about what we're missing or where we're not doing a great job. So, the last question, and it's quite an important one, I think: does all intra-cluster, inter-node traffic transit the cable driver too? The answer is no. Intra-cluster traffic just uses the normal Kubernetes networking layer; it doesn't go through the gateway. Only inter-cluster traffic goes through the tunnels. What does a route-agent upgrade look like in terms of disruption to active inter-cluster communication by pods on the node? Sridhar, you might want to field that one, perhaps. Yeah, so basically the route agent runs as a DaemonSet, and it programs some routing rules and creates some VXLAN tunnel interfaces. When you're upgrading, depending on which version we're upgrading to, we generally don't modify the configuration unless it's required. So ideally we don't expect any disruption to the inter-cluster traffic, unless we're really modifying some configuration on the respective host. Yeah, and wherever we can, we try to leave the data plane configured and working. If you bring the pods down, or you're updating them to a newer version, the data plane remains in place while that's happening. So we don't expect disruption. We test for failovers, and we test by hammering the route agents; we have a pretty big set of end-to-end tests that we keep improving with new ideas. We don't test, for example, whether there's a small window with packet drops; I think that would be interesting to test. All right, let me just jump in.
I think we need to hand it off to Linkerd for their graduation proposal at this point. Thank you all for the wonderful questions, though. Thank you, guys; that was great. Mr. Morgan, and Linkerd, up for graduation. Yeah, thanks. Thanks for having me. So, yep, we're up for graduation. Linkerd was, I think, the fifth-ever project accepted into the CNCF, before there was even a sandbox phase, back when it was called inception. So I have a couple of slides I can run through, giving an overview of the project and its adoption, but honestly, I'm also here to just answer questions. Part of the graduation process is having CNCF SIG Network review the proposal, so if there's anything I can provide that would be helpful for those purposes, here I am, ready to provide it, and obviously offline as well. So do you want me to do a quick overview, or is there anything you want to dive into specifically? I'll throw that to you. And this might be an offline one; it's half comment, half question, and again, it's really easy to sit on this side of the table and ask. Or, let me start by saying... oh, William. Uh-oh. What have I done? Yeah. Let me just make a statement and say kudos on the establishment of a steering committee, just as the project matures in functionality, matures in governance, matures in adoption, being used in ways you didn't imagine, I suspect. What a self-directed, self-initiated, healthy step. Thanks for having me at the first one, yeah. And then I'll follow up with other comments. Boy, I've made that sound super ominous, and it's not.
And so, yeah, William, if you'd take us through a couple of slides, that'd be great. Sure. Let's see, can you see a giant Linkerd logo somewhere? Yes. All right, that's good, because I cannot; it's disappeared from my view, hold on. Okay, there we go. All right, so I'll just give you a very brief rundown. There's a lot to say, but Linkerd is a service mesh with a very strong focus on being light, fast, and security-centric. At this point it's been in production for over four years at companies around the world, and we've gone through a bunch of iterations of the project internally. We have a very healthy community, primarily in the Slack channel, a whole lot of GitHub stars and things like that, and over 200 contributors; I just counted. We do near-weekly edge releases, trying to get code in front of early adopters as rapidly as possible. And of course, we have open governance and a neutral home in the CNCF. These are some of the logos of companies currently using Linkerd. Some of them I know a lot about because they've told us a lot; some of them I don't know anything about, because we only know through external evidence that they're using Linkerd and they don't want to talk to us. That's always part of the fun of open source. Okay, so what does Linkerd do? I think this is very similar to what every service mesh does. There are three big categories: a set of features around observability, a set of features around reliability, and a set of features around security. For Linkerd, our goal is to deliver those features to you in a way that minimizes the operational pain associated with them. We believe the service mesh doesn't have to be complicated; in fact, it can be pretty simple to operate. It's not a trivial piece of technology, but the operational component can be simple.
And we do a lot of stuff in our design to reduce that operational overhead. That's the primary driver, and I think also part of what makes LinkerD a little unique in the service mesh space. I won't go too much into this, but the value of the service mesh is not really the features that it brings; it's in the fact that it delivers those features at the platform level. These are features that historically we've had to build into the application, even though they are effectively platform features. So the real audience of LinkerD is the SREs and platform owners, the folks who are operating Kubernetes. The developers are much less exposed, and ideally they're often not exposed at all to the service mesh. So what LinkerD is really solving for you is not really giving you retries; it's giving you retries in a way where you get them at the platform level and you don't have to beg the developers to implement them. Same thing with mTLS.

Okay, let's talk a little bit about our design philosophy. We're really trying to follow this idea of minimalism and do just the bare minimum to give you a secure and operationally simple service mesh. So out of the box, the goal is that LinkerD should just work. If you have a functioning Kubernetes application and you add LinkerD to it, the application should continue functioning. We can do that in almost every case, and it took quite a bit of engineering to get there, but that's a really strong belief for us. Ultralight, of course: bare minimum resource cost, and latency as well. The service mesh works by adding lots of user-space proxies, so you're going to pay a cost; we try to minimize that cost as much as possible. Make it simple; I'll talk a little bit about how we do that. And then security first.
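That "add LinkerD to a functioning Kubernetes application" flow is, in practice, typically a one-line change. As a hedged sketch (the namespace, names, and image here are made up for illustration; the `linkerd.io/inject` annotation is LinkerD's standard injection switch), annotating a workload's pod template is enough to have the proxy added on the next rollout:

```yaml
# Hypothetical Deployment snippet. The linkerd.io/inject annotation asks the
# LinkerD proxy injector to add the sidecar proxy to these pods on rollout;
# the application itself is untouched.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                        # illustrative name
spec:
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
      annotations:
        linkerd.io/inject: enabled   # opt this workload into the mesh
    spec:
      containers:
        - name: myapp
          image: myapp:latest        # illustrative image
```

The same annotation can also be set on a namespace to mesh everything in it, which is one way the "it should just work" philosophy shows up in practice.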
So whenever possible, we want security to be the default setting, not a thing that you have to configure or enable later. The control plane is written in Go; it sits around 200 megs of RSS. It can optionally collect metrics data, in which case it can use a lot of memory, depending on how much data you're collecting. The data plane is these little Rust-based proxies. We call them micro proxies because they actually are very different from something like Envoy or Nginx. I've written a lot over the years about LinkerD; there's a historical article you can read on InfoQ. We actually started out with a JVM-based LinkerD written in Scala, and went through a pretty big rewrite starting in 2018 to get to this Go and Rust combo.

Okay, I've got maybe two more slides and then I'll be done with the whirlwind tour. Like most service meshes, there's a set of control plane components that sit off to the side, and then the magic is in these little micro proxies that we inject inside the pods. We do the transparent wiring so that all TCP communication goes through those proxies, which means that whenever service A talks to service B, or instance A talks to instance B, it's going through not one but two proxies. So those proxies have to be very, very fast, and since you're going to have a lot of them, they need to be very, very small. So LinkerD uses this micro proxy, which is called simply LinkerD2-proxy. It's not really a general-purpose thing; it's very tightly coupled to LinkerD itself. It's built on top of this amazing Rust networking library ecosystem, which is general purpose. And I believe this is probably one of the most technologically advanced projects in the entire CNCF landscape, because we are sitting right on top of this very fast-moving and very exciting Rust asynchronous networking ecosystem.
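To make the "proxy in every pod" picture concrete, here is a hedged sketch of roughly what a pod looks like after injection. The pod name, app name, and image tags are illustrative, not real injector output; the container names reflect what LinkerD conventionally adds (an init container for the iptables wiring, plus the proxy sidecar):

```yaml
# Rough shape of a pod after LinkerD injection (illustrative, not exact output):
apiVersion: v1
kind: Pod
metadata:
  name: myapp-abc123                          # illustrative
spec:
  initContainers:
    - name: linkerd-init                      # sets up iptables rules so all
      image: cr.l5d.io/linkerd/proxy-init:v1  # TCP traffic transits the proxy
  containers:
    - name: myapp                             # the application container, unchanged
      image: myapp:latest
    - name: linkerd-proxy                     # the Rust micro proxy sidecar
      image: cr.l5d.io/linkerd/proxy:stable
```

Since every meshed pod carries one of these sidecars, a call between two meshed services transits the caller's proxy and then the callee's proxy, which is why the per-proxy size and latency budget matters so much.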
The choice of Rust lets us avoid an entire class of memory vulnerabilities, and we won't get too much into that, but that's really nice, since what's going through this data plane is things like customers' health information and PII and so on. We can compile down to native code. We do regular third-party security audits, which we pass, thankfully. And like I said, it's a very modern networking stack. LinkerD2-proxy is part of the LinkerD project, so it's open source, it's audited, it's up on GitHub, but it's a pretty different approach from a general-purpose proxy. The goal for us is that you should not have to become an operational expert in LinkerD2-proxy. You should become an operational expert in LinkerD, but the proxy should, as much as possible, be an implementation detail. There's lots more to say about that, and we have a security philosophy as well, but I'm going to stop here. I see there are a couple of questions and we're coming up on time, so I'll stop the presentation here and start working through some of these questions, which I will access by pressing a few buttons.

Okay, ah, pluggable ingress. I might have misread that. Yeah, so right now we don't have our own ingress. We work with every other ingress that we can possibly work with. That's part of our philosophy of keeping this as minimalist as possible. There are many, many good ingress controllers out there that have a huge feature set, none of which I want to implement and none of which is service mesh specific. So yeah, we work with those.

That makes sense. It totally does. It's a great philosophy; I don't know that you need my commentary on that, but I'll give it anyway. It's a winning philosophy. The vast majority of projects and individuals, I think, tend to make the other choice, so this is refreshing.

I think we naturally want to accumulate more things, right?
We naturally want to make the project do more and more and solve more problems, and it takes a certain amount of discipline to go in the opposite direction.

Yeah. It's a sign of strength, actually, I think. And I misinterpreted it when I initially saw it, thinking, oh, perhaps LinkerD's micro proxy is pluggable itself, or has extension points. And that's not currently the case.

No, no. In fact, we ignored that entirely. We had some ideas about making it proxy-agnostic and stuff. Sorry, where's my proxy slide here? Making it agnostic just didn't solve a problem. It didn't allow us to solve a problem that we really wanted to solve.

Totally. Actually, I think I misphrased that as well. Rather, LinkerD's micro proxy doesn't have pluggable filters.

Correct. So if you're talking about things like Wasm, that is on the tentative roadmap. I think there is value to it, but we have not tackled it yet. We did something like that in the 1.x days when we were on the JVM; we had this idea of plugins. It was cool because people could extend LinkerD to do all sorts of application-specific stuff, but operationally it became very complicated very rapidly. So there's some friction to that idea currently on the team, but it's not necessarily forever.

Sure. Well, again, some general comments. In accordance with the graduation criteria, and even if you're not familiar with the specifics of that criteria, in every area, LinkerD V2 hits those out of the park. With one exception. One big exception.

One major exception, which is that we only have maintainers today from one organization.

Yeah, that was where I was going to go with it. Yeah.
So this is going to be interesting. Yeah, that was what I was going to mention. In all the other ways, the fact that it is a V2, the fact that you've taken learnings from your V1, is almost a sign of strength. If you look at some of the other projects, not just in this space but in the cloud native space broadly, it's a significant sign of maturity, of depth of knowledge, to have done a rearchitecture. To have taken all those learnings: what a significant benefit that is to the users of the project. And the fact that the project itself and the principles by which it's being designed reflect that strength you were just talking about: acknowledging that there are other ingresses to use and not reinventing that particular wheel, identifying general-purpose things versus purpose-built things, and hanging on to a few design principles (ultra fast, ultra light, Kubernetes native, Kubernetes first) throughout V2, from Conduit to now. For anyone who pays attention and reviews those different projects, it becomes quite clear how those principles manifest in terms of the user experience, and in terms of a lot of things like time to value. Some amount of boringness is good, and the simplicity facilitates some boringness. If you think boring has the wrong, negative connotation, then stable, or unsurprising, is the more appropriate word.

On that, this is my last question; I know we're five minutes over. Some reflections for you, as you balance things. Like we were just talking about Wasm, and it being hot and interesting, and yet,
from prior experience, there's friction, and it also expands the scope of the work to be done. So, your thoughts on how to balance that, as LinkerD faces graduation? Graduation has stable as a connotation associated with it, stable associated with the project. How do you balance between the innovative things and being boring?

Yeah, we have a really strong opinion here, and it's something that took a while for us to develop, but we are extremely user-focused. We spend as much time with our users as we can. We look at the things that are causing them pain. And what you realize is that 95-plus percent of the time, it's not "I don't have a data plane plugin with Wasm." It's "I'm running out of space in Prometheus because it takes all these metrics that I have no idea how to control." That's the stuff that actually causes pain for people. Being hyper-focused on that, and trying to map it to a concrete thing you can do in as short a time as possible, is the one skill that we've relied on to guide our feature choices. We're still working on it, but we're really trying hard. Just being hyper-focused on the actual pain that people are having. And I think if you do that, then a lot of the noise starts falling away and you're like, oh, here's the thing. It's not super cool, but this is actually what's causing human beings problems.

Questions or comments from anyone else for William? My apologies for the overshoot on time. Great presentations today. For my part, I really appreciate people fielding the questions. All right, very good. We'll see you in a couple of weeks. That's a wrap.

All right, thanks for having me. Thank you very much. Amazing presentation. Likewise.