Thank you. Hi, folks. Thank you for coming. It's really wonderful to be in person again, which means I get to start this by asking you questions that you have to answer, which I don't get to do when we do the online ones. So quickly, show of hands. How many of you are vaccinated for COVID? OK, easy one. Poof. How many of you use Kubernetes in your day-to-day job? Almost all of you. That's more than the last time we did this. How many of you use Linkerd in your day-to-day job? Good, all of you. How many of you have tried Linkerd at least once? OK, great. One more. How many of you have programmed in Rust? That's like way more than last time, honestly. So we're making progress. Good. So my name is Oliver. I lead the Linkerd project. I created the Linkerd project back in 2015 or so. And I'm the CTO of a company called Buoyant. We have a booth in the vendor hall where you can get cool Linkerd hats, et cetera. And I want to give you a pretty short talk today. I think we've got 30 minutes or so. So I'll go through a quick run-through of what Linkerd is. I want to talk about some of the features we've been working on that were released in 2.11, just two weeks ago, and some of the things that are coming up. We're going to have a special guest appearance if you stick around to the end. But really, I want to make sure we have time for questions. And so my top priority here is to make sure that if you've come to this conference with questions about Linkerd, you have a chance to ask me. Sound good? All right. So what is Linkerd? For those of you who don't know, Linkerd is an ultra-fast, ultra-light, ultra-simple service mesh for Kubernetes. I didn't write that, but it's true. We donated the project to the CNCF in early 2016. So we're old-school cloud native: a big, active community, always getting bigger and more active as the project goes on. We started with Linkerd 1.x, which ran on the JVM. We're not talking about that today.
We're talking about Linkerd 2, which is really where the main line of work has been happening since about 2017 or so. We have a whole bunch of production experience, lots of cool logos, some of which I can't even show you because they're so secret, but Linkerd is trusted and used. So what does it do? Why do these people trust it? Why did they deploy it? Well, it does a few simple things. First and foremost, it gives you observability. For those of you who are unfamiliar with the service mesh architecture, we'll talk about that in a second, but because there are sidecar proxies running throughout your cluster, we can monitor HTTP metrics, gRPC metrics, and TCP metrics out of the box, uniformly. So there's no need to instrument your application, especially if you have multiple languages or a heterogeneous environment. You don't have to try to cobble together visibility systems on top of these things, and you don't have to modify your code to do it. You just add a proxy and we get lots of good data. Two, reliability. One of the top reasons I hear that people adopt Linkerd is for gRPC load balancing, where you don't want to have to modify your gRPC clients to deal with request-level load balancing. This of course works with HTTP as well, and we have connection-level load balancing for TCP, for non-HTTP communications. There are also configurable retries, timeouts, traffic shifting, a growing set of features you can add on there. Third, we have out-of-the-box security. This is primarily done through transparent mTLS. I'll talk about that a little bit more soon, but this is a really core part of the project. We hear from lots of folks adopting Linkerd to make sure they have mTLS in their pod-to-pod communication. And one of our big focuses is on simplicity.
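To make the retries-and-timeouts piece concrete, here is a minimal sketch of a ServiceProfile, the primitive Linkerd uses for per-route configuration. The service name, namespace, and route are hypothetical placeholders; the field names follow the upstream ServiceProfile docs, but check them against your release.

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # ServiceProfiles are named after the FQDN of the service they describe.
  # "my-svc" and "my-ns" are placeholders.
  name: my-svc.my-ns.svc.cluster.local
  namespace: my-ns
spec:
  routes:
    - name: GET /items
      condition:
        method: GET
        pathRegex: /items
      isRetryable: true   # failed requests on this route may be retried
      timeout: 300ms      # per-request timeout for this route
  retryBudget:
    retryRatio: 0.2            # retries may add at most 20% extra load
    minRetriesPerSecond: 10
    ttl: 10s
```

Profiles like this can also be bootstrapped from protobuf or OpenAPI definitions with the `linkerd profile` command rather than written by hand.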
And so when we think about our principles for Linkerd: most people in this room and in this building and watching are trying to adopt Kubernetes, trying to adopt containers, trying to adopt modern ways of building software, trying to go to a cloud-native architecture. And there are so many new things in that space that you have to pick up, learn, become an expert at, and run operationally. And the goal for Linkerd is that we don't want to be another thing you have to become an expert at. It's a failure in my mind if any company ends up with a Linkerd team that has to maintain Linkerd the way they have to maintain Kubernetes. So it has to be something you can add to your cluster and basically not worry about, especially when you're getting started. It can't be a bunch of new primitives to go learn, unlike some of the other service meshes. So it has to just work with Kubernetes out of the box. You shouldn't have to modify your application to do it; just add Linkerd and things get better. With that, we don't want to increase costs too much. So we can't double your CPU and memory usage by adding Linkerd to your cluster. It has to be a really thin add-on. And again, simple. We don't want you to have to learn a bunch of new things. So we try to really build into the Kubernetes ecosystem and into the Kubernetes primitives as much as possible. We don't want you to have to install 50 CRDs to get started, maybe five or so. It can't be a bunch of new things to add. And again, security out of the box. This has to be on by default. It can't be something you have to buy Buoyant's professional services to secure your cluster with. Linkerd has to do this in its core open source offering. To talk a little bit about the architecture: the control plane is primarily implemented in Go, though we have some Rust things showing up there. The core control plane is fairly small. Pretty simple.
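As a concrete illustration of the "just add Linkerd" idea, meshing workloads comes down to a single annotation that tells Linkerd's injector to add the sidecar proxy. The namespace name here is a placeholder; the annotation itself is the standard one from the Linkerd docs.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                   # placeholder namespace
  annotations:
    # Tells Linkerd's proxy injector to add the sidecar to every
    # pod created in this namespace; no application changes needed.
    linkerd.io/inject: enabled
```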
The data plane is a proxy written in Rust, and there's some background reading here; you can click on the links and find it. So for those of you who have not been paying attention to all of the service mesh hype and buzz: the service mesh is really an architecture for having sidecar proxies, or a data plane in general, that is instrumented by a control plane. So instead of having all of your routing infrastructure logic built into your application, you're extracting it out into a separate component that can be configured separately from your application, dynamically at runtime, to modify the traffic patterns. We do this by adding a sidecar process to your pods. This is a wonderful thing about Kubernetes that prior container orchestrators like Mesos didn't get right: a pod isn't a single container, it's several, so we can add these sidecar proxy containers that facilitate communication. We also have an extension model that makes it easy for bigger parts of the control plane to become optional. This was introduced, I think, in 2.10, which was released earlier this year. So we have a Viz extension, for instance, that bundles Prometheus, Grafana, the Linkerd dashboard, and a system we call Tap, which lets you connect to proxies while they're running and query your requests as they're happening. You can also bring your own Prometheus. These things used to be part of Linkerd's core control plane, so you'd just install it and get up and running. But really, you don't need it to get started. This is all gravy once you have Linkerd running. As I said before, mutual TLS is a big part of what Linkerd does, and we, in fact, build lots of other functionality on top of this identity. The idea is that if you have two pods or two containers or two processes communicating with each other, the server needs to know who the client is, what its identity is, and be able to verify that, and not just base it on some IP. We want this to be kind of a zero-trust situation.
And the client needs to be able to validate that it's actually talking to the server it thinks it's talking to. So we do this by bootstrapping off of service accounts. Every pod in a Kubernetes cluster generally has service account tokens that are provided as the pod starts up. We use those to authenticate to the control plane. We issue private keys locally, so those private keys never leave the container's memory, or the pod's memory rather. And we rotate those certificates, I think right now, about once a day. You can crank that down to be a little more rigorous if you want to. And that all works out of the box. There's no configuration; you just add Linkerd and all this mTLS stuff just works. This works for HTTP, gRPC, and TCP, arbitrary TCP communications. And we do this with a bunch of good defaults. So you no longer have to hunt down the people in your org who are still using TLS 1.0, or even 1.2. We adopt TLS 1.3, and we use lightweight ECDSA keys. These are faster and generally more performant than the legacy RSA keys. We also have no dependencies on OpenSSL or BoringSSL or any unsafe C infrastructure in the TLS stack. So on the Go side, in the control plane, we're using Go's crypto libraries. In the proxy, we're using rustls, R-U-S-T-L-S. It's a project that's built on ring, and it implements a subset of the BoringSSL functionality primarily in Rust, which is safe. And again, the main point here: you get this for free, effectively. And we use this identity in another extension called Multicluster, Linkerd Multicluster. Multicluster is a scheme where you can connect clusters without having any sort of networking requirements. As long as one proxy in a client cluster can reach an IP address that connects into the foreign cluster, or target cluster, we can join these clusters and route traffic between them.
We do this through a concept called service mirroring, which is actually, I think, fairly similar to what we see coming up in the Kubernetes multicluster services work. We have a process, an operator, in the client cluster that selects services in a target cluster by talking to its Kubernetes API and creating shadow services in the source cluster. And those point to a gateway. So for all of those services created in the client cluster, instead of selecting pods, you're not talking to pods directly; you're talking to a gateway address in the target cluster, where we terminate mTLS and can apply policy, and that gateway does load balancing in the cluster, which is what you want: you want load balancing decisions to be made close to the application. This also means that you can use things like traffic splits to make this transparent. So you can just talk to a logical service, and whether that's in the local cluster or another cluster can be an operational detail. And it's really, again, building into the Kubernetes primitives so that you don't have to go do a bunch of Linkerd-specific things. A lot of these patterns use SMI today, which is another standard, ish-standard, in the space. And it's all built using the proxy. There are no real special requirements here; it's pretty lightweight and generally easy to operate. I think you can start this in k3d or kind without a whole lot of work. It's pretty easy to get running on your laptop. The Linkerd sidecar proxy is where I spend too much of my time, most of my time. I love it, but it's a labor of love. And unlike the other service meshes in the space, we do not use Envoy. There are a whole bunch of reasons behind this. You're welcome to go read the blog posts on why, and I think some of the reasons will become clear as we go.
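Circling back to service mirroring for a second: in practice, a service in the target cluster is typically opted in to mirroring with a label, and the operator in the source cluster creates the shadow service pointing at the gateway. This is a sketch; the service name, namespace, and port are placeholders, and the export label follows my reading of the multicluster docs, so verify it against your version.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc               # placeholder service in the target cluster
  namespace: my-ns
  labels:
    # Opts this service in to mirroring; the operator in the source
    # cluster creates a shadow service that routes through the
    # target cluster's gateway.
    mirror.linkerd.io/exported: "true"
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
```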
But the idea here is that, to go back to that philosophy of having something that just works, that's minimal, that fits the purpose, that's safe and secure, we want to have something that, one, is written in Rust; that was a big bonus for us. We don't want lots of C++ being put into every container in your ecosystem. Two, it's really purpose-built. It's not a big configuration machine. Like, Envoy's awesome: you can do all sorts of rich programming in your data plane, you can put plugins in there, you can do all sorts of stuff. We kind of started with that model in Linkerd 1.x, honestly, where you had a really rich plugin system and you could do lots of different things in the data path. We realized that if security is a big part of this, configuration is a bad idea. Configuration, and the complexity that comes with it, is really the enemy of having something that you can reason about and say, this is secure, this works well. On top of that, there are lots of resource benefits we get: less memory overhead, less CPU overhead. And to do this, we've had to invest a lot of time and engineering resources into the Rust networking stack. So Tokio is a project we build heavily on. There's an ecosystem there: there's Hyper, an HTTP library, and h2, which was almost entirely written by Buoyant folks. Tower is a service abstraction, really similar to Twitter's Finagle, which I have tons of experience with. And we've really built this all up in the Rust ecosystem, which is now becoming a more and more attractive target for other things to be built on. And so this was a good investment for us, because now we actually collaborate a lot with folks at AWS, to name one of a few other companies, on this part of the stack. And it's really high performance, and not a lot of complaints so far. And again, to highlight that point at the bottom: we have folks say, well, how do I configure Envoy?
Or, how do I configure the proxy in Linkerd? The proxy is not something that you should be focused on; it's not a thing that you program or configure. The goal for Linkerd is not to configure proxies. The goal for Linkerd is to facilitate communication. The proxy is just an implementation detail. And so there's a world where maybe Envoy gets rewritten in Rust and really fits our needs well. We could replace the Linkerd proxy with Envoy if we wanted to, or with HAProxy or any other proxy. It's really like the difference between containerd and Docker: they all do the same job and you can swap them out. And we have made engineering choices there to write our own proxy. This is an obligatory slide. If I don't include this, someone's gonna ask me and I'm gonna have to say all this anyway. What's the difference between Linkerd and Istio? I work on Linkerd; I can't tell you. But what folks tell us, to put it this way, is that many, many, many of our recent Linkerd adopters are folks who tried to operationalize Istio first. And then said, whoa, this boat is too big. I need something that fits my use case. I don't need half the things in here. I don't need a team to deploy this. And that's why Linkerd is better for them. But I think Istio is great if you're trying to do things that span outside of Kubernetes and join all sorts of different environments, between Consul and Kubernetes, et cetera. You can do lots of great things with Istio. I don't think it's a bad project. I just think Linkerd is better. And here's why. Big caveat here: we ran these benchmarks, but we didn't write the benchmarking harness. The folks at the company formerly known as Kinvolk wrote the benchmarking harness; they're now working at Microsoft. We ran this, I think, over the summer, when we released 2.10. And what we see here is: yellow is no service mesh, just running things raw; blue is Linkerd; and orange is Istio. And here, lower is generally better.
So on the left here, we see latency at various percentiles. On the right, we see CPU usage and memory usage. This is fairly low traffic, right? For the load tests I run when I'm doing my own testing, I'm running like thousands of concurrent requests, which is pretty heavy. But the folks at Kinvolk couldn't even get Istio into some of those configurations without giving it a lot more resources. And so, you know, I'm not saying this is all because I'm such a genius. I'm saying that we've built on really good primitives, things like the Rust ecosystem especially; that's a big part of the story. And we are standing on the shoulders of giants to do this. Okay, this slide looks a lot different from the others because I made the diagram yesterday, and it shows. So two weeks ago, we released Linkerd 2.11.0, stable 2.11. And this delivers a really highly requested feature, which is authorization policy. So if I have mTLS and my communications are secure, how do I say that I only accept connections or requests from these specific clients to this application? These authorization policies do that. And we have two new primitives: one is a Server and the other is a ServerAuthorization. So there are various default modes, but once you have a Server, it basically assumes a default-deny policy, and then you can add authorizations onto that. And this all uses hopefully familiar primitives in label selectors. So Servers select pods and refer to ports on those pods. Authorizations select Servers. It's fairly simple, but it's also powerful, in the same way that service label selectors can be really powerful. This allows me to do things like: I just require mTLS, I'm gonna do a default-deny policy in my cluster, or I can have a default policy in my cluster that says I require authenticated communication.
Anything that's not mTLS has to be explicitly covered by an authorization. I'll give you a caveat here: kubelet, which does all your health and readiness probes of your application, can't be secured. You can't even actually really know what IP address your kubelet is running on in Kubernetes. So you have to create authorizations for it, and I think there's some room for improvement there, both in the Kubernetes ecosystem, but also in operators and extensions that can be added to Linkerd to facilitate this for various environments. So here's a pretty simple, hopefully, example. For those of you who haven't seen the EmojiVoto demo, it's beautiful, but it's also like four years old. It's a demo app we use a lot; as we introduce new features, we test them through this. And it's an application that uses both HTTP and gRPC. Here we're looking at a gRPC service called the emoji service, which basically lets you query emojis. And I have a Server that selects pods that have the app=emoji label, and it refers to a port on those pods named grpc. I also have this proxy protocol hint, and this is not really tied to authorization at all, but it lets me start to talk about other properties of servers. So a Server is really similar to a Service, but it's different for reasons I'm not gonna get into unless you ask me. But the Server primitive we have now, this new CRD, is a building block that we're gonna use for more types of configuration on the inbound side. So you might imagine that in the same way I can select authorizations onto Servers, I could do that with routes, or maybe with timeouts or other things like that. So this is, again, having small building blocks that we can use in the Kubernetes way to build these things out.
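Pulling that example together, here's a sketch of what the pair of policy resources looks like for the emoji service. The API group and field names follow my reading of the 2.11 policy docs; treat the exact API versions and names as assumptions to check against your release.

```yaml
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  namespace: emojivoto
  name: emoji-grpc
spec:
  # Select the emoji pods and the port named "grpc" on them.
  podSelector:
    matchLabels:
      app: emoji-svc
  port: grpc
  # The proxy protocol hint: tells the proxy this port speaks gRPC.
  proxyProtocol: gRPC
---
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  namespace: emojivoto
  name: emoji-grpc
spec:
  server:
    name: emoji-grpc
  # Only meshed clients running as the "web" service account may connect.
  client:
    meshTLS:
      serviceAccounts:
        - name: web
```

With the Server in place, traffic to that port is denied by default; the ServerAuthorization then admits only the identified clients.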
And here the authorization allows, again, since our identity is built on service accounts, we can map service accounts and say: okay, anything from the web service account in this same namespace is authorized to talk to this service. Okay, looking ahead: we have this primitive called service profiles, which is what we use for both inbound and outbound routes, timeouts, configuration, all sorts of things. We've lived with it for a few years now, and it's not our favorite part of the system, I'll put it that way. So we're looking for a replacement there, probably much more like those primitives I just showed you, the Server and ServerAuthorization, where we can select routes onto servers and clients and things like that. These can probably also be used for egress policies: if I'm talking to services outside the cluster, how do I put policies on what can be communicated there? And this will also enable us to fill in some of the gaps for things like circuit breaking and failover and other sorts of client policies that are not currently there. We've had a long-standing PR to add access logging to the proxy. We're making changes in our logging libraries and things like that to facilitate it, but I think it'll land in the next couple of weeks. We're also speccing out support for TPROXY right now. This will allow us to be transparent on the server side. Right now, connections into your application look like they're coming from localhost, because they are; they're coming from the proxy. We want to be able to pass that through so you see the original client IPs, which will really help with ingresses especially. Today there are some workarounds for that, but it's a little annoying. In 2.11, we introduced a new extension called Linkerd SMI.
And the goal here is to create a separate extension that's responsible for installing the SMI CRDs and really all of the SMI functionality. We don't want that to be part of the core Linkerd install. As we introduce our own primitives here, we want to have adapters that can read SMI primitives and generate Linkerd primitives, or vice versa. For instance, SMI has traffic policy CRDs that don't really line up with ours. They're not port-aware, for instance, and we think port awareness is a really important part of this. For instance, I might want to allow unauthenticated kubelet connections to my health check probes, but not to my application port; I have to differentiate those by port. And so I see there being room in the project for people who care about SMI to get involved and help us build those adapters and operators. One of the complaints we hear is that for folks using Helm, CRDs are a huge pain in the butt. I agree. That's not our fault, it's just the way of the world. But one of the things we can probably do to help is to split a bunch of the cluster-level resources out into separate Helm charts, so that as you upgrade the control plane, you don't have to upgrade CRDs in the same step. That work is, I think, in PR form and will be merged soon. Looking forward a little bit more, there's this new Linux I/O system called io_uring. It has some really cool performance benefits. Support for it is still pretty new in Tokio, and it's mostly focused on file I/O and things like that. But as that gets merged further into the networking stack, we're gonna take advantage of it in the proxy, and I think that'll bring some pretty big CPU improvements especially. The folks at NetApp have a branch of the proxy that replaces rustls with a BoringSSL binding, a native TLS binding. This can be used to build FIPS 140-2 compliant systems.
So if you're in government or PCI-compliant areas, you can't use modern crypto. You have to use a really specific set of vetted crypto that is not modern, unfortunately. But we're gonna support that. And we're doing work, well, not today, we did work last week and we'll do more next week, to get the proxy to be able to be compiled in a mode that supports that. And that's a big thing for the folks at NetApp. Also, this roadmap is fungible, so if you have feedback here, let us know. If you didn't see the keynote this morning: we graduated, finally. Oh yeah. We've been in the community since 2016, so it's great to see. As a maintainer, it honestly doesn't make that much of a difference; we have a slightly bigger logo on some of the slides now. The real benefit, I think, for everyone who's bet on Linkerd and deployed it, is that they can go to their management and say: see, this is a real thing. I'm not a weirdo. You might be, but that's separate. We have a bunch of ways to get involved. If you like to do talks or meetups, we have slides and a program to help facilitate that. Developers hang out in our Discord server. We have weekly edge releases, probably not this week, but almost every week we do edge releases. So if you have staging clusters, or if you're looking for ways to give feedback before you go to prod, edge releases are a great thing to test. Monthly community meetings: you can join Zoom, or not Zoom, something like Zoom, and ask us a ton of questions. There are a bunch of talks at KubeCon; you can find those on the schedule. As I said, lots of ways to get involved. Open governance, CNCF, graduated. Awesome good stuff, right? Finally, okay, we also got a Phippy character. This is the special guest. So meet Linky the Lobster. It's a gender-neutral lobster, super friendly, and we're happy to be part of the Phippy family now. With that, I think I have a few minutes for questions.