Okay, so my name is Eliza Weissman. I'm a Linkerd maintainer, and I work for Buoyant, the creators of Linkerd. If you're a fan of Linkerd, you've probably seen me on the Linkerd Slack, on GitHub, or on my Twitter account, where I post pictures of my cats and bad computer jokes. I'm going to be talking about writing service mesh controllers in Rust.

This is my Rust evangelism slide. I'm not going to talk too much about it; at this point I think a lot of folks have at least heard of the language. Its core value proposition is that Rust is a systems programming language that ensures memory safety, so no dangling pointers or use-after-frees, but it ensures this at compile time, through compile-time checking, rather than at runtime with garbage collection.

But this is ServiceMeshCon, not a Rust conference, so we're going to talk about a service mesh, and the service mesh we're talking about is Linkerd. It's a CNCF graduated project: an ultralight, ultra-fast, security-first service mesh for Kubernetes. In particular, the Linkerd team's goals are that the mesh should be secure, resource efficient, and low latency, and we care about those things in that order: security is the most important, and then performance and efficiency. And down here we have Linky the Lobster, the Linkerd mascot.

So, Linkerd loves Rust. The Linkerd 2 data plane, the proxies, has been written in Rust since the release of Linkerd 2 in 2018. The control plane, on the other hand, is mostly written in Go. This year's Linkerd 2.11 release, though, introduced a new control plane component, the policy controller, which is written in Rust, and that's what we're going to take a look at today. Down here, Ferris the Crab, the Rust mascot, and Linky the Lobster: they're both crustaceans, so, you know, a match made in heaven.

Since the data plane handles all the network traffic in the mesh, you really need those proxies to be secure, efficient, and low latency. Low latency means we really want to avoid garbage-collected languages like Go or Java, because a garbage collector pause impacts tail latency significantly; you don't get the predictable performance of a language that isn't garbage collected, like Rust or C++. But we also really don't want to deal with the memory- and safety-related security issues you get from C or C++: heap corruption, use-after-frees, and so on. So Rust was really the only language we felt was an acceptable choice for the data plane in Linkerd.

That isn't really true for the control plane. We need it to be secure and reliable, but although we care about performance, the requirements aren't as strict as for the data plane. The occasional GC pause isn't that bad, so languages with managed runtimes like Go are totally suitable, and most of the Linkerd control plane is written in Go.

So why did we write the new policy controller in Rust? Well, we like it. The Linkerd team has invested pretty heavily in Rust, we're big fans of the language, we wanted to use the same libraries we use in the proxy, and Rust has some benefits in the control plane too. We like a lot of the language features that make it an expressive, productive language, that help us write correct code, and that help us make invalid states unrepresentable.
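To make that last point concrete, here's a tiny hypothetical sketch (not code from the Linkerd repo) of what "making invalid states unrepresentable" looks like in practice: a protocol hint modeled as an enum rather than a string, so there's no invalid value to validate away and no catch-all branch to forget.

```rust
// Hypothetical example: a proxy protocol as an enum rather than a string.
// Invalid values simply cannot be constructed, and the compiler forces
// every match to handle exactly these cases.
enum ProxyProtocol {
    Opaque,
    Http1,
    Http2,
    Grpc,
}

fn describe(p: &ProxyProtocol) -> &'static str {
    // No catch-all arm needed: adding a new variant later makes this a
    // compile error until it's handled.
    match p {
        ProxyProtocol::Opaque => "opaque TCP",
        ProxyProtocol::Http1 => "HTTP/1.1",
        ProxyProtocol::Http2 => "HTTP/2",
        ProxyProtocol::Grpc => "gRPC",
    }
}

fn main() {
    for p in [
        ProxyProtocol::Opaque,
        ProxyProtocol::Http1,
        ProxyProtocol::Http2,
        ProxyProtocol::Grpc,
    ] {
        println!("{}", describe(&p));
    }
}
```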
So what's the Linkerd control plane actually doing? Well, to put it broadly, it's a set of microservices that run in the Kubernetes cluster. They watch resources from the Kube API, like services and pods, and they serve a gRPC API that's consumed by the data plane proxies. The proxies use those APIs to discover where to route traffic and what policies to apply when routing that traffic. In particular, the policy controller we're talking about today is responsible for telling proxies how to enforce server-side policy. It watches a set of CRDs that describe policies, like what clients are authorized and how to authenticate those clients, and then it tells the proxies, which make streaming gRPC requests to the controller, how to apply those policies.

So, if we want to write controllers in Rust, we need to talk to the Kube API, which means we need a set of bindings. Fortunately, that exists now. This is crates.io, the Rust package registry, and this is a crate called kube-rs; it's kind of like a Rust version of client-go. It also has some other things that we might get to see some of shortly.

On top of kube-rs, the Linkerd team wrote a little library called kubert. It's like Q*bert, but with a K; it stands for Kube runtime, kube-rt. It's an opinionated runtime for writing Kubernetes controllers in Rust. Here are some of the APIs it provides: utilities for things like running an admin server, doing graceful shutdown, logging, and so on. We're not going to look at those in too much detail. What we are interested in is a module called index, which provides utilities for indexing Kubernetes resources.

So this is the trait IndexNamespacedResource from kubert's index module. A trait is like a Rust interface that we can implement on our own types. This one is generic over some type T, which represents the resource we want to watch, and it has three methods. Apply is called when the Kubernetes API creates or updates a resource. Delete is called when a resource is deleted. And reset is called when we have a reset event and a large number of resources change at the same time. So we can take some Rust type we've defined, implement this trait for it, and give it to the kubert runtime, and kubert will just call these methods as resources are created and deleted. Those resources could be core resources or CRDs, and in our case, we care about both.
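Based on that description, the trait has roughly this shape. This is a sketch reconstructed from the talk, not a verbatim copy of kubert's definition, so the exact signatures in the crate may differ.

```rust
use std::collections::{HashMap, HashSet};

// A sketch of kubert's IndexNamespacedResource trait as described in the
// talk; the real crate's signatures may differ slightly.
pub trait IndexNamespacedResource<T> {
    /// Called when the Kubernetes API creates or updates a resource.
    fn apply(&mut self, resource: T);

    /// Called when a resource is deleted.
    fn delete(&mut self, namespace: String, name: String);

    /// Called on a reset event, when a large number of resources
    /// (applied and deleted) change at the same time.
    fn reset(&mut self, resources: Vec<T>, deleted: HashMap<String, HashSet<String>>);
}
```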
So how do we actually create CRD bindings? Well, here's an example of one of the policy CRDs, the Server resource, which is defined by Linkerd. It defines a server that receives traffic in the mesh and can be assigned server-side policies. Here we have a Server that lives in the linkerd-viz namespace, named admin; this actually came from a real Linkerd install. It has a spec with a pod selector that matches a set of labels on pods, saying those pods are part of that Server; it defines a named target port on those pods; and it can include a protocol hint.

Here's how we actually define that resource in our Rust code. We have a bunch of these attributes, with the hash sign, on a Rust struct type. Those are derive attributes: they say we want to implement these traits for this type, and there's some code generation we're calling into, essentially, that automatically provides those trait implementations. If we want this to be a binding for a CRD, we say we need to be able to deserialize and serialize it, and we need to be able to generate a JSON schema. There's a library called serde that lets us put those attributes on the type, and all of the serialization and deserialization code is automatically generated. Then we have an attribute saying we want to rename the fields so they're camelCased instead of written in Rust's snake_case with underscores; that's rename_all. And then we have this derive of kube's CustomResource, which says this is a CRD binding, and this attribute here gives the group and the version, the name of the resource, and says it's namespaced. That generates the entire CRD binding for us; we don't actually have to write any of the code for parsing the JSON and turning it into this in-memory Rust representation.

So finally, here's an example of how we would wire some of this up. This is a main function in an example Rust project. It isn't really from our policy controller; I've stripped it down a lot so there isn't so much code on the slide that we can't read it. It's an async main function using Tokio, Rust's leading asynchronous runtime; that's not super important. We parse command line arguments, and kubert gives us some utilities for handling CLI args, like configuring the Kube client: here's the kube context you want to use, and so on. So we have this Args type, we parse it, we get the client configuration, and we configure the runtime with this builder. Then we say, okay, we have this index type we've defined, so we make an instance of it, and we have the runtime start a watch on the Server resource; that's the type parameter here. We could pass in list parameters, like only list these namespaces, or filter on this label selector, but we're not doing that because we want everything. Then we tell kubert to actually index that namespaced resource using the IndexNamespacedResource trait we just looked at, which we're assuming is implemented for our index type, using this watch on Servers. And we could do this for the same index type with all the different resources we care about, and establish relationships between different resources in the index this way. The index is just whatever data structure we want it to be; it might be a nested set of hash maps or something, with different relationships between the different resources we've indexed. Then we can do other work here: kubert also has a utility for running a validating admission webhook, which we might start here, and we might start the gRPC servers we use to serve the proxy API, and so on. And at the end, we just tell the runtime to run, and it will run all of the watches we've started and call into the different methods on the index. And that's really the whole thing.
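To reconstruct the first of those slides: here's roughly what that Server binding looks like using kube-rs's CustomResource derive. It's a simplified sketch, with field types flattened for readability; the real policy controller's types are richer (the pod selector supports match expressions, for example, not just label maps).

```rust
use std::collections::BTreeMap;

use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

// A simplified sketch of the Server CRD binding. The CustomResource derive
// generates a `Server` type wrapping this spec, along with all of the
// (de)serialization code, so we never hand-write any JSON parsing.
#[derive(CustomResource, Deserialize, Serialize, Clone, Debug, JsonSchema)]
#[kube(
    group = "policy.linkerd.io",
    version = "v1beta1",
    kind = "Server",
    namespaced
)]
#[serde(rename_all = "camelCase")]
pub struct ServerSpec {
    /// Pods whose labels match this selector are part of the Server.
    pub pod_selector: BTreeMap<String, String>,
    /// The (possibly named) target port on those pods.
    pub port: String,
    /// An optional protocol hint, e.g. "HTTP/1".
    pub proxy_protocol: Option<String>,
}
```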
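And here's a stripped-down sketch of the wiring, in the spirit of the final slide. A caveat: the kubert method names below are from memory and may not match the crate's current API exactly, and `Index` and `Server` are the hypothetical index type and the CRD binding from the sketches above, so treat this as kubert-shaped pseudocode rather than a verbatim example.

```rust
use std::sync::Arc;

use clap::Parser;
use kube::api::ListParams;
use tokio::sync::RwLock;

// CLI arguments: kubert provides flattenable structs for configuring the
// Kube client, the admin server, and so on.
#[derive(Parser)]
struct Args {
    #[clap(flatten)]
    client: kubert::ClientArgs,
    #[clap(flatten)]
    admin: kubert::AdminArgs,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let Args { client, admin } = Args::parse();

    // Build the opinionated runtime from the parsed configuration.
    let mut runtime = kubert::Runtime::builder()
        .with_admin(admin)
        .with_client(client)
        .build()
        .await?;

    // The index is whatever data structure we like, shared behind a lock;
    // we assume Index implements IndexNamespacedResource<Server>.
    let index = Arc::new(RwLock::new(Index::default()));

    // Start a watch on Server resources. Default list params: we want everything.
    let servers = runtime.watch_all::<Server>(ListParams::default());

    // Have kubert drive the index from that watch, calling apply/delete/reset.
    tokio::spawn(kubert::index::namespaced(index.clone(), servers));

    // ...start the validating admission webhook, the gRPC servers, etc. here...

    // Run until shutdown, driving all of the watches we've started.
    runtime.run().await?;
    Ok(())
}
```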
So this code is actually live right now in Linkerd: if you're running Linkerd, you're running code like this, and you can check out the policy controller on GitHub. We're also running Service Mesh Academy, which you can sign up for if you're interested in Linkerd and learning more; those are free, hands-on workshops for production users. And we have a managed Linkerd offering that you can check out at the Buoyant booth in the vendor hall.

I'll be hanging out at the Linkerd booth in the project pavilion for most of the rest of KubeCon if you want to chat about this stuff or ask me questions about Rust or Linkerd. That's about it. Thanks for coming.

[Host] Eliza did a pretty good job on timing, so we have some time for one question. Any questions? Yes.

[Audience] Since most of the control plane is currently written in Go, how did you handle the interoperability between the Go and Rust code? Or are they running as separate binaries?

[Eliza] Yeah, that's a great question. They are running as separate binaries. Essentially, the control plane in Linkerd is a set of microservices: we have a deployment that deploys pods with all these different control plane services. And in general, the way the Linkerd control plane services communicate with each other is just through Kubernetes: they create, update, and delete resources, add labels to resources, and so on, and that allows the different controllers that do different things to share state. They're just Kubernetes controllers. In particular, some of them also serve APIs that the data plane consumes, in addition to interacting through Kube resources.

[Host] Thank you so much. So the Kubernetes APIs basically solve the problem of intercommunication. Let's have one more round of applause for Eliza.