Yeah, thank you. And thank all of you for staying this long; it's almost over, just me. My name is Christian. I do cloudy things at BWI GmbH, the IT supplier for the German Armed Forces. You can ask me offstage what we do. But today I'm here to give you an impulse: maybe consider using Rust for your next operator implementation, whenever you do implement an operator. I'll use Linkerd as the service mesh example here.

So what does Linkerd do? It's a service mesh. It's been around for eight-plus years by now, I think. It does transparent mTLS for your workloads. It has policy, so you can use it to safeguard your services amongst each other. You get observability out of it, plus reliability features: retries, gRPC load balancing, all these kinds of things.

Linkerd follows three design principles that are interesting here. The first one is "keep it simple": you don't want much cognitive overhead. That's good, but not crucial for what we want to talk about. The second is "minimize resource requirements". Since Linkerd uses the sidecar model, that's natural: you want the sidecars to be very, very efficient and have little overhead. And the third one is "it just works". You want it to be everywhere, essentially, and you also want it to just work, all the time.

That brings us to what the architecture looks like. It's a very traditional architecture for this sort of thing: you have the control plane, and you have sidecar proxies. There's this debate about sidecar proxies versus centralized ones; that's not what this is about, Linkerd just uses sidecars. So the question is: how do you implement a control plane? Traditionally, you do that with Go. Naturally, you want to use Kubernetes client-go, and that's awesome. It's a very mature language, it's a great ecosystem, lots of support. Nothing wrong with that.
For the sidecars, the Linkerd team made a bit of a bold move a while ago, early on in the Rust language's life, and decided to go with Rust. So the sidecars are all implemented in Rust: a systems programming language that is very low-level, but also brings a lot of safety features. No garbage collection pauses, no nil pointers, fewer concurrency issues, these kinds of things; at least the language helps you more to avoid them.

Why does that matter? If you look at the history of Linkerd and its control plane — and recently I checked in with the team on GitHub and took a look at what the recent issues were — there were four recent issues in the destination controller (not the policy controller, because that one is implemented in Rust) that resulted from exactly these runtime problems that are hard to catch: concurrency, race conditions, uninitialized variables, these kinds of things.

So when implementing the policy controller, the Linkerd team thought about what could be done about this. Would it be possible to implement it in Rust? Is that a good idea at all? Because we could avoid these things, and with a policy controller, or basically any control plane, you really want to keep it up. That's not so much of an issue in data centers or closed environments where you can just restart things. But in edge environments or something like that, you might want it to just keep running if possible.

Two examples to make this a bit more graspable. I hope this is readable; I think it's a bit small, I'm sorry. The first is just trivial Go code that fails immediately, but it compiles fine: the program is accepted without any complaint, even though it actually produces an error.
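A minimal sketch of the kind of thing I mean here (not the actual slide code, just an illustrative example): Go lets you discard an error return entirely, and the compiler accepts the program without complaint.

```go
package main

import (
	"fmt"
	"os"
)

// openSilently demonstrates Go's willingness to let you drop an error.
func openSilently(path string) *os.File {
	f, _ := os.Open(path) // error silently discarded; the compiler is fine with this
	return f              // nil if Open failed
}

func main() {
	f := openSilently("/no/such/file")
	// The mistake only surfaces at runtime: f is nil here, and any
	// method call on it would panic.
	fmt.Println(f == nil) // prints "true"
}
```

The compiler has no objection, so in a large code base this kind of dropped error can sit unnoticed until it panics in production.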
In Rust, you can write the same thing, but you get a warning. You can ignore the warning, but then that's on you: Rust has told you this code will fail. And of course, this also works in more complex scenarios.

Secondly, we have an example with an uninitialized variable. It's just not initialized, and we try to access it. This compiles fine in Go, but it creates a runtime error, and that's not good; in complex code bases it can be hard to catch. In Rust, the same example produces a compiler error, so it won't compile at all, which prevents these situations from ever reaching runtime. There are more examples; there's also a nice talk by Oliver Gould, the CTO of Buoyant and the creator of Linkerd, from about two years ago, that goes through more of these details. But given the time here, these are the two I have, just to give you an idea of what you could save.

Now, the question is: all the plumbing that is available in Go — the operator frameworks, all the nice libraries — how do you get that in Rust? Well, there is kube-rs, which is essentially the Kubernetes client-go for Rust. So that's covered. Then Oliver implemented Kubert — or did a lot of work on Kubert — which is a Kubernetes operator framework for Rust. And recently there's also a Prometheus crate (crate being the library format in Rust) that lets you add metrics in a nice way to your Rust operator. So there is now really a good ecosystem, and there have been many implementations, I think around 25 operators in Rust by now. If you're in an environment where that is interesting to you, or you just want to try it, these three projects are highly recommended. Also at this conference, if you want a more in-depth deep dive on the topic, there are two talks, one by Matej and one by Flynn. Flynn, I'm sorry, I messed up your QR code, so you'll have to look that up for yourselves.
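The Rust counterparts of those two slides, again as a hedged sketch rather than the actual slide code: dropping a `Result` draws a compiler warning, and using an uninitialized variable is a hard compile error (E0381), so it never reaches runtime at all.

```rust
use std::fs::File;

fn main() {
    // warning: unused `Result` that must be used
    // Rust flags the dropped error at compile time, so ignoring it
    // becomes a visible, deliberate choice instead of a silent one.
    File::open("/no/such/file");

    // The uninitialized-variable case doesn't even compile:
    //     let x: i32;
    //     println!("{}", x); // error[E0381]: `x` isn't initialized
    println!("still running");
}
```

The first case is "only" a warning (which you can promote to an error with `#[deny(unused_must_use)]`); the second is rejected outright by the compiler.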
Flynn's talk, on whether Rust is the future of cloud native, even goes a bit further into whether Rust might be something to use in other projects as well. And with that, I'd like to thank you and wish you a very great KubeCon and a good evening. Thanks.