We're going to start by talking about how to extend Kubernetes with WebAssembly modules. And with that, I have my talk with Flavio. Flavio is not here in person, so I'm going to play his video, and then I'm going to continue the talk. So let me just play it.

Hi, everybody. I'm Flavio Castelli from SUSE Rancher. I'm here today virtually, but you will find my dear colleague Rafael on stage. We are here today to talk about how we could use WebAssembly to extend some pieces of Kubernetes. Inside of this room, there are many of you who are already playing and experimenting with WebAssembly, building cool things. Many of you are already using WebAssembly to build function-as-a-service platforms, which is a fantastic use for it. Others might have used WebAssembly as a way to implement plugin systems: you take an existing program and then you grow it by using plugins to add new features, new capabilities, but this time the plugins are delivered as WebAssembly modules. This is exactly what we're going to do today. And the program we will try to extend with WebAssembly is the Kubernetes control plane.

Kubernetes, if you think about it, already has some points of extension. You can bring your own scheduler. You can bring your own dynamic admission controllers. And this last point, dynamic admission controllers, is what we are going to be talking about today. Kubernetes has some admission controllers that are built into its own binaries; think about Pod Security. But if you want to implement new rules, then you have to resort to dynamic admission controllers. They work in this way: you have external webhook servers that are contacted by the Kubernetes API server. The API server sends them requests to be evaluated, and they respond back with an outcome, which could be to accept the request, reject the request, or accept the request with some changes, which would be a mutating admission controller. They all run outside of the API server, and the majority of the time they run inside of the very same Kubernetes cluster. And it works. This is a proven technology that is used by many open source projects, such as Open Policy Agent with Gatekeeper, Kyverno, and, last but not least, Kubewarden, which is an open source project that I contribute to. It's a project we submitted to the CNCF for inclusion as a Sandbox project.

What makes Kubewarden different from the other projects I mentioned is the way of writing and distributing policies. In the Kubewarden case, you can use a regular programming language, or you can use Rego, which is the query language introduced by Open Policy Agent. You compile the policy into a WebAssembly module, the module is distributed using a container registry, and then it is executed by our webhook server, which is capable of running WebAssembly. It has a plugin system based on WebAssembly that allows more and more policies to be loaded and run. So this is something that is working, and working well. We are pretty happy about our decision to embrace WebAssembly for this task.
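To make that concrete, here is a minimal sketch of what the core logic of such a validating policy could look like in Rust. This is not the actual Kubewarden SDK API; the Verdict type and the validate entry point are hypothetical, but the shape is the same: parse the admission request handed to the module, inspect the pod, and return a verdict.

```rust
use serde_json::Value;

/// Hypothetical verdict type; the real Kubewarden SDK exposes its own
/// accept/reject helpers, this only illustrates the shape of the logic.
enum Verdict {
    Accept,
    Reject(String),
}

/// Validate an AdmissionReview payload: reject pods that ask for
/// privileged containers.
fn validate(payload: &[u8]) -> Verdict {
    let review: Value = match serde_json::from_slice(payload) {
        Ok(v) => v,
        Err(e) => return Verdict::Reject(format!("cannot parse request: {e}")),
    };

    // In an AdmissionReview the pod spec lives under request.object.spec.
    let containers = review
        .pointer("/request/object/spec/containers")
        .and_then(Value::as_array)
        .cloned()
        .unwrap_or_default();

    for container in containers {
        let privileged = container
            .pointer("/securityContext/privileged")
            .and_then(Value::as_bool)
            .unwrap_or(false);
        if privileged {
            return Verdict::Reject("privileged containers are not allowed".into());
        }
    }
    Verdict::Accept
}
```

A module like this is then compiled for a WebAssembly target, pushed to a container registry, and pulled by the webhook server at runtime, as described above.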
But I've always been thinking: what if we could reuse part of these Kubewarden ideas straight inside the Kubernetes control plane? What if we could actually run policies that we define inside of the control plane, making them basically like the admission controllers that are bundled inside of it, but without having to submit all this custom code for our customer-specific needs to the upstream Kubernetes repositories? The idea, basically, is to make the API server aware of WebAssembly and make it leverage computational units written in WebAssembly. Basically, have a sort of function-as-a-service platform inside of the API server, which is there to evaluate incoming requests using policies.

So why would we do that? What do we gain from it? I think there are a couple of advantages. The first one is that we get more predictable outcomes when evaluating policies. With dynamic admission controllers, the API server reaches out to the webhook server and then waits for an answer, which could be accept, reject, or mutate, or it could be nothing at all, a timeout, or an unexpected HTTP status code. These are the kinds of unplanned failures that you have to take into account when you register a webhook, and this is where the failure policy setting determines what has to be done by the API server. You can pretend like nothing bad happened, so you set it to Ignore, and the request is accepted even though the webhook server didn't say anything about it. But in doing that, you might let something dangerous get into your cluster, and by the time you notice, it might be too late. Or you could be more strict and set it to Fail, which means that you're going to reject everything, even things that are legitimate. But then you risk a kind of denial of service, because the webhook server is down and nothing can be done inside of the cluster. So it's a tough place to be. There is no clear answer to what to set for the failure policy. But if you move the policies into the API server, then you don't have all this uncertainty anymore. So this is one reason for going ahead with this idea.

The other reason is, I don't know how you feel about that, but you have kind of a gut feeling that a lot of resources inside of Kubernetes clusters are actually used to run infrastructure code, not user workloads. Take, for example, the Kubewarden stack. We have a webhook server, which is there waiting for requests to be evaluated. It could be doing a lot of request processing, or it could be sitting idle; it all depends on the amount of load that you have on the infrastructure, how many requests are being made against the API server. And at the same time, you have a controller, which is there doing its own reconciliation loop against custom resources. It could be doing a lot of reconciliations, or it could be doing just nothing. So what if we could get rid of those two pieces of the stack and just have the policies run on demand, function-as-a-service style, inside of the API server? I think this would be great for certain scenarios, certain environments, where resource usage is really important. Think about edge scenarios, where you have limited hardware and you have to make the best use of it. And the best use of it is to leave as much space as possible for user workloads. So if we go ahead with this experiment, this is what we can allow our users to do.
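For reference, the failure policy described above is a single field on the webhook registration. A minimal ValidatingWebhookConfiguration showing where it sits (all names and the service reference are placeholders):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy-server            # placeholder
webhooks:
  - name: validate.example.com           # placeholder
    clientConfig:
      service:
        namespace: policy-system         # placeholder
        name: policy-server
        path: /validate
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
    # Ignore: accept the request when the webhook is unreachable, at the risk
    # of letting something dangerous into the cluster.
    # Fail: reject everything, including legitimate requests, while the
    # webhook server is down.
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions: ["v1"]
```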
So the idea that we're going to explore, and that I actually implemented as a proof of concept, is to basically run WebAssembly policies inside of the API server. Now, big disclaimer: I don't know how this is going to end. I mean, this is working, but this is a big change. Maybe people will be completely against it, and I get that. So I didn't want to invest too much time and energy into it, and I don't want to maintain a downstream fork of Kubernetes just to have this feature. That means I tried to reuse as much as possible from the Kubewarden project, because we have plenty of WebAssembly policies ready and I know the stack. But that doesn't mean that, if we decide to go further with this POC, we have to embrace the whole Kubewarden stack and its primitives. That being said, what I have done is introduce a new feature gate inside of Kubernetes that is capable of running these Kubewarden policies and of downloading them from container registries.

So let's see that in action. Here I have a simple K3s cluster made of just one node, which is running only the vanilla K3s stack, so there is no Kubewarden in there. This is a patched version of K3s that has the feature gate I'm talking about turned on, and this feature gate has its own configuration file. Let's take a look at it. Here you can see that we are defining one policy, which is called privileged, and which is being downloaded from this container registry. This is an unchanged Kubewarden policy. This policy is interested in pods, in create and update operations, and it is enforced inside of the default namespace, which means that you won't be able to create privileged containers inside of the default namespace. The next policy we have there is called trusted repos. This one is interested in pod events as well, create and update, and it's enforced inside of the default namespace too. And this one is going to reject containers that are coming from Docker Hub. So if you try to create a pod which has a container image coming from Docker Hub, this policy will prevent you from doing that.

So let's see these policies in action. First of all, we are going to try creating a privileged container inside of the default namespace. And as you can see, this is being rejected. We get back a notification from the API server in the very same way we would with a dynamic admission controller. Now let's try to create a container that is using an image from Docker Hub, the alpine image. And again, the creation is rejected, this time by the other policy we have enforced. Just to prove to you that this is actually working, let's try to create a privileged pod, but this time in a different namespace, the test namespace. And as you can see, the creation happened. That's because the policies are enforced only inside of the default namespace. Now, every time a request is rejected, this code will also emit a Kubernetes event to keep track of it. I wrote a kubectl plugin that allows me to retrieve this information, and as you can see here, I have a fancy table which contains all the events about rejections. So this is all for now in terms of demo. I will just hand the microphone back to Rafa, who is going to talk about more interesting things and an Easter egg I put into the demo.
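For reference, here is a sketch of what a configuration file like the one shown in the demo might contain. The field names and module references below are purely illustrative, not the actual format of the proof of concept, but the information is the same: a policy name, the registry location of the Wasm module, the resources and operations it watches, and the namespaces where it is enforced.

```yaml
# Illustrative sketch only; not the real configuration format of the PoC.
policies:
  - name: privileged
    module: registry://ghcr.io/example/policies/pod-privileged:v0.1.0   # placeholder reference
    resources: ["pods"]
    operations: ["CREATE", "UPDATE"]
    namespaces: ["default"]
  - name: trusted-repos
    module: registry://ghcr.io/example/policies/trusted-repos:v0.1.0    # placeholder reference
    resources: ["pods"]
    operations: ["CREATE", "UPDATE"]
    namespaces: ["default"]
    settings:
      rejectedRegistries: ["docker.io"]   # reject images coming from Docker Hub
```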
All right. Thank you, Flavio. So then we thought, can we use WebAssembly somewhere else? We already have the API server; we can run Kubewarden policies on top of the API server, and that's fine. But can we do something else with the control plane? Well, we thought about logging, maybe doing something fancy with the logs even before they get into memory, or at least when they are at rest, so that if an attacker gets into your machine, they cannot even read the logs from your node. It's important, but it doesn't sound so cool for a proof of concept. Authentication and authorization? Well, you already have the authenticating proxy, so it's not super nice. The controller manager? It's just a controller; maybe yes, but no. Garbage collection strategies? You can already use finalizers for that: add your finalizer whenever you want and delete it afterwards when you want, so in the end you can control garbage collection, even across different namespaces. So yeah, that's not so nice either. And maybe something fancy with etcd, but in the end it's just gRPC, so I can just put something in front of it, or even use something completely different, as long as it implements the gRPC protocol. So it's like, yeah, maybe we cannot get that much done with WebAssembly in the control plane right now, off the top of our heads.

So we also looked at the data plane. Control plane, data plane: we see lots of different people doing really, really nice stuff there. We have Krustlet. We have runwasi, which allows you to run WASI modules on containerd. We have WasmEdge. We have a lot of different things, and we are talking only about Kubernetes-related stuff here; you can run these on and off Kubernetes. But then you get a little deeper into the data plane as we usually understand it, and we have Envoy and Linkerd. I'm just going to pick two examples from them. With Envoy, you can run WebAssembly today in your HTTP filters. The thing is, and this is something we see right now in almost every project that is using WebAssembly, that you need to provide the local file name where the WebAssembly module is, or you can point it at an HTTP server. Honestly, I didn't check whether it validates certificates or whether you can use specific CAs from your system. But in any case, one thing is clear here: the way you distribute WebAssembly modules is still something we have to work more on. So that's Envoy; it's already supported, at least for most of what we can think of for writing our filters. Then, if you also think about Linkerd, this is the state of the service mesh for 2022, and they basically say that they are not looking at using WebAssembly for the core logic, which makes sense, but that as a mechanism for enabling end-user plugins, Wasm may very well make sense for Linkerd, and it's in that light that they are evaluating it. Which makes a lot of sense, because in the end you have a core, and then you allow it to be extended with WebAssembly in lots of different ways.
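To illustrate the Envoy point, this is roughly how a Wasm HTTP filter is configured today: the filter is pointed at a .wasm file on local disk or at an HTTP URI, not at an OCI registry. Names and paths are placeholders, and details may vary across Envoy versions.

```yaml
http_filters:
  - name: envoy.filters.http.wasm
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
      config:
        name: my_http_filter                      # placeholder
        root_id: my_http_filter
        vm_config:
          runtime: envoy.wasm.runtime.v8
          code:
            local:
              filename: /etc/envoy/my_http_filter.wasm   # module loaded from local disk
            # alternatively, a `remote` block with an http_uri and sha256
            # fetches the module over HTTP instead
```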
OK, so that's the data plane. Then we thought, yeah, maybe the data plane is not something we are going to write a proof of concept for right now. So what could we do? We get into the user plane. This might sound interesting. We have two main CLIs for Kubernetes in the user plane, and those are kubectl and Helm.

With Helm, you are already able to say: depending on the architecture where you're running Helm, use this binary, or this other binary at this other path, which is cool for systems where you need different binaries for different architectures. kubectl is not even going to check that; it's just going to run whatever is on the PATH, and that's it. So we thought, what if... I don't know if you know about Krew. Krew is the plugin package manager for kubectl. And basically it was like: what if we try to build something like krew-wasm, where you can write your kubectl plugins with Wasm? And this is what we did. Actually, most of the work has been done by Flavio, so thank you for that. And we are distributing the plugins the same way we are doing with Kubewarden policies today, with OCI distribution. This is written in Rust, and this is basically what we use for talking to OCI registries in order to distribute the Wasm modules. Right now we have two plugins; I only wrote this one. There is the decoder, which is a Wasm plugin, and it's a WASI program in the sense that it has its main function, it has everything; it's a full, let's say, WASI program. And then there is the second one, which is what Flavio was referring to as the Easter egg: if you saw what he ran at the very end, kubectl kubewarden was actually a krew-wasm plugin. So this is actually working, and we are really happy about it.

So, what's better than one demo? Two demos. So let me check. Let me see if I can get to it. We are going to show how krew-wasm works. We are able to list the plugins that we have installed in our system; this is an empty list, so nothing. We can pull one plugin from the network. This is happening as we are talking; this is not recorded, so maybe it's a little slow sometimes. I already pulled that. And the only trick is that I need to have krew-wasm in my PATH. Actually, the directory where the symlinks are created for all my plugins needs to be on my PATH, otherwise kubectl is not going to be able to find them. So now, if I list again, I see the decoder plugin that I just pulled. And I'll show that I just have a plain Kubernetes installation here.

Now let's create one secret. The not-so-secret thing about Kubernetes secrets is that they are not really secret. So what I'm going to show is how we can display the contents of the secret itself. I'm going to create one with kubectl, with a username and a password, and then I'm going to show it as JSON. OK, we have the base64 values over there, that's fine. Now I'm going to run the kubectl decoder plugin that I just downloaded, and this is going to show the contents of my Kubernetes secret. So you can see that, and it shows more information in the table, so it's pretty cool. And what happens if I show a Kubernetes TLS-type secret? The decoder knows about that type and will show it in a different way. Well, first I'm showing it as it comes from kubectl, so you can see it. And now I'm going to read it with the decoder plugin, and you can see that it actually goes and reads the certificate itself and writes out all the information about it. And here I'm just showing that the kubewarden plugin Flavio showed before is the other plugin that we support. That's it. I don't have the Kubewarden installation that he has, where we are registering events; I don't even have Kubewarden installed here, so it's going to be an empty output. But yeah, that's the idea.
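Conceptually, the core of the decoder plugin's secret handling is small: take the Secret JSON and base64-decode every value under .data. A minimal Rust sketch of that idea (not the actual plugin code; it uses serde_json and the older base64::decode helper from the base64 crate):

```rust
use serde_json::Value;
use std::collections::BTreeMap;
use std::error::Error;

/// Decode the `.data` map of a Kubernetes Secret; values are base64-encoded.
/// A sketch of the idea, not the actual decoder plugin.
fn decode_secret_data(secret_json: &str) -> Result<BTreeMap<String, String>, Box<dyn Error>> {
    let secret: Value = serde_json::from_str(secret_json)?;
    let mut decoded = BTreeMap::new();

    if let Some(data) = secret.pointer("/data").and_then(Value::as_object) {
        for (key, value) in data {
            let encoded = value.as_str().unwrap_or_default();
            let bytes = base64::decode(encoded)?; // standard base64, as stored by the API server
            decoded.insert(key.clone(), String::from_utf8_lossy(&bytes).into_owned());
        }
    }
    Ok(decoded)
}

fn main() -> Result<(), Box<dyn Error>> {
    // Payload shaped like `kubectl get secret ... -o json` output.
    let json = r#"{"kind":"Secret","data":{"username":"YWRtaW4=","password":"czNjcjN0"}}"#;
    for (key, value) in decode_secret_data(json)? {
        println!("{key}: {value}");
    }
    Ok(())
}
```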
So, what's nice about WASI today? The really cool thing is that it has a witx definition, and this is going to be ported to wit soon, I think. Then we have primitives that are super useful: clocks, file descriptors, environment variables, command-line parameters, and also initial socket support. As we were saying before, we cannot create outbound connections, but we can create inbound connections, we can listen; so we have inbound socket support, and this is really cool. And what's next? As I was saying, outbound network connections, and wit, which is what is coming after witx. And that's basically it. We also saw that there is an argument limit when we are using wit-bindgen; this is going to be lifted as well, because it will change with the new binary format, so this is not going to be a problem anymore.

One thing that I love about these examples is that your plugins are actually complete programs, in the sense that they check whether you have provided an environment variable called KUBECONFIG or not. If you haven't provided the KUBECONFIG environment variable, they try to find the file in your home directory. Right now we are mounting the whole home directory, and we should reduce that, but the plugin will go and read the kubeconfig file from your home, parse it, and create the network connection to the outside with a Fermyon project, the wasi-experimental-http toolkit, which allows you to create outbound connections. So the plugin is really creating the HTTP requests itself, and that's working fine.

What I really love about this is the question of where you draw the line with Wasm today. Do you have functions that you import into your module and call, which are super complex but run on the host? I see that, day after day, we are making more complex guest programs, where we are able to do more stuff in the guest. With WASI, we just have the exports, we call those, and then we do a lot of the work inside of the guest, which is super nice. Because always marshaling some JSON, handing it to the host, and letting the host do whatever it has to do... we can do better than that, and I think we are making really good progress there.

There are some links on these slides for the experiment that Flavio did, the plugins that we have written, and krew-wasm itself. If you are interested in any of that, just let us know. But yeah, this is basically it. That's everything I have. Thank you, Flavio. Thank you.
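As a closing reference, here is a minimal sketch of the kubeconfig lookup described above, written as a plain WASI program. It only relies on environment variables and file reads from a preopened directory, which WASI supports today; the outbound HTTP call to the API server would go through an extra shim such as wasi-experimental-http and is not shown.

```rust
use std::env;
use std::fs;
use std::path::PathBuf;

/// Locate and read the kubeconfig from inside a WASI guest. Assumes the host
/// preopens the home directory and forwards the relevant environment
/// variables (for example `wasmtime --dir $HOME --env KUBECONFIG=... module.wasm`).
fn read_kubeconfig() -> std::io::Result<String> {
    // Prefer an explicit KUBECONFIG environment variable...
    let path = env::var("KUBECONFIG")
        .map(PathBuf::from)
        // ...and fall back to $HOME/.kube/config otherwise.
        .unwrap_or_else(|_| {
            let home = env::var("HOME").unwrap_or_else(|_| ".".to_string());
            PathBuf::from(home).join(".kube").join("config")
        });
    fs::read_to_string(path)
}

fn main() {
    match read_kubeconfig() {
        // From here the plugin would parse the kubeconfig and talk to the
        // API server through an outbound-HTTP shim; that part is omitted.
        Ok(cfg) => println!("kubeconfig loaded ({} bytes)", cfg.len()),
        Err(err) => eprintln!("could not read kubeconfig: {err}"),
    }
}
```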