Good morning. Looks like we don't have that many people today. Hey, good morning. Morning. I like your shirt. It's a real throwback, right? Yeah, it was quite a famous one back in the day. I think I have that shirt somewhere.

I don't know about anybody else, but I haven't joined this meeting for a while, and between having to log in and having to find the passcode, it was non-trivial to get on this call. So I'm wondering if other people are struggling with the same thing. Yeah. Give them a few more minutes.

What's the attendance been like recently? It's been fairly good. It hasn't been like 20 or 30 people, but typically we get seven to 12. Oh, okay, so more than today. Cool. Let's maybe wait a while and see if anybody else shows up. It would seem like a waste to—well, we can record it and people can watch the recording. It's already being recorded. Otherwise we could postpone until the next meeting, I guess. Yeah. Maybe everyone's still busy counting votes. Yeah, it's the election, so people are watching that.

Maybe we can just give it a go, right? So, Taylor—it's going to be recorded anyway, right? Yeah, we can if you want. The only thing I worry about with a small audience is that it doesn't give a lot of people the chance to ask questions, which is what I assume people have, given the nature of this project. Or maybe people have nothing—I never know. I'm usually a very noisy guy when it comes to asking questions, so I'll be that question-asker if we want to go ahead. We can always repeat it in a couple of weeks if we think it's worth it. But given that we're all here, we may as well go ahead and I'll ask questions. How about that?
Yeah, I don't care either way. I just wanted to make sure—I figured people wanted us to come present what was going on with Krustlet because they're curious about what it is and probably have a bunch of questions. No matter what I present and demo, I figure people will still have questions, so I'm trying to make sure we give people that opportunity if they want it. But if we think we can cover most of it here, that's perfectly fine by me. Oh, we got somebody else—Derek.

And we can do Q&A on the Slack channel as a supplement. It's never the same as in person, but it's better than nothing. Yeah, it's the runtime Slack channel in the CNCF workspace.

So should I just go ahead, or do you have other things to do first? Go ahead. Okay, let me make sure I have everything shared—let me double-check. One second. Okay, I'm going to go ahead. I was planning on just showing a few slides. I'm not a big slide person, and I figure people want to dive into how we've done things and what we've architected, but these first slides are just meant to cover the purpose. I'm reusing them from another slide deck, so forgive the theming, but they contain the pertinent information and give some overview of why we built Krustlet and how we did it.

For the recording, just an introduction: my name is Taylor Thomas. I'm a senior software engineer at Microsoft, and I'm one of the lead maintainers on Krustlet, which stands for Kubernetes Rust Kubelet—that's where the name derives from. I'll talk a little bit about what it does and why the project exists, and then I'm going to show some demos and, if people are interested, the architecture. That's the general overview of what I'm going to cover.
So let me go ahead and share my screen—and of course it showed up on the wrong display, as is wont to happen. Okay.

Just a little bit of background: what's this whole Wasm thing? The Krustlet project was really meant to work with Wasm, and Wasm stands for WebAssembly, if you haven't heard of it. A quick overview in case it's completely unfamiliar: it's basically compiled binaries that can be run in a browser through JavaScript—that's how it's mostly been used—but it can actually be used to run things outside of a web browser as well. And that's where we introduce another acronym that I'll talk about as we go, called WASI. We like our W's in this space, kind of like Kubernetes likes its K's. WASI stands for the WebAssembly System Interface; there's a landing page for it right here on this slide—it's a pretty simple one. What it is, is a standard for interacting with the host system, no matter what the OS is. It's a very well-defined set of things you can do. It's very new, so there are things that are missing—right now there isn't full networking support, though other people on my team and in the community are working on some of the initial networking support for WASI. But it has definitions of how you write to a file descriptor, and the security model of WebAssembly just comes with it. So it's a defined interface, which means we can run it everywhere, and I'll try to show a little bit of that while demoing today.

Quick question while you're going: would you like us to ask questions along the way, or would you prefer we keep them to the end? You can ask them along the way or at the end, whatever you prefer—I can handle both just fine. Okay, cool. This WASI sounds very interesting.
I was just curious—there have been attempts, like POSIX decades ago, to do essentially some of these things. How does this differ from something like POSIX? Yeah, so this is a very common question: people have tried stuff like this before. And when we get to the security model in a bit, people say the same thing. The completely blunt and honest answer is, I'm not entirely sure whether it will turn out differently or not, but there are signs that a lot of people are coalescing around this. WASI has become the forefront of it. There are lots of efforts in and around the Wasm space, but Wasm has a distinct advantage because of its history of being used in the browser: like I said, it has a very common set of things it can do, a common way of interacting with things, and a security model that goes with that. It could end up being like the earlier attempts—people tried this before with POSIX, and this might not end up any different. But there's a very concerted effort in this community right now to get it to a state where it could actually be used on all systems. Even right now, with its limited features, I can take a Wasm binary that's been compiled against WASI, compile it on my Mac, run it on my Windows machine, run it on my Raspberry Pi over here, run it on a Linux VM somewhere—I can run it anywhere I want to. The fact that it can already do that, and that it's on its way to defining these other things, makes me think there's a distinct possibility. But I don't think any of us know for sure that it won't just end up like any other effort to make something cross-compatible across everything, like POSIX was. Okay, yeah—I was just going to make the counter-argument. I'm just playing devil's advocate.
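To make the "compile once, run it anywhere" point concrete, here's a tiny hypothetical sketch—the program and the `GREET_NAME` variable are made up for illustration, not part of Krustlet. It's ordinary Rust using only the standard library, and (assuming the `wasm32-wasi` target is installed via rustup) the same source builds unchanged with `cargo build` for a native target or `cargo build --target wasm32-wasi` for a WASI runtime:

```rust
use std::env;

// Build a greeting from whatever the host passed in. Nothing here is
// platform-specific: WASI exposes environment variables and stdout
// through the same std APIs a native OS does.
fn greeting(who: Option<String>) -> String {
    match who {
        Some(name) => format!("Hello from Wasm-land, {}!", name),
        None => String::from("Hello from Wasm-land!"),
    }
}

fn main() {
    // GREET_NAME is a hypothetical variable, just for the demo.
    let who = env::var("GREET_NAME").ok();
    println!("{}", greeting(who));
}
```

The resulting `.wasm` file is what a WASI runtime (or a Krustlet node) would execute, on any OS and architecture the runtime supports.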
The counter-argument is that POSIX could do what you just described 30 years ago. So what do we get now? Yeah, I'm going to talk more about the specific benefits and why we dipped our toes into this. I know a lot of these things can be done already; this just enables them in a somewhat different way, and I'll get into that in a little bit when I cover why we did the things we did with Wasm. Good question, though.

The other thing I like to call out specifically—and I'm talking to SIG Runtime, so this is a little bit dumbed down for this audience, pardon that—is where we sit in the layers of abstraction. We all know what OCI is, I'm assuming, in this meeting, and that's a bit too low-level for us. As we were planning this, we asked: how could we enable Wasm on Kubernetes? Could we implement it at the OCI level, with a shim running underneath and all that? Honestly, the process model didn't work out well for us—it was a little too much overhead and too tied to the idea of containers. We also tried implementing the CRI interface—you can look it up, there's a repository called WOK inside of Deis Labs, which stood for Wasm on Kubernetes. But the CRI interface is way, way too container-specific. We couldn't even test it against the normal testing tools, because they assume that everything is a container, no matter what. That's not a problem per se, it's just how it's defined. So that didn't work for us either. So we went up to the virtual kubelet level—and I'm pretty certain everyone here knows what Virtual Kubelet is.
So the idea is that we're masquerading as a kubelet. That's what Krustlet is doing in this case: it masquerades as a kubelet, but it goes a little above and beyond the normal Virtual Kubelet, in that we're implementing other things the kubelet has, almost as if it were a drop-in replacement for the kubelet. That's not its purpose, but we have a lot of the same kinds of features. Right now I've been working on implementing the plugin watcher system so we can have CSI plugins—I actually have a PR open to land that. So that's the level Krustlet sits at: we aren't an actual kubelet in the sense of doing CRI and all that, but we're performing a lot of the same functions and presenting as a node no matter where it's running.

Okay, so here's the answer to your question about why we made this. There are five big things; the first two are a little more self-explanatory. First, there's a security aspect. Wasm is a completely sandboxed runtime, so you have to explicitly grant permissions for it to do something. If you want to open a file, that file has to be explicitly granted to the runtime. I'm assuming the same will hold true for network sockets and other things down the line. This is another place where people ask, "Well, people have tried this kind of security model before." Once again, we don't know for sure that it's going to work this time, but given its success in the sandboxed browser space, we think it can translate very well to a server-side model like we have with WASI. That security is very important to a lot of people, and it's one of the reasons we thought Wasm would be a good choice here. The other reason is density.
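The explicit-grant model described there can be sketched in a few lines. This is a toy illustration of the idea only, not WASI's or any runtime's actual API—real runtimes like wasmtime implement it with pre-opened directory file descriptors, and they also canonicalize paths so `..` can't escape a grant:

```rust
use std::path::{Path, PathBuf};

// Toy model of a capability-style sandbox: the module can only open
// paths under directories the host explicitly granted ("preopened").
struct Sandbox {
    preopens: Vec<PathBuf>,
}

impl Sandbox {
    fn new(preopens: &[&str]) -> Self {
        Sandbox {
            preopens: preopens.iter().map(|d| PathBuf::from(*d)).collect(),
        }
    }

    // Deny by default; allow only if the path sits under a granted dir.
    // (A real implementation would resolve `..` and symlinks first.)
    fn can_open(&self, path: &str) -> bool {
        let path = Path::new(path);
        self.preopens.iter().any(|dir| path.starts_with(dir))
    }
}

fn main() {
    // The host grants this module access to /data and nothing else.
    let sandbox = Sandbox::new(&["/data"]);
    assert!(sandbox.can_open("/data/input.txt"));
    assert!(!sandbox.can_open("/etc/passwd")); // never granted, so denied
    println!("sandbox checks passed");
}
```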
So, when we went from bare-metal blades to VMs, we were able to start packing more stuff onto the compute power we had. Then we went from VMs to Docker containers, which let you pack things even more tightly, with cgroups and everything making sure things weren't stomping on each other. Wasm modules are even better at the density aspect. That ties into the last point about the smaller footprint: you can fit a lot of Wasm modules onto a node, because they're very tiny and run very light. When you get into resource constraints—the resource requests and limits in Kubernetes for managing those things—we haven't gotten there yet or decided how that's going to work, but you can push a lot more onto a node. A lot of these modules are essentially running on a task—if you're familiar with Go, it's basically like a goroutine. If a module isn't doing anything, it just gets parked, so you don't have something constantly running and taking up the space required just to keep a container alive. That density is something we've seen would be a really good feature for a lot of people.

When I say "more control" here, I'm referring not to anything outside the project but to something internal, tied to the reason we didn't use CRI: it was very hard to test. We often have people ask, "Well, why didn't you implement CRI?" And it was just because, right now, this virtual-kubelet level of abstraction works better for us.
It gives us a lot more flexibility, because our entire API contract with Kubernetes is just that we say we're a node, schedule things properly, and update the status. If there's something else down the line in the future, like a CRI v2 that's more flexible, we'd probably look at moving to that model and implementing it.

About the density part: what you have now with Krustlet is, you know, a kubelet tied to WebAssembly binaries, right? So for density, is the plan to have multiple WebAssembly binaries tied to the kubelet? Because it ties to a Kubernetes node, and people will want to run WebAssembly workloads using Kubernetes. That interface is not defined yet, I believe, right? And that's where you want to get?

No—right now it actually runs things just like a Kubernetes pod. You can specify multiple modules as containers in a pod, and you can have multiple pods running on the same node, just like you would with containers. So it can already run multiple modules on the same node just fine. Got it. So a container will be a WebAssembly module, right? Yeah, we've made it pretty much a one-to-one mapping, in the sense that where you'd have a container, you have a Wasm module. Now, they're a little bit different—they're not a one-to-one mapping in the technology itself—but they are in the Kubernetes object parlance: a module is just a container. Got it. Cool, makes sense.

For what it's worth, we ran into similar problems with people trying to use Kubernetes to control virtual machines as opposed to containers. One of the popular abstractions was, as you say, to use the pod: the pod is the thing most of Kubernetes interacts with, and what's inside the pod is less important. It could be containers,
it could be virtual machines, and presumably it could be Wasm modules. Maybe that's where you end up. Yeah, and that's basically why we're doing what we're doing: we're just handling pods, scheduling pods, and running the things those pods tell us to run. So that's a completely correct assessment.

Now, the last two things are really interesting to me. I think they expose a little more of the future of what some people are trying to do, at least from what we've heard—this is still so new and bleeding-edge. One of them is that this can actually "run anywhere." I use that very loosely, with big air quotes, on purpose, because nobody wants to actually make that promise—we know it's never going to be literally the case. But in comparing it with Docker: Krustlet and the Wasm stuff are not meant to supplant Docker. There are plenty of workloads that work better in Docker and wouldn't be worth porting to Wasm, even if everything were in place with the WASI spec. But if we're being honest with ourselves, as much as we say Docker works anywhere, or is more portable, it really isn't—it's essentially a Linux technology. I work at Microsoft; I know there are some very smart people who have made Windows containers a thing that works well, but they aren't really the same thing. They work with very different underlying hosts and libraries, and if you build an nginx container, you can't run it as a Windows container without running a VM. Compare that to WebAssembly modules: like I said, I can compile a module on one computer and run it on any other computer.
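The one-to-one mapping mentioned above—each "container" in a pod spec is really a Wasm module—can be sketched like this. The types and names here are purely illustrative, not Krustlet's real ones:

```rust
// Toy sketch of how a Krustlet-style node views a pod: the Kubernetes
// object still calls everything a "container", and the node just treats
// each container image as a Wasm module to run as a task.
#[derive(Debug)]
struct ContainerSpec {
    name: String,
    image: String, // e.g. an OCI reference pointing at a .wasm artifact
}

#[derive(Debug)]
struct PodSpec {
    name: String,
    containers: Vec<ContainerSpec>, // multiple modules per pod is fine
}

// "Running" a pod means starting one module per container entry.
fn modules_to_start(pod: &PodSpec) -> Vec<String> {
    pod.containers.iter().map(|c| c.image.clone()).collect()
}

fn main() {
    // Hypothetical pod with two modules, named like containers.
    let pod = PodSpec {
        name: "hello-wasm".to_string(),
        containers: vec![
            ContainerSpec {
                name: "main".to_string(),
                image: "registry.example/hello:v1".to_string(),
            },
            ContainerSpec {
                name: "sidecar".to_string(),
                image: "registry.example/log:v1".to_string(),
            },
        ],
    };
    println!("starting modules: {:?}", modules_to_start(&pod));
}
```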
And so that's why I'm saying "run anywhere" very loosely—there are caveats to everything—but it's much more portable than a Docker container in that sense, because I can have any sort of node running this workload. Even right now, Krustlet itself has support for Mac, for Windows, for ARM64, and for Linux, so these things can run pretty much anywhere we want them to.

The last thing is tied to the density: the smaller footprint. Docker, as I think we all know if we've ever tried, has some pretty heavy overhead for smaller embedded devices. If you're on a Pi, you can run something like K3s, which is awesome—I often use K3s for the control plane when I'm running a Krustlet cluster out of Raspberry Pis. And smaller embedded devices are becoming more common; it's this idea of the edge. I know that's still kind of a buzzword, but a Wasm module has very little overhead and can run with a much smaller footprint than a container can. That's one of the reasons we have Krustlet: we're able to run on those smaller devices with much more ease than with full Docker or container overhead. I know people have gotten containers working there—that's why I said it's not a perfect dichotomy—but I think this makes it a lot easier to run on these smaller devices. And one of the implementations we have—it's called a provider, which I'll go over in just a second—uses wasm3, which is actually optimized for smaller embedded-device runtimes.

I have a question on that one. Is that density mostly achieved by basically having one Wasm process that's running all of these modules? Yes, there is one Wasm process, and each Wasm module has its own memory space. And there's a whole debate around what the security model is there,
which people always ask about. To be honest, that's not my forte, but I do know it's still under debate right now—how exactly it'll work, what the implications are, and what the worries are when you run them all in the same parent process. But yeah, right now that's how it's done, and we achieve the density from two things. Number one, the size of the Wasm module: for a simple server demo or a hello-world kind of example, even a small Docker container is still at least 10 to 20 megs, while a Wasm module is bytes or kilobytes—worst case, maybe a meg. So you get a huge size reduction there. And number two, the fact that we're sharing that same parent process: because modules run basically as tasks, those tasks can get parked when they're not doing anything.

Yeah, image size is definitely one thing that traditional containers—I don't know if we have a name for that—could never beat Wasm on, just because with Wasm the "operating system" basically comes with the runtime. But I was thinking from a Kubernetes perspective: would it be ideal to have one process per pod, so you can still throw these processes into namespaces or cgroups or something, to add that extra layer of security on top of what's in the Wasm process? Yeah, and that's maybe something we can look at. If this is something people are getting interested in and want to try out, that's exactly the kind of feedback we welcome, and maybe we can wrap some of those things in. The main caveat around relying on cgroups is that one of our drivers from the beginning has been that this is cross-platform.
And let me tell you, for the whole plugin system—boy, was that an adventure to get working on Windows and Mac and Linux, just because of the differences between operating systems. So anything we do, we want to make sure it's cross-platform, which is why relying on cgroups directly is probably not a good option right now: it's limited to the Linux side of things. Well, I think that's more about adding a layer—if you're on Windows, you might have a different way of isolating the processes; cgroups are just how you'd do it on Linux. Yeah. And if you're just doing local development, you wouldn't even bother with it. Yeah, and I think that's something we'll continue to look at. Because it's so bleeding-edge with WASI, we don't want to make assumptions yet about security—like whether you'd add an additional wrapper such as a cgroup around it—or what that would look like, because it's all so new.

One other question—sorry, this is maybe slightly off-topic, but hopefully it's useful to other people as well. These Wasm modules: are they interpreted or are they compiled? And if the latter, are they compiled just-in-time or ahead-of-time? Well, all of the above. There are different compilers that can be used. I want to say the one we're using is ahead-of-time, but there's just-in-time, there's ahead-of-time, and there are some runtimes that are just interpreted. If we want to talk more about that, I can invite one of my coworkers who is completely headfirst, knee-deep in this space inside the WASI community. But I do know there are multiple ways to do it, and for obvious reasons you choose one or the other depending on the kind of optimizations you want.
And just to contextualize my question: a lot of this debate is around portability and the ability to take high-performance things and execute them on multiple different platforms. There have been many, many efforts over many years. Some of them were interpreted and then became just-in-time, like the Java kind of stuff, and other stuff was binary-only, but then had all the challenges associated with non-portable binaries or emulation layers or whatever. So I was trying to figure out which of those three spaces this fits into, and it seems like it can choose among the three, if I interpret your answer correctly. Yeah. If you want to learn more about that, I can either connect you directly to my coworkers, or we can come back next time—if we present again—and talk about that.

So anyway, let me talk a little about what we have in there before I demo it, and then the idea of what a "provider" is in our project. Basically, what we have implemented is the basic pod lifecycle: things like downward API support, environment variables, hostPath, secret, and config map volumes, and support across all the operating systems I mentioned. We don't have the cloud-provider volume types yet, but we're going to have them by the next release. Like I said, I just put in the plugin discovery system, so now we'll be able to implement CSI. It'll look a little different because we're not going to have a sidecar container running on Wasm for this, but you'll be able to implement CSI things. We haven't implemented some of the eventing stuff, and we don't have full Kubernetes networking support yet, so we're not totally tied into that system. That's actually targeted not for this release, but the release after.
So this is just to make sure people are clear: this is still very new because of the space it's sitting in. There are certain things that are still missing, but we're moving very quickly. We're hoping—don't hold me to this—for a 1.0 release around February, where we can say, okay, now you can start using it for things. It'll still be new, and still missing things due to the space it's in, but it will be solid and tested.

Now I also want to quickly cover the idea of what a provider is. We stole this term from Virtual Kubelet. The way Krustlet works is that it's actually just an abstract kubelet-running system, and it delegates the logic of actually running the thing you're trying to run to something called a provider. You can implement a provider for anything—one of the other maintainers of the project actually has an OCI provider implemented for Krustlet, so it can run normal containers. And we have two that we implement that are kind of in-tree, called waSCC and WASI.

waSCC is another project out in the community that is an actor-based model. It has a host runtime, and it uses something called capabilities. These capabilities can be two different things: they can be provided by another Wasm module, or they can be a native capability, which is just compiled against the operating system it's on. waSCC actually has network support, because it uses a native capability for networking. This actor model allows you to hot-swap things: you could swap out the networking implementation, and none of the other things have to know about it.
You can swap out any of the capabilities, and the other Wasm module that's consuming them doesn't need to know that anything changed—it just swaps out. waSCC has also added a strong security model on top of normal Wasm modules: we're talking key signing. A module has to be signed and embedded with a token that validates it's allowed to run on the host it's being provisioned to. To be clear, though, waSCC is a little bit of a square peg in a round hole here, because waSCC is kind of its own ecosystem too. It can run its own thing outside of Kubernetes. It even has a cool feature called lattice that allows you to arbitrarily connect nodes into a meshed cluster, and modules can talk to each other across that lattice. It has a whole bunch of support for things like streaming, file storage, and logging—all sorts of things that are pretty cool—and we also have the implementation for Krustlet. The downside is that if you're going to use waSCC, it's kind of a buy-in to the ecosystem, because it has a different application design than a traditional Kubernetes one.

A minor comment: we actually had waSCC present in one of our meetings. Yeah, I remember that—I'd forgotten I saw that. I talked to them about it; Kevin is a good friend of ours. We've been collaborating with them for a while, and we're actually moving the waSCC provider out from our tree to under the waSCC project umbrella in the next few months. Yeah, and he mentioned Krustlet too and how you're working together. It's great to see the collaboration.
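Stepping back, the provider abstraction itself can be sketched as a trait. This is a simplified, hypothetical shape—Krustlet's real provider trait is async and has more methods—but it shows the division of labor: Krustlet does the kubelet-facing work (node registration, scheduling, status), and the provider decides how pods actually run:

```rust
// Hypothetical, heavily simplified "provider" interface. Krustlet calls
// into something like this when the control plane schedules, deletes,
// or asks for logs from a pod on the node.
trait Provider {
    fn add(&mut self, pod_name: &str) -> Result<(), String>;
    fn delete(&mut self, pod_name: &str) -> Result<(), String>;
    fn logs(&self, pod_name: &str) -> Result<String, String>;
}

// A stub WASI-flavored provider that just tracks what it was asked to
// run; a real one would fetch the module and start it as a task.
struct WasiProvider {
    running: Vec<String>,
}

impl Provider for WasiProvider {
    fn add(&mut self, pod_name: &str) -> Result<(), String> {
        self.running.push(pod_name.to_string());
        Ok(())
    }
    fn delete(&mut self, pod_name: &str) -> Result<(), String> {
        self.running.retain(|p| p.as_str() != pod_name);
        Ok(())
    }
    fn logs(&self, pod_name: &str) -> Result<String, String> {
        if self.running.iter().any(|p| p.as_str() == pod_name) {
            Ok(format!("logs for {}", pod_name))
        } else {
            Err(format!("pod {} not found", pod_name))
        }
    }
}

fn main() {
    let mut provider = WasiProvider { running: vec![] };
    provider.add("hello-wasm").unwrap();
    println!("{}", provider.logs("hello-wasm").unwrap());
    provider.delete("hello-wasm").unwrap();
    assert!(provider.logs("hello-wasm").is_err());
}
```

Swapping in a waSCC, OCI, or even a Nomad-backed implementation of the same trait is what makes the provider model flexible: the kubelet-facing half never changes.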
And I mean, I think one of the challenges is also, you know, tying it all to Kubernetes, right? Because a lot of people are using Kubernetes, and then you're talking about maybe people using K3s, right? Anyway, just a comment. Yeah, well, and that's the thing: the providers are there to offer flexibility, and that's why we have this model. If someone wanted to implement one for, say, HashiCorp's Nomad, or for functions on Azure or AWS or Google, you could do that with a provider.

But in this case, we just have these two that we support, and WASI is the more reference implementation of everything. It follows the WASI standard. The caveat is that it doesn't have networking yet, as I mentioned. The WASI provider follows more of the traditional Kubernetes runtime model—not that it is a Kubernetes runtime—in that you have a pod that contains containers, and those containers, which are WASI modules, act as individual processes. Technically they're running in the same host process, if we're talking actual technical details, but they follow the mental model Kubernetes has: each is its own individual thing, and eventually, when we get the networking support in, you'll be able to connect them using services and other things.

So that's the overview of everything. Were there any other questions about the details of the project, or why we created it, before I go into the demo and show how it all works? I'd be curious about some of the contributors you have, and some of the use cases today. Are there any production use cases?
There are no production use cases, for the reason that this is still so new. We still have the warning on there—the big warning sign that says, do not use this in production, please. So we don't know of anyone using it in production, and I hope nobody is yet. But we have had a lot of people connecting it and trying things in various ways. We've seen someone in the community from—Octail? Okta? however you say that company's name—who did a demonstration using OpenFaaS and Krustlet and Wasm modules, which was really interesting. I know a lot of people have reached out to us from the IoT edge space, because there's been a lot of work recently on how to use Kubernetes to schedule things out to these leaf nodes at the very, very edge. So we've seen a lot of people talking about it there. And I'm assuming whoever uses waSCC has probably also looked at some of this stuff, if they do Kubernetes things as well. We've mostly been trying to get things implemented all the way through before focusing on "here's the case study of company X and company Y." But the people who've been reaching out are mostly people tinkering on the edge, saying "I want to try doing things with Wasm," and people doing IoT edge stuff—those are the big categories we've seen.

I guess this also addresses it a little: these are our next steps. The 0.8 release is going to be the one where we solidify more of the demos. We have some basic demos, which I'll show, but we want more production-like demos—something people can use as a reference for an actual real-world application.
And so that's why I'm trying to get things done like CSI volume support and full networking support, because those are critical for the full real-world picture; those are some of the most powerful examples that people want to see. So I have a question. When you talk about Wasm modules, do people actually want to pack more capabilities within, like, a WebAssembly binary? Do we want to add more capabilities inside that? Or do they just want to decouple more of the little components into different places? I would imagine you have the capability to create the modules either heavyweight or lightweight, right? But have you heard how people want to run these modules, or is it up in the air? It's still up in the air. So waSCC focuses on very small modules because it's an actor model, right? Each module is supposed to be an actor in the system, supposed to do one specific task, and so those are meant to be very small. I imagine that, just like with containers, we'll see a little bit of both. But we don't have enough real-world usage to completely guarantee that's going to be the case. I think we'll see a little bit of both use cases there. I'm guessing, though, that initially it'll probably be the smaller, constrained workloads that'll be the first targets people try, just because they're smaller and easier to get going than something bigger inside of a waSCC-compiled binary. And I think that'll change as waSCC gets more and more things solidified. But "I am not sure" is the complete, truthful answer. Cool. Cool. Yeah. Thanks. Anyway, I'm going to go ahead and demo, and kind of explain the architecture as well, about how it works.
If we have extra time and people are curious, I can also explain why we chose Rust, why it has gone well, and some of the drawbacks, but otherwise I'm just going to go through the demo and the architecture real quick. So let me switch screens really quick. Okay, let me increase the font size, and double-check that my cluster didn't die. So I was going to try to have what I was calling a Franken-cluster altogether, but of course the demo gods conspired against me. I was going to have a random Windows VM connected and my Raspberry Pi connected, but then my Raspberry Pi had issues and I couldn't even get into it over SSH, and my Windows machine was having problems. So of course I can't really show that, but I promise you can run these on any operating system; I'm not just making that up. Such is the life of working with a demo. So right now I'm just pointing at minikube. We have instructions for most of the major cloud providers at this point. If you use another cloud provider and you find a problem with it, or there's no documentation, that's something we're always looking for: more documentation there. So you can run it against anything. The two requirements are: you need to be able to create a bootstrap token, so you can generate a bootstrap config; and you need to be in a place where the Kubernetes control plane can reach you over the internet somehow, whether that's through a tunnel or something else. One of the things I do if I'm running a local machine against, say, an AKS cluster is use inlets to set that up. I don't have that set up now, and that's why I'm doing it on minikube, because you need that endpoint coming back so that the control plane can ask the kubelet for the container logs.
If you don't want to worry about logs, you don't even need that; you just need the bootstrap token. So I'm actually going to clear this all out so you can all see what it looks like. There is a bootstrap script that's included, and this is one of those things where we're hoping we can eventually have a one-click process; this is just a tool we're using for now. This bootstrap script is just creating a bootstrap token. I can actually show it, because this is an audience that might actually care. So we have one for Windows and one for Linux, or Unix-like systems, I should say. And it's doing basically exactly what kubeadm does: it's creating a token (so you'll need admin credentials to your cluster to do this), then creating the secret in the proper location, and then generating a kubeconfig for the bootstrapping process. And we can actually see right here that it's set up and has that token that was generated. So it'll do all the bootstrapping; it has the bootstrapping process implemented from the normal kubelet. And so now when I start it, I'm just going to use a shortcut that compiles it and passes in the right flags. I can actually show you what it looks like. I'm just going to be running the WASI provider right here, so it's running the krustlet-wasi binary with the proper flags to give it a node name, what port it's listening on, and the certificate it wants to use, or where to output it to. And so when I run that... I thought it was built, but maybe I made a change. So it's going to go ahead and start up and run. And while that's running, I'm going to come over here; just a second. Like I said, the demo gods have not been smiling on me lately, because I just compiled this last night and apparently I accidentally made a change somewhere. It's compiling right now, down here; it's just doing the final linking step.
Yeah, there we go. Okay, so you can see it running, and it tells me, okay, I'm ready for you to bootstrap. So if you have it set up properly, the kubelet will automatically approve the client certificate for authentication, but you have to manually approve the certificate for the TLS endpoint it's going to serve on. So I'll go ahead and approve it right over here. And now that's been approved, and we see back here that it's going to run. Right now, one of the little bugs we have is that we can't tell kube-proxy to stay away, because it tolerates everything. But you can see that it's up and running and trying to do its thing. Then, just so I can show both of them, we'll go ahead and run waSCC as well. Okay, so while WASI is sitting here running, I'm going to go ahead and create an example demo. We have a couple of these in the main tree, so I'm going to do a kubectl apply. We actually have a couple different examples for each one. In this case, we have examples in pretty much all the languages that have really good WASI support, and there are more languages being added all the time. We have one in C, one in Rust, and one in AssemblyScript, each compiled from its own language down to WASI. So we'll go ahead and do the one in C just for fun. It's going to create a simple config map and a pod, and we can actually see with kubectl get pods that it ran. This isn't a long-running pod; it's just outputting a few things. It ran to completion, and I can do kubectl logs on it, and I get back the logs from inside of that pod. In this case, it was just printing out a bunch of things that were mounted from the config map. And if we look at the actual manifest, you'll see that it looks pretty much just normal. We actually store our images, or sorry, our modules, in an OCI-compatible registry, and it's pulled down from there.
We've mapped things in from the downward API and from the config map that's linked in. The only thing that's different is we have to make sure we node-select on the arch that we're looking for and tolerate those nodes, because we have it set up so that those nodes repel all normal pods, in case you're running in a heterogeneous environment. And so now I can delete that pod. Let's go ahead and delete it, and let's do the Rust one. Oops. kubectl apply. Okay, so now the Rust one was created. And before I continue, I'm going to quickly approve this guy's certificate. So now if I do kubectl get pods first, we'll see that it ran to completion. It's the same thing, just written in Rust. And then we can get the logs, and we actually print out a file in this case; we're reading a file from the system. And we have the same kind of values there. So these were written in two different languages, compiled to the same target, and you can run this module anywhere. If you're on a Windows machine, you could run it right now. If you're on a Raspberry Pi, you can run it over there. And that's the beauty of this: you can move it around to these different systems as you want, whatever type of nodes you have connected. So there's just a simple example of that. Also, just to show this is running, you'll see that we have these two nodes that have been created, and those are all running and connected with a normal kubelet serving certificate. So the last one is a quick waSCC example. Let's go ahead and just do uppercase. Okay, so this one is actually going to do something a little bit different. We'll see that this is running now, and it is actually serving; like I said, we don't have the networking support connected up yet, but this is actually serving on a networking endpoint.
So if I do curl and pass "foobar", it actually returns back; it's running a server that uppercases everything I send to it. And this WebAssembly binary is actually super, super tiny, because it doesn't have to worry about the server; that's handled by another capability within the waSCC provider. But you can also do some limited networking things right now with the waSCC provider. So there's just the simple demo of this running. And once again, this would run on any system, wherever you want it to run. It's just a really nice, portable thing you have. Anyway, that's the quick demo, and that's all I had for demos right now. Were there any questions? Yeah, on Krustlet: you have a different Krustlet for waSCC and WASI, right? So for the first two examples, you ran it on the WASI Krustlet, and the last one was the other? Yeah. So if we look at this file right here, you'll see that it's selecting on... I don't know why it's selecting on Linux; that's not supposed to be there. But anyway, you select on wasm32-wascc, and it's the same thing with the tolerations right here; they're set to repel things that aren't its proper runtime. Got it. So in the initial step, you actually built the Krustlet for both, right? I think that's what you did initially? Yeah, it was just compiling locally. There are pre-compiled binaries available for download; I was just doing it straight from... Got it, got it. So there are pre-compiled binaries if you want them, I got it. Yeah, we have installation instructions and everything in the docs folder in the repository. I just compiled it directly because I was on my machine, and also because I thought I had already compiled that last step recently, but apparently not. Got it. Any other questions there? Okay, so that's what I had for that. Were there any other questions around the project, what we did?
Does anyone want to know anything else about it? Yeah, you mentioned Rust, right? So why did you choose Rust? Well, okay, it might just be easier to show some examples here. So these are the four reasons we picked Rust. The first and most important is that Wasm and WASI support is the best in Rust. I don't know why it started out that way, but that's how it is right now. It has first-class support for compiling to WASI: you basically just add a compilation target and then say cargo build (cargo is the build tool) with --target wasm32-wasi. And because of that, a lot of the WebAssembly things and the WASI things are built in Rust, so that was one of the biggest reasons we chose it. And I know people often ask this because of our team: I'm still one of the core maintainers of Helm, and our team has lots of experience in Go and other Kubernetes things. The other three things are, first, the safety, which is all about the memory management model of Rust. Rust doesn't have a garbage collector, but it has a very strict ownership model, and if you've ever heard from somebody who started with Rust, you'll hear about fighting with the borrow checker. That's checking that all your data is going to the right places, and it's very powerful. It leads to longer compile times, but honestly the trade-offs have been really good, because it's caught bugs that we wouldn't have caught otherwise. And there are certain bugs we would see, like one we found in Helm that would have been caught by the Rust compiler had it been in Rust, because we were accidentally sharing data and creating a race condition that not even the race detector caught. Those classes of bugs are entirely eliminated with Rust. There are still plenty of bugs.
This isn't some magic bullet, but it avoids whole classes of bugs: there aren't going to be null pointer dereferences or anything like that unless you're writing explicitly unsafe code, which has to be called out as unsafe in your actual code, so you know exactly where it's happening. The other two things were more like bonus features: the extensibility and the developer experience. By extensibility, I mean, here's an example: if I want to pass a specific client in Go, I have to implement the whole interface or whatever, whereas with the generics support in Rust, I can build a custom type that handles almost any object. And this can actually be used with a CRD. The CRD stuff is actually really easy with macros, which are just compile-time code generation in Rust. You can generate a CRD and use the exact same API client interface that you would for a pod or anything else. And that flexibility is very, very nice with Kubernetes, where there are many things that behave very similarly, with just some slight differences that Rust can handle fairly easily. I have some other examples here, but there's not a lot of time to go over just how helpful those things have been. The other thing has been the developer experience. As someone who has done a ton of Kubernetes things in Go, and a lot of things in Go in general: on any large project against the Kubernetes libraries or API, it's an absolute nightmare in Go to upgrade or add any library dependency, because some of them have different versioning schemes, and you often have to figure out specific hashes or specific pinned versions to get things to compile. It happens to us in Helm all the time. In Cargo, it's pretty simple; it just looks like this, and there's also conditional compilation.
So we offer features, like this one here, as a way to opt into using the CLI flags that we have. If you don't want to use the CLI flags, you just turn off the feature called cli, and then you'll be using our normal config objects. You won't even get the dependencies it pulls in, or the code that's there, because it'll be omitted when it's compiled. So you pull in exactly and only what you need, which we've found very, very nice for dependency management. And, like I mentioned with some of the other things, there are macros, how error handling works, the flow control; there's this idea of Results and unwraps, and how you get everything to work is great. But lastly, just to cover it, there are caveats. The Kubernetes library in Rust is missing some of the more advanced features. Streaming manifests in from a list of manifests, like we do in Helm a lot, some of the patch creation, and other things are just not there, and probably won't be for a little while. Those are very advanced features that more advanced projects might miss. And I really miss Go's ease of starting something async, or I should say concurrently: I can just say go whatever and shove something off onto a goroutine. Whereas in Rust, there are two kind of competing async runtimes, and it's kind of a nightmare to figure out which one you want to use, and once you pick one, you're kind of bought into it. And Rust has a really steep learning curve. But what we have found is that once you get used to the language, it is very powerful and offers a lot more flexibility and safety when writing these kinds of cloud native applications. So those were just the extra benefits that came on top of us choosing Rust, primarily for the safety and Wasm side of things. Hopefully that answers the question.
But I know we're at 10 o'clock, so I don't want to run over. Yeah, that's pretty good. Yeah, thank you. Well, thank you for your time. Yeah, thank you for the presentation; I think it was really helpful and informational. Quinton, do you have any last thoughts, or all good? Was that a good thumbs up, or an "I have to run"? So yeah, thank you. It was very useful. Thank you. Thank you, Derek. Okay, bye.