I'm Matt Butcher. I'm the CEO of Fermyon. My ignoble past had me as the creator of Helm, and also the creator of the Illustrated Children's Guide to Kubernetes, all those, you know, giraffes and zebras and stuff. Those were stuffed animals that my kids had. And so it's really weird to see really giant versions of your kids' stuffed animals all over. Michelle?

I was actually in the audience when Butcher first came up with the book and read it for the first time. It was at a little hackathon at a company called Deis back in the day. So, yeah, we've been working together for a while now. I'm a principal engineer at Fermyon, and I work on Spin, SpinKube, some of the cloud stuff, and a few other things.

And I'm Radu. Nice to meet you. At the very least I should say my title or something; we're buying Ralph time to get up here. I'm the CTO of Fermyon, and still trying to stall for Ralph. I do that a lot. And this is Ralph Squillace.

Sure, I'm Ralph. I am a, what am I, a principal product manager. They change the names periodically, but they don't promote me, which is great. At Azure Core Upstream. In Azure, my team does all the building of upstream things like Kubernetes and containers for the service teams, like Azure Kubernetes Service. A little out of wind, forgive me, I was running. Along with Matt and other people in the community, I've been doing WebAssembly for like three years, four years, really concentrating on it. I'm really into it. So I'm going to give the mic back, but that's who I am and that's what I've been doing.

All right, I'll introduce WebAssembly. You've got like three minutes to catch your breath. Yeah. All right, so really what we'll do here: we'll talk a little bit about what we mean when we talk about serverless workloads, what we mean when we talk about WebAssembly, and why we think it is the third wave of cloud computing. Michelle's going to talk a little bit about what the block diagrams of this look like, right? And then Radu's got his, you know, SpinKube cube up here in the front, and he'll walk us through some of these demos in real time. And so our hope is that, as we get through this discussion and have a little back-and-forth chat, it'll give you a lot more clarity and a lot more context behind what many of you heard at the keynote this morning.

So serverless is a term that just gets misused all over the place. But when we're talking about serverless, we actually have a very specific definition in mind, and it has to do with a software development pattern. When I say serverless, what I'm thinking of is: we're doing without that software server that we always have to write, the thing that spins up a socket, listens on a port, and all that control-process logic that goes around it. Instead of starting up a server, running it, handling multiple requests on that same server, and dealing with all the muxing and all of that, serverless is basically event driven. An event comes in, the process starts up, handles that one event, runs it to completion, and then shuts down. The advantage of that is you're not focused on any of the long-running process concerns. You're just focused on addressing that individual piece of business logic or that individual piece of functionality that you care about. Now, this also comes with some really cool performance gains, and that's what we're going to see as we go.
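To make that contrast concrete, here is a rough JavaScript sketch (illustrative only, not any particular framework's API; the handler signature is hypothetical): the traditional model owns the socket and the request loop, while the serverless model exports a single handler and leaves listening, multiplexing, and process lifetime to the platform.

```javascript
// Traditional model: your code owns a long-running server process.
import { createServer } from "node:http";

createServer((req, res) => {
  res.writeHead(200, { "content-type": "text/plain" });
  res.end("hello"); // sockets, ports, muxing, and process lifetime are all your problem
}).listen(8080);

// Serverless model: export one handler; the platform starts an instance per
// event, runs it to completion, and shuts it down again.
// (Hypothetical handler shape, for illustration only.)
export async function handler(request) {
  return { status: 200, body: "hello" };
}
```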
But before we get there, let's talk a little bit about WebAssembly itself. WebAssembly was a technology that, as we all know, was built for the web browser, right? It was built to allow companies like Figma to take C++ code that does high-performance vector math, compile it into a binary format, and have their JavaScript in the browser call into it and get the performance benefits. I believe there's a possibly mythological story that somewhere buried in Excel is an old C library from... No, that's real. In Excel Online, there was an MS Research team that found the LAMBDA function code from 1985, written originally in C++. And two years ago, they compiled that to WebAssembly. So if you use a LAMBDA function in Excel Online inside one cell, that's code from 1985. That's C++. Thanks, WebAssembly. Yeah, thank you, WebAssembly. That's right. Running your 1985 code today.

But really what WebAssembly is is a bytecode format that is constructed so that it has a security sandbox built around it and so that many different languages can be compiled into that same bytecode format. So when you think about the WebAssembly spec, it's basically a set of instructions that you use in this bytecode format and then a set of instructions on how to execute that kind of bytecode.

So you should be hearing this and thinking, well, why was this important in the browser? How do you build a system like this in the browser? Well, you want a couple of different characteristics. The browser runs a lot of untrusted code. We go to websites, we do not view source and check what that JavaScript is doing. Things get side-loaded from all over the place. So the browser's security sandbox has to be really, really good. Likewise, when you're in that browser context, the software that's running there shouldn't care if I'm running Safari on a Mac or Edge on Windows or some one-off web browser on some exotic architecture. It should just work. So you can't tie it to the operating system and you can't tie it to the system architecture. Going back to the old Java mantra, you've got to compile once and be able to run anywhere. The third thing you really need in the browser is fast startup and execution time. We do not wait around for websites to load anymore. In fact, research shows that people's attention span begins to dwindle at around the 100 millisecond mark, and consequently Google's page ranking system uses the 100 millisecond mark as the target time to receive your time to first byte. That was "time" three times in one sentence. First byte, 100 milliseconds, make it so, right? So when you're looking at systems that need to start up and execute in the browser, they need to be really, really snappy. And the fourth virtue for WebAssembly in the browser was, ideally, it should be able to support any language at all, right? Take some old C or C++ code, introduce some brand new Rust code or some Zig code, use the TypeScript and JavaScript skills you've been honing, the Python, you know; any of those languages should be able to run in a WebAssembly runtime.

So if you take that list of four things in the context of the browser and mentally shift to what's important in the cloud, you end up with very much the same list of four things. We need a really good security sandbox. Virtual machines: really good security sandbox. Containers: good security sandbox. WebAssembly: really good security sandbox.
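Going back to that browser story for a second, the JavaScript glue looks roughly like this: fetch the compiled bytecode, instantiate it, and call its exports directly. The module name and the exported `dot` function here are made up for illustration.

```javascript
// Load a compiled WebAssembly module in the browser and call into it.
// "vector_math.wasm" and its exported "dot" function are hypothetical.
const { instance } = await WebAssembly.instantiateStreaming(
  fetch("vector_math.wasm")
);
console.log(instance.exports.dot(3, 4)); // near-native speed, inside the sandbox
```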
I just dissed containers without even saying anything there. WebAssembly should be more secure than a container because it does not have access; the kernel is not an attack surface in WebAssembly, right? It's all sequestered inside the sandbox. Cross-platform, cross-architecture: you saw the demo this morning, right? It's awesome to be able to deploy one binary across a cluster that's split between ARM and Intel. Even more important is the ability to move the workload to the cheapest kind of compute, or the most powerful kind of compute, whatever it is you want to accomplish, without having to send it back to the developers and say, hey, I know I told you Intel, I need it on ARM now, right? So this makes it a very powerful concept for being able to do platform engineering well. And then finally, this idea of multi-language. If we're really talking about running these binaries, the optimal situation is we can say, yeah, if Rust is your favorite language, write it. If TypeScript is your favorite language, write it, and we can run them all in this kind of sandbox.

You want to do this one? This is the boxing one, in my style, but you can do it in your style. I like the boxing metaphor a lot. Well, I feel like everybody heard it this morning. So, yeah. I think you should do this one.

All right, you know, this is the way we got going on this. We were looking at the evolution of cloud, and we sort of started over here, right, where we were hosting things on bare metal, and we had a one-to-one relationship between the piece of hardware we were running and the operating system that was running on top of it. The big change with virtual machines was saying, now we can run multiple operating systems on one piece of hardware. And so we introduced an abstraction layer in the form of the hypervisor, and then we were packaging up, you know, kernel and drivers through all the libraries, all the operating system, and all the way up to our applications. Those things are big, powerful. You call them the heavyweight class, right? Yeah, they're big and they're powerful, and when you knock them down, it takes a while for them to get back up again, right? Their startup time is slow and the image sizes are large, and yet they're still really powerful and very necessary.

If you look over to the container side of the equation, you've got the middleweight class. If I mess up my boxing metaphors, just jump right in here. Because I know boxing. But in this case, Docker gives you the ability to package up a thin slice of your operating system: just the utilities you need, just the piece of the file system you need, and just the long-running server process that you want. Docker is excellent. Containers are excellent for packaging up and then executing long-running processes. But when we get to serverless functions, right, these things that are going to start up, run to completion, and then shut back down again, we want a kind of system that's going to be able to start up instantly and then shut down. It's going to be able to move across architectures, start up on Intel or ARM, be scheduled across all of those architectures, and use the absolute minimal resources that we need. So we end up with this lightweight runtime class, where we're really just talking about that WebAssembly binary and maybe a few supporting files, and that's executing right there in the WebAssembly runtime. I think I'm actually going to skip these slides.
This is kind of showing how scaling up works, and how, in the traditional Kubernetes model, you really have to pre-provision things, because the startup time is slow enough that if you don't provision well enough in advance, you're going to end up asking users to wait 12 or 15 seconds for a pod to come online, and nobody does that, right? So we create autoscalers that are sort of optimistic and try to pre-scale. So the dirty laundry, of course, and everybody does this, this is not really dirty laundry, it's called survival: what people really do is they spin up something close to the peak workload and they let it run. And that's an even worse way. That's the real world that we live in. And that's when you end up with these cases like we've been talking about, where maybe only 15 to 20% of your compute is actually utilized and the rest is just idle, right? And idle is expensive. So ideally what we really want is a case like this, where you can pack in, where you can scale to exactly match the curve. And that's the way we built this particular system: to scale up as requests come in, and scale back down as they taper off. Again, they're serverless functions, so when a request comes in, an instance starts up and runs to completion. You're kind of automatically scaling: you get five requests, you're up to five instances; you get 10,000, you're up to 10,000 instances; then you're scaling back down again. We'll see a good demo of that one. So with that, I'm going to hand it over to Michelle, who's going to introduce Spin and SpinKube in a little more detail than this morning. What if instead of handing it over, I just hand it right back to you? Like I did. I think you did that already. It doesn't work twice. You burned that one. Sounds good.

Okay, so what is Spin? Spin is a developer tool and a framework for building and running serverless apps with WebAssembly. Okay, so what does that mean and why does that help you? When you're getting started with WebAssembly, the learning curve can be a little bit steep. You're going to learn about runtimes and WIT files and the binary format. And if you're like me, I started from the spec. Like, let me just read the spec, which was a terrible mistake, because there's a lot of math jargon in there that I have no idea about at all. I bought a reMarkable just to read the spec, by the way, and I didn't use it for two years. Anyway, so that's fine. You could totally go that route: WIT files, Wasmtime, compile your binary, figure out if your language supports a WASI target, et cetera. No problem. Or you could use a tool like Spin, which gives you kind of a Rails-like experience, or just a quick getting-started experience. And I think Radu might go over that in a bit. So essentially we have some templates you can work with. You pick a language, you pick a template. You fill in the blanks a little bit; we have some conventions you can fill in. And then you run a few commands and you've got your Wasm binary built, your application built, and then you run spin up and you're running your app, which is super nice, and you don't have to touch any WIT files. You don't have to learn about any of it. And you can just get that endorphin rush of, hey, I'm using WebAssembly and I'm doing serverless, and that's really, really fun.
And I'm going to interrupt you, because one of the things you used to say is that the best thing about learning Rails, back in the Rails days, that's how long we've known each other, was that you could write your first blog right away. Your first, was it a blog? Was that the... Yeah. And then back your way into learning about all the mechanisms, or just, you know, kind of blissfully use it as it is with the template language. And Spin was really designed along that same DevX aesthetic, where you can get going really fast and then dive as deeply or as shallowly as you prefer from there. Exactly.

All right. So, intro to SpinKube. We're originally Kubernetes people, actually. We worked on Helm, a lot of the Deis Labs tools, Draft, CNAB, like we've kind of been around the block in the Kubernetes space, and we got excited about WebAssembly. We're like, cool, how do we make this run on Kubernetes? Okay, so let's take a step back and talk about how containers run on Kubernetes. Well, a lot of times they're powered, or not a lot of times, they're powered by a container runtime, in addition to a lot of other things that are helping you schedule and execute your app. And I'm actually going to invite Ralph to talk about... Uh-oh. ...containerd and the containerd shim and runwasi, and kind of how we can enable WebAssembly with that.

Right. So, okay, now I've been thrown the baton here. Kubernetes has a really wonderful abstraction mechanism at this stage in its evolution that allows you to swap out individual container runtimes. And that's containerd, which is really a kind of API specification for how Kubernetes talks to individual runtimes that might be different. The individual runtimes implement the actual inner portion of that API, and the piece that does that is called a shim. And so there are many different containerd shims in the containerd project in the CNCF. There's runc, there's crun, there's, you know, Podman versions, Docker versions, and so forth. And all of those run containers. Well, it turns out that you can actually build a containerd shim that runs other types of processes. And so at Microsoft, one of the things we wanted to do was be able to integrate the benefits of WebAssembly into the benefits of containers in Kubernetes, and have it be as seamless as possible. So we invested in building something called runwasi, which is a containerd shim crate in Rust that anyone can use to build their own customized shim that can be installed and just work seamlessly in Kubernetes. And that's actually what we originally did in cooperation and collaboration with Fermyon and some other people. Docker helped, WasmEdge, that's Second State, helped out, for example, and so forth. And that is the work that we used in the collaboration with ZEISS. That same shim work, the runwasi project, is part of containerd. But the shim that we built for Spin is part of SpinKube. So you can actually use that, and it's optimized to run Spin workloads, because the developer experience is so good. So that's how it works, both for containers but also for WebAssembly. And I'll leave you with this last thing: we also built it to run both containers and WebAssembly. Which means the Spin shim in SpinKube can run with a service mesh sidecar just like you do now, or something like Dapr or whatever it is; you can run more than one container or WebAssembly module in a pod, and it's transparent to you.
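To make that concrete, on each node the Spin shim is registered with containerd and then exposed to Kubernetes through a RuntimeClass. A minimal sketch of those two pieces, assuming the runtime and handler names used in the SpinKube docs at the time (your install may differ):

```toml
# /etc/containerd/config.toml (excerpt): register the Spin shim as a runtime.
# containerd resolves this to a containerd-shim-spin-v2 binary on the node's PATH.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v2"
```

```yaml
# RuntimeClass that pods (or the spin-operator) reference to pick that runtime.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin   # must match the runtime name registered with containerd above
```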
That's how that works. Did I do okay? You did great. Did I do okay? Made sense? All right, now off the stage. I'm done. Just kidding. Help me.

So essentially, okay, you have this amazing containerd shim, and I just want to stress how amazing this is. It took a really long time to make and get right and make sure it all works beautifully. So it's a really key piece of SpinKube. Now, once you have the shim, you need to be able to install the shim; and once you've installed the shim, you need to install a runtime class in Kubernetes; and once you've got that, you need to make sure that your pods and services are configured correctly to use all of those things. And it ends up piling up and becoming a really big issue. So rather than going through all the complex tasks of orchestrating exactly how you might run your pods and workloads in Kubernetes, we actually built SpinKube, which is a set of projects that lets you install everything and run everything really seamlessly in a container, excuse me, Kubernetes-native way. And that's essentially SpinKube. You've got your shim, you've got your installer, you've got your custom resource, your SpinApp custom resource, an operator that orchestrates and makes sure that everything is running as expected, and you, at the end of the day, just get to run your pods and workloads on Kubernetes without having to think about any of this. And that would run on anything from a small k3s cluster all the way up to the biggest Azure cluster you can provision, because it's built the way Kubernetes expects it to be built, and it runs the way Kubernetes expects it to run.

You want to start doing some demos? Sure. We've got some other demos and some other things that we're showing off at our various booths today, so stop by the Microsoft booth and find Ralph, or stop by the Fermyon booth and talk to any of us. Or just yell at us in the hallway, pull us aside, and demand things. That's fine. While Radu is setting up really quickly, I also want to point out that because SpinKube has the shim, and because the shim is compiled for all the different architectures and operating systems you might logically be interested in using, and can be compiled for many more, the truth is you can use SpinKube anywhere. So although Microsoft is very proud of having invested in this area, because we think it's really important for our users, you can use this anywhere: any company, any cloud, on-prem, and even in one of these little boxes.

Cool. So we've heard a lot of words, and now we're going to try to put everything together in something that runs anywhere from this tiny k3s cluster all the way to clouds and multiple node types and all of that. So let's dive a little bit into Spin. Spin, as Michelle mentioned, is an open-source tool. It gets you from not having a project to being able to build it, push it to your container registry, and then run a WebAssembly application. Spin has a couple of commands, the first of which is spin new. It comes built in with a bunch of templates that can help you get started with your language of choice. In this case, I've built a tiny application in JavaScript, and there are two parts to a Spin application. The first is a tiny configuration file that gives you a little bit of metadata and then the routes, what's happening in this application. This is an HTTP app, but Spin can help you build any kind of event-driven application.
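The configuration file being described looks roughly like this, a sketch in the Spin v2 manifest format (the application name, route, and paths are made up, and field names may vary between Spin versions):

```toml
# spin.toml: application metadata, triggers (routes), and per-component settings.
spin_manifest_version = 2

[application]
name = "hello-spin"
version = "0.1.0"

[[trigger.http]]
route = "/..."                      # which requests this component handles
component = "hello-spin"

[component.hello-spin]
source = "dist/hello-spin.wasm"     # the module instantiated for each event
allowed_outbound_hosts = []         # capabilities (files, hosts) are opt-in

[component.hello-spin.build]
command = "npm run build"           # Spin shells out to your native toolchain
```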
In this case, the event is an HTTP request, but the event can be an MQTT event, it can be a Redis or any kind of queue trigger; we have support for a couple of those. Matt talked a little bit about what happens whenever there's an incoming event: the way this works with WebAssembly is that the system loads the WebAssembly module, instantiates it, executes it, and then shuts it down. And so the source field tells Spin that whenever there's an event, this is the module that needs to be executed. And then there's a little bit of metadata about building this component. Spin just calls into your native toolchain. In this case, it's npm run build, because it's a JavaScript app. If you're using Rust, it's a cargo build; if you're using Go, it's the TinyGo compiler. The second part of a Spin app is the source code. I mentioned that with Spin we're building serverless-style applications, so we're following the serverless application model, which means you're not starting with a web server, you're starting with a handler, with an event handler. In this case, the handler takes an HTTP request, and the response is an HTTP response.

spin build is the command that calls into that native toolchain to build your WebAssembly component. And then we have a spin registry push command that takes your WebAssembly binaries, you can have multiple components in the same application, plus that piece of metadata, and pushes it all to an OCI registry. You can continue to reuse any of the registries that you use today, anything from Docker Hub to GitHub Container Registry to any of the cloud-provided container registries. This produces an OCI artifact. You can continue to sign it the way you do today; you can continue to use all your software supply chain tools. Once you've pushed it, this is where SpinKube comes into play. You can take the registry reference that you just pushed, and then spin kube scaffold is a command that basically generates a Kubernetes manifest for you. And I'm going to open the one from this application. It is a simple CRD that defines a Spin application. If you've used Kubernetes, my assumption is you've seen a CRD with an operator before; this is just that. It defines a Spin application, it defines the name of it, and then the spec contains things like the reference from the registry that I need to fetch, and then the executor, which Ralph talked a little bit about: the containerd shim. And then this is where the spec for the application goes: replicas, volume mounts, readiness probes, liveness probes, all the things that you're used to when running Kubernetes workloads, you can do that here. And then once you have that, you can do kubectl apply, or whatever GitOps tool you might be using; this is just a YAML file that you deploy into your cluster. And then you have a SpinApp object in your cluster.

So in this case, we have a Spin application with 10 instances. What I'm going to do is, again, this is running on this tiny cluster, so I'm going to try to scale that to one first, and hopefully we'll see this graph go down very quickly. And then I'm going to try and scale it back up to, say, 50 maybe. And because this is a WebAssembly module, a WebAssembly component, and not a large container, the startup time in Kubernetes is significantly faster than for anything else. So in this case, again, this is a tiny cluster that runs here on this chair, and it is now running 50 instances of that application.
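For reference, the scaffolded manifest and the scaling steps look roughly like this; the API group, executor name, and image reference follow the SpinKube docs from around the time of this talk and are illustrative rather than authoritative:

```yaml
# spinapp.yaml, roughly what `spin kube scaffold --from <image-ref>` generates
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: "ttl.sh/hello-spin:1h"     # the OCI artifact pushed with `spin registry push`
  executor: containerd-shim-spin    # the shim discussed earlier
  replicas: 10
```

```console
$ kubectl apply -f spinapp.yaml
$ kubectl scale spinapp/hello-spin --replicas=50   # or edit spec.replicas if the scale subresource isn't enabled
```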
This is the kind of startup time that you can expect from running a two-megabyte WebAssembly component, as opposed to a, say, 400-megabyte container. If you need the flexibility of containers, of course, you can run your containers side by side; this is just a regular Kubernetes cluster. And even so, I'm logged into a... Actually, I'm logged into a cluster that has two nodes that we care about. One is ARM64, the other one is x86. And I can do the same thing and scale transparently across different kinds of node architectures. So if you look at this app, it's scaled with 10 instances across x86 and ARM64; the ARM64 is running on an Ampere cluster somewhere in the cloud. And I can scale across all of those transparently, because WebAssembly is a portable binary format. It's a portable application that can run anywhere. I think Ralph at the Microsoft booth has a demo where we're scaling the same application across Windows, Linux, ARM64, x86. It doesn't matter once you build it; I actually built this application on my machine. And so with Spin and with SpinKube and with WebAssembly, you're looking at significantly improved startup times and smaller binaries with smaller attack surfaces. And this is the way Spin and SpinKube work today. I also have, if you come by the Fermyon booth, a demo of running 5,000 Spin applications on a tiny two-node cluster. So if you're interested in that density story, in cost saving, please come by the booth.

I have a question. You first. Can you tell people about ttl.sh? Because I think it's so cool. ttl.sh is a temporary registry that is phenomenal for testing. It's built by the awesome folks at Replicated. So if you need, like, a quick CI/CD thing to test and push an OCI artifact, it's great; I've been using it for every single thing for the last couple of months in testing. So ttl.sh is cool. Oh, that's cool. I've got a different question. Okay, I'm assuming that you pushed that little plexiglass box as hard as you possibly can. Is that true? And if so, where did it stop? I have not pushed it, because I wanted it to be alive and useful for a demo, but it's definitely able to handle more than 50. Burn it down. Burn it down. Do we really want to? Yes. Do we want to? Yeah, you want to see where it stops, don't you? Okay, let's try 75. Sounds reasonable. Oh. Surely you need an extra zero in there. Well, if you come by the booth, you'll see a cluster with 5,000 of them. So, oh, look, 75 works. Well, what about 100? I mean, come on. Where does it top out? I don't know. I mean, am I crazy? Don't I want to see it climb and then finally fall over? So what are these in here? These are Raspberry Pis. What's the spec on them? These are five Raspberry Pi 5s, and I'm one of the fortunate people that got them while they were in stock. It's the four-core, eight gigs of memory one. Nothing fancy. It's literally five Raspberry Pis stacked together with a switch in between. So this is now running 100. Okay, now I'm foolishly going to try more. I said 500, and I think that's the challenge. We're hoping it's not actually 10,000, because this session is going to go a little over. We can stop here. 200 is the last I'm going to try. It most likely will handle it properly, but it is a k3s cluster with a SQLite-backed API server, and I really... Okay, it's going. It's going. That's pretty impressive. It's going. All right, the great thing about this is, in other environments, not necessarily hyperscale, right, you can see how much more performance you can take out of an older machine or a smaller chipset, for example.
You can still do amazing things that with a larger container you might not be able to do. It's at 200. All right. And I mean, that's the kind of thing that you've got. Also, it's getting really hot downwind of the fan. Yeah, oh, you made it. That's great. And these are the kinds of profiles that are great if you've got existing clusters that you don't want to keep increasing in size, or you're trying to bring down the size of your cluster. It's great for more edge and IoT scenarios, where you might be dealing with smaller hardware profiles in the first place. Especially older ones. Outside of hyperscale, a lot of things were deployed two, three, four, five years ago, and they're not going to get a hardware upgrade anytime soon. The way to drive more functionality out of something that can't be changed is to drive the density up. And that's exactly what Radu just did.

Can you put the slide up there? Are we back to that? We're back. Have a look at SpinKube: spinkube.dev, github.com/spinkube, and Spin is referenced from the SpinKube documentation. SpinKube is the new project that we announced this morning; Michelle had the keynote. If you're interested in it, we're going to be around at the Microsoft and Fermyon booths and all the other booths to talk about SpinKube. Thank you. Also, we're nice. So if you want to try it out, and if you find something and file a bug or contribute something, we'd love to have your contribution. We'd love to work with you. Please join us. And we do have to end by thanking SUSE, Liquid Reply, and many other individual contributors who have been involved in this project in its various forms for quite a while; they just couldn't fit all of them on the stage. Yuki and other contributors in the container space. So we've got a couple minutes left for questions. I can run this mic out, and you made it easy for me. I don't even have to run.

Yeah, we saw the announcement of the SpinKube contribution to CNCF. Is there a plan to contribute the Spin framework itself to CNCF? I will tell you the reason behind the question also: we have seen some licensing chaos in the infrastructure-as-code area. So that's why we are a bit skeptical when we saw SpinKube contributed to CNCF but not Spin. So that's my question. And I can answer that, or you can answer that. One of the things we did when we released Spin was we released a commitment to never change the license on Spin, for exactly that reason. And that was two or three years ago; we saw this coming and we needed to do that. Which organization Spin ends up in, we're not sure yet. A lot of WebAssembly development is done under the auspices of the Bytecode Alliance, which tends to handle more of the WebAssembly-specific stuff, and since Spin doesn't necessarily have any close ties to Kubernetes, we've considered putting it there. We've considered putting it in the CNCF. Right now, we're just not doing anything. Also, on behalf of Microsoft and its customers, we needed that commitment to believe in the longevity of the workload, right? So for us, that was a critical promise that they had made to the community of people using Spin. So that's a very important point.

The sandbox: is there any integration with capabilities for the sandbox? Like file read and write, that kind of thing? The question was around the sandboxing aspect of running WebAssembly applications, and the answer is yes.
So Spin is based on top of Wasmtime and the WASI spec, which give you the ability to define things like the files that an application is able to access, and the actual network hosts that the application is able to access. So Spin, by extension, because it's built on top of WebAssembly and WASI, is a fully capability-based system where you have to define ahead of time all the things that your application is allowed to access. So anything from files to environment variables to outbound network hosts, any kind of external thing your application touches, has to be defined ahead of time.

Thanks for the presentation. I was wondering if you could elaborate a bit on why Kubernetes-native app APIs, so deployments, pods, et cetera, aren't suitable or aren't a good fit for Spin. What is it about WebAssembly workloads that means things that have worked well for a long time are no longer appropriate? I'm going to go very quickly and then pass the mic around. So the question is: why Kubernetes-native APIs, why pods and deployments, and not something else? And quickly, I'll mention that you can run Spin on its own pretty much anywhere. We started very early on building on top of Nomad, so if you have a Nomad system, for example, you can use it. You can run Spin under systemd, essentially anywhere, with the binary. What we wanted when running on top of Kubernetes was to integrate as much as possible with the way people deploy applications today. So Ralph mentioned the fact that you can run WebAssembly applications and containers, literally, as sidecars in the same pod. The ability to reuse your existing workflows for deploying applications, and your CI/CD and monitoring, is very important for a lot of the organizations we're talking to, and so that's one of the reasons. We're also working on a high-density version of all of this that does not use pods, because of things like the maximum pod limit and other constraints in Kubernetes. But the idea was to meet organizations where they are in how they're deploying their software, and then have a high-density version that works without necessarily having to require pods and deployments. And I'll just quickly add to that and reiterate what I think maybe Ralph mentioned before too: these integrate with your cloud-native tools. You can use Dapr and KEDA and service meshes and OTel and all the stuff you already use. You don't have to learn anything new, and I just really don't like learning anything new unless I really have to, so.

Hello, I would like to ask if there are any disadvantages to this approach, or when you wouldn't use it? The question is whether there are disadvantages and when you would not use this thing. There are. I think if your software needs to do things like access a GPU directly, or run a Linux-specific thing that is not available anywhere else, then you probably wouldn't try to compile it to WebAssembly, because it won't work; it's a portable format, and you have to abide by the portability constraints of the thing you're trying to compile. Thank you everyone, spinkube.dev, and we're around to answer any questions you might have.