Hello and welcome to today's session. My name is David Flanagan, and I have been using the Spin project by Fermyon for the last six months. Server-side WebAssembly is something that is very exciting to me. Why? Because WebAssembly gives a great developer experience. Containers are fantastic in production, but if you don't run Linux, they can be rather cumbersome to work with locally, especially in a microservice architecture. Spin brings a fresh approach, using the WebAssembly sandbox and applying just enough glue to get you productive within seconds. In today's video, I'm going to show you how to get started with the Spin project, and then push it further: how to build container images, and how to run them side by side with your container-based applications on Kubernetes. I hope you enjoy this video. Let's have some fun. Okay, first things first: this is the Fermyon website, available at fermyon.com. You will find a page all about Spin where you can go and learn more, find the documentation, and even the source code. Important to note: Spin is open source, and you can find it on github.com. Currently, it is on version 1.1.0. If you click the Get Started button, you'll get all the instructions you need to install Spin on Linux, Mac, and Windows. Just grab the binary, or copy the command and execute it in your shell, which will get you the spin binary. That is the only prerequisite. From there, you can do everything that you're about to see me do in this video. Okay, so now we're in my terminal, where we have a pretty empty directory structure. I have a Justfile with a few commands I will run through in due course, and I have an opt directory with some Kubernetes resources that we will take a look at later. To get started with the Spin framework, you can run spin new. This will let you select a template to get started.
WebAssembly really doesn't care what language you write your services in, much like a container. You can write in any language that has WebAssembly as a compilation target, or tooling to convert your code to a WebAssembly binary. Fortunately, some of the best languages out there have native support, such as Rust. However, there is a growing community of people helping their languages become WebAssembly-ready. This includes support for JavaScript and TypeScript, for PHP, for Python, and even, recently, Go via TinyGo, which is nice. So for today, let's start off with a Rust application. To do that, I selected the http-rust starter template. This just means that we get the bare bones we need to do Hello World over HTTP with Fermyon Spin and Rust. And I'm just going to call this hello. We can give it a description if we want, but I'm going to skip that for now. We can also specify an HTTP base, which will become more apparent and important as we add new components to our Spin application. Right now, I'm just going to accept forward slash, meaning hit the root endpoint of our application. I don't want to support wildcards, at least not yet, so instead of doing /..., we're also just going to do /. Now we have a new directory called hello. If we take a look inside hello, we have a src directory for our Rust code and a Cargo.toml. If you've written any Rust before, you should be pretty comfortable and familiar with this layout. The first thing I'm going to do, because I know that this is going to be a multi-component Spin application, is grab our spin.toml and move it to the top-level directory, separating it from the hello component. So if we switch to VS Code and open our spin.toml, we just need to make a couple of changes. All components within your Spin application have three main sections by default.
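As a quick reference, the scaffolding step can be run interactively or in one shot. This is a sketch from my recollection of the Spin 1.x CLI; the `--accept-defaults` flag is an assumption, so check `spin new --help` on your version:

```shell
# Interactive: pick a template and answer the prompts
spin new

# Non-interactive, assuming Spin 1.x flags: template, app name, defaults
spin new http-rust hello --accept-defaults
```

Either way you end up with a `hello/` directory containing the template source and a `spin.toml` manifest.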
First, the component itself, which says where the source is; this is the path to the compiled Wasm. Because we have this under the hello directory, we'll update the source. Next, we have a trigger. This is the HTTP route that we configured during the spin new command. We don't need to modify that. Lastly, we have the build, which tells it to run a cargo build, using Rust's native toolchain support to compile to WASI. However, because we moved the spin.toml, we do have to set the workdir to hello. Let's jump back to our terminal and run spin build. If we've corrected the spin.toml properly, this should build our WebAssembly binary for us to run. Perfect. Now we can run spin up, and we'll see that we have access to our HTTP Hello World Spin component on localhost port 3000. So let's just split this tab and curl http://localhost:3000, where we get "Hello, Fermyon". So what does our Rust application look like? Let's pop back to VS Code and open hello/src/lib.rs. There are a couple of use statements at the top to pull in the libraries that we need to make this program work. First, anyhow::Result. If you've written Rust before, you're probably familiar with this; if not, just skip over it for now, it's not that important. Next, we pull in the Spin SDK, which gives us access to HTTP types for request and response, as well as an http_component. We then have a function, which we can call whatever we want. By default, it's called handle_hello; however, we could just make it hello. The name isn't that important, because it's wrapped by this procedural macro called http_component, which handles all the glue of hooking the function up to the WebAssembly runtime. We can see this function takes in a request and returns a response. It ships with a little bit of debugging, just to show you that you can use your normal Rust println! macro if you want to understand how your program flows and runs.
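For reference, the generated handler looks roughly like this. This is a sketch based on the Spin 1.x http-rust template; the exact types and signature vary between SDK versions, so treat it as illustrative rather than exact:

```rust
use anyhow::Result;
use spin_sdk::{
    http::{Request, Response},
    http_component,
};

/// A simple Spin HTTP component. The macro wires this function
/// into the WebAssembly runtime, so the function name itself
/// doesn't matter.
#[http_component]
fn handle_hello(req: Request) -> Result<Response> {
    // A little debugging to show the program flow.
    println!("{:?}", req.headers());
    Ok(http::Response::builder()
        .status(200)
        .header("foo", "bar")
        .body(Some("Hello, Fermyon".into()))?)
}
```

It depends on the `spin_sdk`, `anyhow`, and `http` crates, all of which the template's Cargo.toml pulls in for you.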
The only thing we need to do is return an Ok and pass in an HTTP response. You'll see here that we set the status to 200, we add an arbitrary header of foo: bar, and we return a body of "Hello, Fermyon". Let's change this to be "Hello, World" and save. Now we can come back and do a spin build, and we get an incremental build with the standard Rust tooling. That's one of the unsung benefits of working with WebAssembly: the development experience is almost as good as your native toolchain. If it has good incremental builds, you're pretty much sorted. Next, we run spin up and we can run our curl again. This time, we get "Hello, World". Alright, let's jump back to VS Code and make one little change that's going to make our developer experience just that little bit nicer. As you've seen, when we modify the code, we can go to our terminal and run spin build followed by spin up, and we will see the changes as we make further HTTP requests. Now, we could do that with one command, spin build --up, which will build our application and spin it up. However, we would still need to go and Ctrl-C the old process and restart it every time we modify the code. There is a nicer way. If we open our spin.toml and come down to the component build specification, we can add watch, and we provide a list, an array, of the files that we should monitor for changes. When these files change, spin will rebuild and relaunch our application. So we add hello/src/**/*.rs, like that. This just means recursively scan our hello/src directory for Rust files. We also need to add hello/Cargo.toml, because if something changes in a Cargo.toml, our dependencies have probably changed, and we'll most likely want a rebuild too. So we can save that and jump back to our terminal, where we can now run spin watch. This will monitor our source code for changes and rebuild. To see that, let's do a curl to localhost:3000, where we see "Hello, World".
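Putting those changes together, the manifest ends up looking roughly like this. It's a sketch of the Spin 1.x manifest format; the exact build command and Wasm path come from the http-rust template as I recall it, so verify against your generated spin.toml:

```toml
spin_manifest_version = "1"
name = "hello"
version = "0.1.0"
trigger = { type = "http", base = "/" }

[[component]]
id = "hello"
# Path to the compiled Wasm, relative to this spin.toml
source = "hello/target/wasm32-wasi/release/hello.wasm"
[component.trigger]
route = "/"
[component.build]
command = "cargo build --target wasm32-wasi --release"
# Required because we moved spin.toml up a directory
workdir = "hello"
# Files spin watch monitors for rebuild-and-relaunch
watch = ["hello/src/**/*.rs", "hello/Cargo.toml"]
```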
Let's pull open our lib.rs and write "Hello, Wasm fans!!", double exclamation mark. When we hit return and redo our curl, we already see the change. If we make this change a little faster, we'll be able to see spin recompiling our application. So let's revert this to "Hello, World!", single exclamation mark, and hit save. And if we jump back, we see the rebuild and the re-up. Now you've got a pretty sweet developer experience for iterating and building on your Spin applications. This works really well, especially when you start to have a multi-component or microservice-like Spin application. So why not do that? Let's add another component to our Spin program. Okay, to add a new component, we can just run spin add next to our spin.toml. Again, we get the picker to choose the starter template that we want. However, if you don't want to use the picker, you can do this in a single command: spin add http-js. We're going to call this component echo, not going to worry about a description, and for the path, I'm going to match on /echo. Now when we run ll, we have our echo application right next to our hello application. Let's jump back to VS Code and take a look. Here we have a src directory, this time with our JavaScript source code. We have our webpack configuration and a package.json: the things we need to take our JavaScript and compile it to a WebAssembly binary. Let's leave the default JavaScript code for now and pop open our spin.toml. At the bottom, we have the same three sections for the component configuration: we have the source for our echo.wasm, we have our path, and we have our build instructions. Let's add our watch, and Copilot did pretty well to auto-complete; however, this is a JavaScript and not a TypeScript project. Of course, I could have used http-ts instead. Let's jump back and run spin watch. We haven't built our JavaScript application yet, so it's going to do that now.
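The new component section that spin add appends to the manifest looks roughly like this. Again a sketch: the build command and output path are my recollection of the http-js template, so check your generated spin.toml before relying on them:

```toml
[[component]]
id = "echo"
# The http-js template bundles with webpack into a single Wasm module
source = "echo/target/echo.wasm"
[component.trigger]
route = "/echo"
[component.build]
command = "npm run build"
workdir = "echo"
# Watch JavaScript sources, not TypeScript, for this component
watch = ["echo/src/**/*.js", "echo/package.json"]
```

Note the watch globs match .js files; Copilot's .ts suggestion would have silently watched nothing in this project.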
And to do so, it needs the webpack CLI. Let's run curl on localhost:3000/echo; we get the hello from the JavaScript SDK, and on the root, we get the "Hello, World". Let's modify our index.js to say "Hello, JS Wasm fans". We pop back over to the terminal, we can see the rebuild, and if we jump down and run a curl against our echo, we get "Hello, JS Wasm fans". What's really special here is that spin watch is monitoring all of the components within our application. Consider this like a service-oriented architecture: your Spin application can have one, three, a hundred, or a thousand different components. As you make changes to those components, it will rebuild only the parts that it needs to and spin your application back up. This provides a really fast and iterative development loop, one which I don't think can be beaten in a container-based environment. Not only that, it handles all the routing for us. We actually don't need to worry about an API gateway, because each of our components has a path that can be configured with wildcards at any nested level. So let's make our echo component actually echo. Here, we can remove this and set the body to request.body. This just means whatever we pass in to the HTTP request will be returned. Our application should already have been rebuilt, meaning if we hit the echo endpoint with no body, we won't see a response. However, let's pass -d and say "Hello, JS Wasm fans", and bear in mind that shells don't like exclamation marks in your messages unless you handle the quoting accordingly. "Hello, JS fans"; let's not forget the Wasm. And the response comes back. Perfect. Okay, that's all we're going to focus on with regards to building applications with Spin. Let's take a look at working with Spin applications in a container-based context. Let's start with easy mode. First, the spin binary already has everything you need to build an OCI artifact and push it to an OCI registry. You can run spin registry push and set the registry host and image name.
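The echo change itself is tiny. This is a sketch of the handler shape from the Spin 1.x JavaScript SDK as I remember it; the function name and return shape in your generated index.js may differ slightly between SDK versions:

```javascript
// Sketch of the echo component's handler (Spin 1.x JS SDK shape).
export async function handleRequest(request) {
  return {
    status: 200,
    // Echo: return whatever body the caller sent us, unchanged.
    body: request.body,
  };
}
```

Then exercise it with a request body, minding the shell quoting:

```
curl -d 'Hello, JS Wasm fans' localhost:3000/echo
```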
Here I do ghcr.io/rawkode/shim-this-way, and I'll call this the latest version. This builds an OCI artifact and pushes it to the GitHub Container Registry for us. No extra tooling, it just works. We can then pop open our browser to github.com/rawkode, click on Packages, and we will see that the latest version was pushed less than a minute ago. So spin makes it incredibly easy to build these OCI artifacts and make them available via standard OCI registries. However, if you want to take things into your own hands, you can. In November last year, Docker announced that they would support WebAssembly workloads on Docker Desktop, meaning you can even just write a Dockerfile to create your Docker image. So let's take a look at that. Let's create a Dockerfile. Here I have an empty Dockerfile, and the first thing I'm going to do is FROM scratch. This gives us the ability to just add our WebAssembly modules and spin.toml; there's no base image needed for WebAssembly containers. What we want to do is copy our spin.toml to ., the root, since there's no workdir specified. Next, we're going to want to copy the two Wasm binaries that we have inside of our local directories. Now, let me point out that this is probably not how you would build a production image. The correct way would be to do FROM rust:1.59 AS rust-builder, get your toolchain for compiling to WebAssembly (so rustup target add, and so forth), add your build steps with spin build, and then extract the Wasm modules into your final layer. But we don't have time for that today, so we're just going to make it work using spin build locally. Normally this is a bit of a faux pas, but with WebAssembly it's not that important, because the Wasm that you build on Linux, Windows, Mac, x86, or even ARM is going to be the same. So take advantage of that great local developer experience with spin up and spin build, and then just ship it in a container for prod.
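The easy-mode push boils down to two commands. The registry and repository name here are the ones from the video; substitute your own:

```shell
# Compile all components to Wasm
spin build

# Package the app as an OCI artifact and push it to the registry
spin registry push ghcr.io/rawkode/shim-this-way:latest
```

You'll need to be logged in to the registry first (for GHCR, a personal access token with the packages scope).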
Next we want to go into the echo directory, which has a target and a Wasm file, like so. You're going to want to replicate this path inside the root of the container image. Why? Well, look at the spin.toml we've already copied in: the source files to execute are already specified here, meaning we should be able to copy this and see that it matches exactly here, which it does. So let's just copy this for the Rust one: we want this source, from here, to here. Now, the last bit's going to look weird, but you do have to specify a command. We're just going to set it to /; the runtime will handle the rest. We can now pop over to our terminal, where we can do a docker buildx build. Here we can set the platform to wasi/wasm32 and set a tag, and we'll call this rawkode/shim-this-way:latest, where we set the build context to the current working directory. That just means it's where it can find the Dockerfile and any other files that it needs. And it's built. Now we can do docker container run --rm. We're going to expose port 80 inside the container as port 3000 on the host, and we're going to set the runtime to io.containerd.spin, if I could type, and the platform to wasi/wasm32. Now we can specify our container image, like so. Now, there's no logging at the moment in our application until we hit it with one of our requests. So let's do a curl to localhost:3000. We can see the logging, and "Hello, World". Let's change this again and provide a body that says "Hello, Wasm", and hit our echo endpoint. Perfect. Now, if that's not cool enough, it doesn't stop with just docker build and docker run. With Docker Compose you can also specify the runtime and platform flags and run your WebAssembly applications side by side with containers, meaning your Spin applications that speak to Postgres or Redis can be orchestrated with Docker Compose, with a Postgres container and a Redis container. That is pretty cool. Okay. So what about Kubernetes?
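Pulling the whole Dockerfile together, it's roughly this. The Wasm paths are assumptions matching the source entries we'd expect in the spin.toml from earlier, and the bare `/` entrypoint is the odd-looking requirement from the Docker+Wasm preview; double-check both against your own manifest:

```dockerfile
# No OS needed for Wasm workloads: just the manifest and the modules
# we built locally with `spin build`.
FROM scratch

# The manifest goes to the image root, since no workdir is set
COPY spin.toml .

# Replicate each component's path so it matches `source` in spin.toml
COPY hello/target/wasm32-wasi/release/hello.wasm \
     hello/target/wasm32-wasi/release/hello.wasm
COPY echo/target/echo.wasm echo/target/echo.wasm

# Looks weird, but a command is required; the Wasm runtime handles the rest
ENTRYPOINT ["/"]
```

Then build and run it with the Wasm platform and runtime flags, something like `docker buildx build --platform wasi/wasm32 -t rawkode/shim-this-way:latest .` followed by `docker container run --rm -p 3000:80 --runtime io.containerd.spin.v1 --platform wasi/wasm32 rawkode/shim-this-way:latest` (the exact runtime string varies between Docker+Wasm preview releases).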
Well, here I have an opt directory with a Kubernetes directory inside, which contains a couple of Kubernetes manifests. The first one is a standard Kubernetes Deployment, which I've just called run-wasm. The only thing that's really different in this Deployment from any other is that we have a runtimeClassName set to spin, and we reference the OCI artifact that we pushed to GHCR. And if we look in the RuntimeClass example, you'll see that we've configured the spin RuntimeClass to use the spin handler. This just means that the containerd shim for Spin will exist on each of the nodes. That's all I need to change in my YAML, and I can kubectl apply -f kubernetes, like so. The RuntimeClass is created, the Deployment is created, and if we run get pods, we'll see our application is running with three replicas. So let's just confirm it works. We've got kubectl port-forward to this pod, 3000 to 80, and then a curl to localhost:3000, and we get our hello back. Awesome. So what's actually going on here? When we set the RuntimeClass spin to use a handler called spin, it just means that there will be a shim on each of the machines named after that handler; for instance, containerd-shim-spin. The containerd shims are actually provided by Deislabs, a team within Microsoft. These shims support running a number of WebAssembly workloads: there's the Spin shim and the Slight (SpiderLightning) shim. These allow you to run different types of WebAssembly applications with containerd, even in Kubernetes. Now, you might be asking how we get those shims available within our Kubernetes cluster. To do that locally with Docker Desktop, you must find this article from Docker announcing the Docker+Wasm technical preview 2. In this article you will find a bunch of links to download Docker Desktop for Linux, Mac, and Windows. This contains everything that you need to use the new Slight and Spin shims.
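The two manifests can be sketched like this. The names and image are the ones from the video; everything else is standard Kubernetes, with runtimeClassName being the only Wasm-specific line:

```yaml
# Map the RuntimeClass name "spin" to the containerd shim handler "spin",
# i.e. containerd-shim-spin must exist on each node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: spin
handler: spin
---
# An otherwise ordinary Deployment; only runtimeClassName differs
# from a container-based app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: run-wasm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: run-wasm
  template:
    metadata:
      labels:
        app: run-wasm
    spec:
      runtimeClassName: spin
      containers:
        - name: app
          image: ghcr.io/rawkode/shim-this-way:latest
```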
Bear in mind, when you open Docker Desktop it will ask you to upgrade. Don't: these shims are not available in any subsequent release at the moment. However, as this stabilizes and matures, it should be available by default on all Docker Desktops. But what about your production Kubernetes cluster? For that, there's a project called KWasm. Now, the authors of this project will actually tell you this project shouldn't need to exist; however, for now it does. Again, to run these applications you need the shims on your nodes. KWasm is an operator which runs inside of your Kubernetes cluster. It allows you to label the nodes that you want to make available to run WebAssembly workloads, whether that's all of them or maybe just one. It then runs a privileged application on each of those nodes to download the binaries, make them available on the path, and then finishes. It's just that simple. Of course, you should keep KWasm running, because nodes are ephemeral: they will spin up and spin down. So you need to make sure, as nodes rotate and new nodes come online, that the operator continues to make your shims available to your containerd runtimes. As you can see from the compatibility chart here, this works on Azure Kubernetes Service, with limited support for GCP and AWS, but it works great on minikube too, or even kind. So if you don't want to use Docker Desktop and you want to use one of those, feel free. And there's full support for Canonical and DigitalOcean Kubernetes too. So it's a really cool project that gets you up and running with WebAssembly on Kubernetes in no time at all. So that's it. Thanks for tuning in for this session. We've covered how to build server-side WebAssembly applications with Spin. I used a Rust service and a JavaScript service; my spin.toml composed them both together to act as kind of a router. Of course, there are options to do routing within each of the services themselves; that's up to you.
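For the production path, getting KWasm running is a short sequence. The chart location and node annotation below are from my recollection of the KWasm docs and may have changed, so verify against the project README before running:

```shell
# Install the KWasm operator into the cluster (chart name assumed)
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm install kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm --create-namespace

# Mark the nodes that should receive the Wasm shims; the operator's
# privileged job downloads the shims onto each annotated node
kubectl annotate node --all kwasm.sh/kwasm-node=true
```

Because the operator re-provisions nodes as they rotate, leave it installed rather than treating it as a one-shot tool.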
We get a great developer experience, with a sandbox that's unparalleled and a truly ubiquitous runtime. It works with existing container technologies and can run in production on Kubernetes. What are you waiting for? Go check out server-side WebAssembly with Spin today. Have fun.