Okay, so welcome to Cloud Native Live, where we dive into the code behind the cloud native. I am Shahriar, a CNCF ambassador, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In this session, I'm stoked to introduce Mikkel, who will be presenting an intro to SpinKube. This is an official live stream of the CNCF, and as such it is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters. With that, I will hand it over to Mikkel. Let's add Michael to this. Hey Mikkel, how are you? Hello, I'm fine, thanks, and thanks for having me. Awesome. I think you can give a short intro of yourself and then we can start. Okay, sure, thank you. So my name is Mikkel Mørk Hegnhøj. I live in Denmark, which is also where I'm presenting from now, and as you can see, we finally have nice spring weather here. On a daily basis I work at a company called Fermyon, where I'm heading our product and developer relations teams. I've been around the cloud native space for many years now. I previously worked at Microsoft, where I was part of Azure Kubernetes Service and other cloud native technologies. I actually did a lot of work around Windows containers and bringing those into Kubernetes, which feels like many, many years ago. It probably is not, but it feels like many years ago. I was also briefly involved in the Open Application Model open source project back in those days. Okay, awesome. So I think we can start now, right? Sure. Yeah. May I share my screen, I guess? There you go. Thank you. Yeah. So, okay.
So I'm here to talk about SpinKube, which is an open source project that is currently in process: there has been a submission for approval to contribute it as a sandbox project into the CNCF. In the next hour or so, I will talk about what SpinKube is, and I will try to demonstrate it. As you said, I may break things as we go along, and then we'll have some fun trying to figure out how I broke them and hopefully make them work again. So, SpinKube is an open source project that streamlines developing, deploying, and operating WebAssembly workloads in Kubernetes. It is an open source project created by contributors representing Microsoft, Liquid Reply, SUSE, and Fermyon, the company I work for. At Fermyon, we spend a lot of time building projects that enable you to use WebAssembly in your cloud native setups, and what the SpinKube project is trying to do, or is doing, is giving you the experience of using WebAssembly workloads in your Kubernetes clusters. SpinKube has a developer tool that you need in order to start using this project. This is another open source project, which is called Spin. Spin is an open source project that we at Fermyon have been building over the last two years; it is a developer tool that helps you build WebAssembly microservices and web applications. Now, you may be familiar with WebAssembly as a technology that was built to create more performant user interface applications inside the browser. There have been a lot of scenarios where companies like Figma and Adobe create great application experiences that run inside the browser. But the WebAssembly specification is also able to run outside the browser.
So to speak, on the server; we sometimes refer to this as server-side WebAssembly. We've taken that technology and, with Spin, built a framework for easily building microservices-type applications with, let's call it, a functions-based or serverless developer paradigm, because there are things in WebAssembly that lend themselves really, really well to that type of paradigm. We'll hopefully get back to that a little at the end, as we talk more about what the whole developer experience around these WebAssembly applications looks like. But the thing that SpinKube then enables is to take these applications that you've written using the Spin framework and have these really, really efficient serverless ways of running them inside of Kubernetes. We'll expand on what serverless means in this case, because we obviously know that there are nodes and things going on behind the scenes. Even though the project is in a very early phase, we already have some people starting to pilot it out there. At the recent KubeCon in Paris, here in Europe in late March, Microsoft, Fermyon, and the ZEISS Group from Germany talked about how ZEISS had been using SpinKube to see how some of the workloads they run today can, with at least similar performance, if not better, save a lot of money in terms of the compute costs needed to support those workloads. It speaks to some of the core benefits of using WebAssembly as an alternative over containers: the workloads are simply smaller and far more agile in terms of how you can run them, and thus enable you to save a lot on your compute costs eventually.
Just to give a little overview of how Spin and SpinKube fit into some of the things we at Fermyon have been involved in: Spin, as the open source application framework, can run in many places. The project basically enables you to implement your own hosting solution for Spin. You can run it directly using the CLI, which is also part of the developer experience, and then SpinKube enables you to run it in Kubernetes, on any cloud provider or on local Kubernetes clusters you have. We build experiences around Spin with Fermyon Cloud as a fully hosted option, and we've also taken the experience we have with Fermyon Cloud and built what we call the Fermyon Platform for Kubernetes as well. So there are many options for how you can run these applications, and SpinKube is for many people the place to start enabling WebAssembly to run inside of Kubernetes. So let me try to break down what is inside the SpinKube project, because there are actually four specific repos as part of the organization, if you go and check it out on GitHub. The first one is the Spin operator. We built a Kubernetes operator that helps you deploy Spin applications and takes care of the lifecycle, creating the deployments and services needed for these applications to run. It basically introduces a custom resource in your Kubernetes cluster called a SpinApp, and we can dive a little into why we've done that and the benefits of it as we get into the details. Another very important piece is the containerd shim. A requirement for running SpinKube today is that you are using the containerd container runtime, and containerd has this concept of shims that you can apply, which enables us to run Spin applications through this shim.
So you don't need a different container runtime; you basically need a shim that containerd can call out to when you create or run a Spin application on the node, as opposed to running an OCI-compliant container. Those are not the same, and we can dive into that a little later once we open up the box of what that is, maybe as we go through the developer flow. Then we have the Runtime Class Manager, which enables you to easily install and configure the shim in your clusters; we will see a demo of how that works. And finally we have a plugin for Spin called kube. Spin, as a CLI command you use in your development workflow, has a plugin model, and this plugin enables you to easily create the deployment artifacts you need to deploy your applications into Kubernetes. That's the overview of what is in the SpinKube project. Now, if we try to take these in time order, you typically start with Spin as the framework to build your applications. Spin has a very straightforward workflow: you use spin new to create an application, you use spin build to build it, and then you can push your application to a registry. When we push applications to registries, those are OCI-compliant registries. Basically, we take the Spin application, which has a few different things in it, as we will see once we get to the demo, package those up as artifacts inside an OCI image, and push that to a registry. It's not a container image.
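As a concrete sketch of that workflow, the commands look roughly like this (the template name, app name, and registry reference are illustrative, not from the talk):

```shell
# Scaffold, build, and publish a Spin app (names and tags are examples)
spin new -t http-js demo        # create an app from the JavaScript HTTP template
cd demo
spin build                      # compile the app to a .wasm component
spin registry push ttl.sh/spinkube-live:1h   # push as an OCI artifact
```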
It has a different format, and there are specifications being worked on for how these WebAssembly OCI artifacts will eventually look. Spin has sort of spearheaded this a little and done some initial implementations, but it will eventually adhere to whatever the specification ends up looking like. So, the way you would normally do this with containers, where you package your application as a container image and put it in a registry where it can be pulled from, you have the same model with Spin. That enables us to apply the kinds of GitOps and deployment pipelines that most people have today, and apply the same methodologies however you want to do that. Moving along to where the plugin comes in: the initial command made for the plugin was basically to scaffold the YAML needed to deploy the application, and spin kube scaffold is the command you would use. Eventually you have a YAML file that you can go and apply to your Kubernetes cluster and have the application deployed. I see there's a question coming in about whether the push endpoint is local. No, it's a remote registry; it's any registry you can push to. Wherever you run that registry, you can push to it, whether it is GitHub Container Registry, Docker Hub, a private registry you run at a cloud provider, or something like that. Okay, so I think he has the answer. Okay, cool. Yeah, if not, let me know; I will keep an eye on chat and try to answer questions as they come along, in case things are not clear. Cool. Okay. If we take a look at what the role of the Spin operator is: like with most of these operators, there are some things happening in Kubernetes.
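The scaffold step described here might look like this; the image reference is illustrative, carried over from the push example:

```shell
# Generate the SpinApp YAML for an image already pushed to a registry,
# then apply it to the cluster
spin kube scaffold --from ttl.sh/spinkube-live:1h > spinapp.yaml
kubectl apply -f spinapp.yaml
```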
In this case, a custom resource called a SpinApp is being created, and what the operator ensures is to unfold the SpinApp custom resource that is defined into the deployment needed to actually run the application, and to set up a service as well. There are things around runtime configuration and other things that we then inject into the application at runtime. The main reason for introducing a specific operator is really that we wanted the SpinApp custom resource to mirror, and look a lot like, what you do when you define a Spin application in your development workflow. A Spin application in itself has a manifest, and that manifest defines all the different components that make up the application. It's easy in the custom resource to mirror the definition of that manifest, rather than putting it on the developer or the operator who wants to deploy the application to translate what this should be in terms of a deployment, what this should be in terms of the service, and so on and so forth. So the main objective is really to make a smooth and simple user experience. The custom resource definition for the SpinApp mimics a lot of what you would define for a deployment: for instance resource limits, obviously the image reference, and those types of things you are used to seeing in some of these other resources inside Kubernetes. But still, we want to wrap it in something that is very similar to what the application manifest of a Spin application looks like, which is one of the reasons why we have it there.
Then, if we go even further down the stack of what happens inside our Kubernetes cluster, we have the last two components: the Runtime Class Manager and the shim. The Runtime Class Manager is the easy one, because it is basically an operator in itself that enables you to do the containerd configuration and installation of the shim that is needed on your Kubernetes nodes. And this is also where the shim fits in, because the shim is what enables us to run the Spin application as a single process on the machine, and not as a container the way containers would normally run on your machine. This is where you start seeing there are some differences in the actual low-level execution, where it becomes a much smaller footprint. It's a single process being run for each of these applications on your node, and we can do that with containerd through implementing this shim. The shim uses a library called runwasi that a team at Microsoft built. Runwasi is also used in other shims that exist but are not necessarily part of the SpinKube project; runwasi is sort of the core of how you can run WebAssembly server-side, and the Spin shim is how you specifically run a Spin application, which is an application model that builds on top of core server-side WebAssembly, you could say. So those are the four main things the SpinKube project consists of, how they map together, what their roles are, and how this is all laid out in the SpinKube project. So without further ado, let's try this out, dive into how things work, install the thing, and deploy some workloads. Yeah.
I am just going to go over and share another window so we're able to see a terminal. I hope I can zoom in or make the font big enough for people to follow along; if not, let me know in chat and I can adjust that. Okay. So, I'm right now just running on my local machine, and I just saw this question about whether I have a good link to share afterwards. Yes, I do: spinkube.dev is the landing page that will get you going with SpinKube. There are a lot of resources there, and I have a few links at the end that I'll share as well. Actually, I think SpinKube is just github.com/spinkube; that's the organization, and you'll be able to find the project there. Okay. So the first thing we're going to do is go through the experience of installing the operator, creating a Spin application, and deploying it into the cluster. What I've done to begin with is create a k3d cluster that I run locally on my machine. In order to enable containerd and get the shim installed inside your k3d cluster, you can create a cluster and point to an image that the SpinKube repo has published. That will give you an image for the Kubernetes nodes inside a k3d cluster that includes the containerd shim and has the containerd configuration, which means that when you use k3d clusters locally, you don't have to go through the Runtime Class Manager installation; the containerd setup basically comes in that image. That means I just have a single node on my machine here, and it has all the roles I need. The only thing I installed into this cluster is cert-manager.
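A sketch of creating such a k3d cluster from the published node image; the exact image tag and port mapping here are assumptions, so check the SpinKube quickstart for the current values:

```shell
# Single-node k3d cluster whose node image ships the containerd Spin shim
# pre-installed and pre-configured (tag is illustrative)
k3d cluster create wasm-cluster \
  --image ghcr.io/spinkube/containerd-shim-spin/k3d:v0.13.1 \
  --port "8081:80@loadbalancer"
```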
We need cert-manager because the Spin operator has a dependency on it, so that's the thing that needs to be installed before we can go ahead. Okay. There is a tutorial you can follow for this on spinkube.dev, but we first need to apply a few resources here. And I unintentionally need an l there. The first thing is to install the runtime class that we want to be able to use. Then there are the custom resource definitions we need for the SpinApp and the SpinAppExecutor. And then the Spin operator itself we can install using a Helm chart; I'm installing version 0.1.0 of it, creating a namespace for the Spin operator, and it will now be pulled in and installed into my cluster. One note on the two custom resources we have here: obviously the SpinApp is the one I mentioned earlier that defines the application itself. The executor is a concept within the SpinApp, where the containerd shim is the executor installed with the Spin operator as the default. It basically tells the Spin operator to use the containerd shim as the execution runtime for the SpinApp. As I mentioned earlier, we built a product called the Fermyon Platform for Kubernetes that uses a different executor, which has higher density and a different way of scaling execution. So, from a SpinKube project point of view, there is an opportunity to extend how the actual runtime is implemented, if anyone wants to do that, or if there are scenarios around running Kubernetes on specific things that need another type of executor than what we do with the containerd shim today.
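The installation steps described here roughly correspond to the following commands; the release URLs and version pins mirror the SpinKube quickstart at the time and may have changed since:

```shell
# cert-manager (a dependency of the Spin operator)
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.3/cert-manager.yaml

# Runtime class, plus the SpinApp / SpinAppExecutor CRDs
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.runtime-class.yaml
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.crds.yaml

# The operator itself, installed from its Helm chart
helm install spin-operator \
  --namespace spin-operator --create-namespace \
  --version 0.1.0 --wait \
  oci://ghcr.io/spinkube/charts/spin-operator

# The default containerd shim executor
kubectl apply -f https://github.com/spinkube/spin-operator/releases/download/v0.1.0/spin-operator.shim-executor.yaml
```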
Now I'm saying something I don't necessarily know everything about, but I guess there is an opportunity that you could build a way to run these applications if you're not using containerd; it's just not a thing that exists today. I might have some engineers at my company shaking their heads as I put forward things like this. Anyway, the last thing we then go ahead and define here is the containerd shim executor, and now we have everything set up. So three resources and the Helm installation is what we needed. If we check our deployments now, we can see we have the Spin operator installed, and we should be able to see the SpinAppExecutor as well, which is the containerd shim. Finally, we can ask if we have any SpinApps in our cluster, and there are no SpinApps installed yet. Okay, so our cluster is now ready to run Spin applications. Then, what does a Spin application look like? How do I get from this to actually deploying something? I also have Spin installed on my machine. You can go to developer.fermyon.com/spin, which is where you can find the Spin project; it's on github.com/fermyon/spin as well. Once you have Spin installed, you can create a new Spin application using a set of templates that exist. Let me just check. So we're using a JavaScript template; I actually think I need to do -t for template. I'm just using the JavaScript template to create this application. Let's just call it demo, and I'm going to accept the defaults for the template. What's interesting about WebAssembly is that we can actually mix in different languages, because WebAssembly, once compiled, compiles into a common format across the programming languages that you use.
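The cluster checks described above can be sketched like this (the CRD resource names follow the spin-operator v0.1.0 release and are an assumption):

```shell
kubectl get deployments -n spin-operator            # the operator is running
kubectl get spinappexecutors.core.spinoperator.dev  # the containerd shim executor
kubectl get spinapps                                # no SpinApps deployed yet
```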
So whether I execute something that was written in JavaScript or something that was written in Go, it doesn't make a difference once we get to the point of executing it, which is actually the same story as with containers, right? We don't know where the binaries inside the containers came from. The big difference is that if you build something and deploy it in a container, you are responsible for whatever dependencies are external to the binary that you compile. So if you rely on certain frameworks or other things, you need to bring them into the container; with JavaScript, you would need to bring Node inside your container, or something else. With WebAssembly, because the runtime that understands running WebAssembly is built into the shim, we only need the WebAssembly binary. And if we look into this manifest file that I have here, our WebAssembly binary has the .wasm extension. In this case, I'm just going to deploy a single one; I'm deploying the WebAssembly, which can be, you know, 150 kilobytes to two megabytes. Depending on what language you use this might differ a little, but it's actually super, super small when we build them, and that's the only thing I need to get onto my node. So I hope that gives you an idea: because we have these small binaries that need to move around, it becomes much more agile; pulling things in is faster, scaling things is faster, and so on and so forth. But let me give a little overview of how this Spin model works. I mentioned earlier that Spin is built to create microservices-type applications, following this functions-as-a-service or serverless type of paradigm. The first, you know, seven lines here are just some metadata around our application.
But the first concept you need to understand about Spin is that Spin has a concept of a trigger: there is something that triggers your application to run. In this case, I am defining a trigger that listens to HTTP requests. Now, we can discuss whether that's event-driven or whether it's just a web server; anyway, you can send HTTP requests to this thing and it will respond to you. Part of defining the HTTP trigger is that I need to say what route I want to listen to and which component is serving that route. The component is then the thing that maps to my WebAssembly, which you can see here on line 14. All this means is that when this Spin application is running, if an HTTP request hits the root or any route (this is a wildcard), then that HTTP request will be forwarded to the component called demo, which is the WebAssembly file defined here. Other trigger types exist in Spin, and this is an extension point, so more can be built: there is a trigger type that ties into MQTT, looking at queues for pub/sub, and a trigger looking at Redis queues as well. We've also experimented with something we call the command trigger, which is sort of a run-once or run-to-completion type of trigger. There are some other messaging triggers out there as well, but basically this is an extension point, and whatever you want these applications to listen to, or whatever particular protocols you want, the project can be extended to support that. Okay, there are some build commands in here, and that's basically just a developer experience thing, which says: if I run a spin build command, because this is JavaScript, we now reach over to the npm world and have npm help install what is needed. The other thing I just want to quickly show is the actual implementation.
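Pieced together from the description above, the spin.toml being walked through looks roughly like this; the paths, version line, and build command are assumptions based on how current Spin manifests are laid out:

```toml
spin_manifest_version = 2

[application]
name = "demo"
version = "0.1.0"

# HTTP trigger: forward any route (the "/..." wildcard) to the "demo" component
[[trigger.http]]
route = "/..."
component = "demo"

# The component maps to the compiled WebAssembly binary
[component.demo]
source = "target/demo.wasm"
[component.demo.build]
command = "npm run build"
```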
So in this case, and this is where the serverless paradigm shows itself, the implementation I have to do as a developer is basically export a function that takes a request and returns a response. That is, again, tied to the fact that I'm using the HTTP trigger, so I'm listening to HTTP requests, and once they come in I'm able to return a response. We can try and just add a few lines here and say, hey, we are handling a request. And if we want to know what the actual request is, we can do something like this; let's just try to write out the whole request. Then we can see that once we start logging things inside a Kubernetes cluster. So basically, all this application is doing right now is saying hello from the JS SDK. Let's just change that to hello from the Cloud Native Live CNCF demo, and log to the console. Okay, so we have all of this in now. I'm just going to do an npm install, and once we have that, we can build our Spin application. Oops, spin build. And now the WebAssembly is created. If we want to test this locally, we can do spin up, which runs it locally, and we can curl this on port 3000. We can see hello from the Cloud Native Live CNCF demo, and we can see we got some logging up here. So I hope this gives you a little idea of what the local developer experience would be. I could also run a command called spin add, and if I wanted to, I could start adding new components to this application written in a totally different programming language, and in that case mix and match things the way I want. I don't know if that's a common scenario, but it's just to show that this portability across programming languages in WebAssembly is a thing you can do. Okay, so now we have the application up and running. The next thing we want to do is a spin registry push, and in this case I'm going to use ttl.sh.
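The local loop demonstrated here, roughly (the registry reference is illustrative):

```shell
npm install          # fetch the JavaScript dependencies
spin build           # produce the .wasm component
spin up              # serve locally; listens on port 3000 by default
curl localhost:3000  # hits the HTTP trigger and returns the response

# then publish the app as an OCI artifact
spin registry push ttl.sh/spinkube-live:1h
```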
If you're not aware, ttl.sh is sort of a public ephemeral container registry. I believe it's a company called Replicated that is behind it; if I'm giving a shout-out to the wrong company, I'm sorry, but I believe that's it. Basically, we can just push something to it. We could call this spinkube-live, with my name in it, and they have this notion that if I put one hour in as the tag, it will be available for an hour. So now my WebAssembly application is being pushed into that registry. Something I actually like doing is a docker manifest inspect after this, because I want to show you what it actually looks like. Let me just do this. You can see the manifest for this thing; I actually don't know if we should call this an image or what it is. But for the thing we put into the registry, you can see that my WebAssembly is in here. In this case, for the JavaScript application we built, it is a 2.3 megabyte WebAssembly binary. And you can see we have some config with its media type, and the WebAssembly itself is in here as a layer. And that is it; that is basically what we pushed. So, compared to the size of a container, this is the size of the workload as a Spin application, as WebAssembly. Okay. The next thing we wanted to do was use the kube plugin for Spin. We're going to scaffold all this, using the image that we just pushed, and I already forgot what I called it. What did I call it? spinkube-live, and I think the tag was 1h. I'm just going to output this to a file. And we can see that now we have the YAML defining my Spin application. You can see the apiVersion here; it's v1alpha1, by the way, because this is early days. It's a SpinApp; it has a name; we point to the image; we use the containerd shim as the executor; and we want to run two replicas.
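The scaffolded file described here should look close to this sketch; the field values come from the demo, and the apiVersion group matches the spin-operator v0.1.0 CRDs (an assumption):

```yaml
apiVersion: core.spinoperator.dev/v1alpha1
kind: SpinApp
metadata:
  name: spinkube-live
spec:
  image: "ttl.sh/spinkube-live:1h"
  executor: containerd-shim-spin
  replicas: 2
```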
Let me actually just change that to a single replica, because we only have the one node. Okay. So we should be able to go and apply this into our cluster, and then we can see that we have a SpinApp. Still none ready, but we have one desired. Well, let's just wait for it. Okay, it's already ready. So the next thing we can see, now that we know we have the SpinApp deployed: as I showed on a slide earlier, the operator has now picked this up and reported the ready state back. What was created behind the scenes is that, first of all, we get the deployment, and we also actually have a pod. So the WebAssembly ends up running in a pod, like we would expect with containers as well. It all fits into the model that Kubernetes works with today; we basically just changed the actual implementation of whatever it is we're running into something else. And we also set up a service; as you can see, I have a spinkube-live service down here. The easy way to test this out is to set up a port-forward, and we can do that to our service. I'm really bad at remembering names, so let's just do it like this. And the other thing I also want to do is follow the logs. Now let's see: we should be able to curl localhost 8080, and you can see hello from the Cloud Native Live CNCF demo. And you can see logs are being written to standard out, as we would expect, so we have those logs through the Kubernetes APIs. It all just works inside Kubernetes the same way it worked locally, and also how we would expect this to work for other workloads inside Kubernetes, like containers. Okay, let me just pause for a second here: are there any questions or anything you want to ask before we move on? I couldn't find any questions from the folks.
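Sketching the apply-and-test sequence (the service name and the 8080-to-80 port mapping are assumptions based on the demo):

```shell
kubectl apply -f spinapp.yaml
kubectl get spinapps                      # watch until 1 ready / 1 desired
kubectl get pods,deployments,services     # pod, deployment, and service created

kubectl port-forward svc/spinkube-live 8080:80 &
kubectl logs -f deployment/spinkube-live &
curl localhost:8080                       # response served by the Spin app
```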
I think we can move on. Okay. So, a few things to add: I'm just showing an example right now of what a Spin application looks like that uses this thing called runtime configuration. Basically, runtime configurations are the way we can hook up dependent services that we have. Inside a Spin application, there are certain APIs that you can use, and those APIs are standardized; they are standards in the specifications. There is an underlying specification for WebAssembly on the server that is called WASI, the WebAssembly System Interface. What that defines is the set of APIs that a WebAssembly runtime needs to implement, so that when you write applications, you can write against those APIs. There is also a part of that where you can define standardized interfaces; one of those is called WASI key-value, and another is called WASI HTTP. So for the application that you build, if you build a Spin application, well, the WebAssembly component that is part of your Spin application is not a Spin-specific thing; it is actually a WASI HTTP implementation, so it also has a standard definition of how it looks. And within there, you can then add other types of interfaces. In the case of SpinKube, you would consider the containerd shim to be what is called the host implementation in the world of WebAssembly. When you run Spin locally, Spin is the host implementation. The idea is that they both implement support for WASI HTTP, right? So once I've built this application, I can actually run it on those different implementations. Other projects exist that also implement WASI HTTP: there is a WebAssembly runtime called Wasmtime, and NGINX also has an open source web server called Unit, and Unit is also able to run these applications.
So there is a notion of the application still being portable within that world of WebAssembly, and not being bound to Spin or to the containerd shim as the host. Within that world of creating these hosts, you can then light up other capabilities, and some of the stuff we implemented in Spin is a key value interface. Let me just briefly show you what this would look like. In this case, I'm showing you the implementation of a Spin application written in Rust. The main thing you can see is that there is this concept of a key value store in here: I can open a key value store, I can get stuff from the key value store, and I can set stuff in the key value store. So there are a few functions around the key value store that you would normally use when working with one. The idea is that the API you use here is, again, a standard. So when I run this application, the host implementation backing the key value store can be different. When I run this locally with Spin, it's a SQLite implementation in a file. But what I was showing with the deployment before is that I can declare, in a runtime configuration in this case, that I don't want to use a local ephemeral store for this — I actually want to send this out to a different key value store. In this case, we're using the type Redis, and then we have the service URL for the Redis that could be running inside of a Kubernetes cluster. So whenever my application writes to a key value store, we can, through this runtime configuration, direct those calls into the key value store implementation and installation that we want to use. This is part of the world of WebAssembly that exists and is implemented and enabled in Spin.
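The Redis redirection described above is declared in Spin's runtime configuration file. A minimal sketch, assuming a Redis service reachable inside the cluster, might look like this; the service URL is illustrative, and the exact table names should be checked against the Spin runtime-config documentation for your Spin version:

```toml
# Sketch of a Spin runtime configuration (runtime-config.toml) that swaps
# the default local SQLite key value store for a Redis service.
# The URL below is an illustrative in-cluster service address.
[key_value_store.default]
type = "redis"
url = "redis://redis.default.svc.cluster.local:6379"
```

The application code is unchanged — it still opens the "default" store through the standard API — and only this host-side configuration decides where the data actually lands.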
But also, if you're in the business of providing Kubernetes clusters and capabilities for development teams — if you, for instance, work in a platform engineering team or something like that — you're able to provide opinionated implementations of these, while having developers write against generalized, standardized APIs when they write their applications. They don't have to care about how you store the key value data they want to store. You can choose to use Redis, you can choose other implementations, or even a managed service if you run inside of a cloud somewhere. Okay, so that was a brief tour of how SpinKube works. The last thing I want to show demo-wise is over here, because there's another thing that's very interesting about WebAssembly, in particular in the cost-saving scenario. WebAssembly is portable: the binary format that you create can run across operating systems and across processor architectures. This is different from what we know with containers, where we build a container for a particular processor architecture and a particular operating system. And that actually enables us to very easily spread workloads across different types of servers that we may have. So let me just quickly shift over to — trying to find my browser — there you go, to share my browser. Thank you. This is the spinkube.dev website; as I mentioned, this would be your starting point for getting started with SpinKube. What I wanted to show you is that I created a cluster. In this case, I'm using AKS in Azure. The reason is that I don't have a bunch of machines lying around — you may see some in the background, but they're not up to it. What I wanted to show is that I can create Kubernetes clusters that span different processor architectures. So in the case of AKS, I have two different types of VM SKUs that I'm using in the cloud.
One of them — this is Ubuntu Linux, and there might be a hint in here... no, there's no hint in here — is an AMD64 processor architecture that I'm using across these nodes. But I also created other nodes in here which are using, and I think we can get a hint of this right here, an ARM64-based processor. A lot of the ARM64-based virtual machines available out there come at a much lower price point than the AMD64 ones. And because of the portability of WebAssembly and how it works, we're actually able to deploy our applications to run across any of these node pools, any of these processor architectures. So let's just quickly see if we can make that work. I'm just going to go back into my — actually, I'm just going to stay over here and see what I have. Okay, let's change our Kubernetes context and go over to the cluster that we have in the cloud. If we look at the set of nodes, you can see — with the node pool concept that exists inside of Azure Kubernetes Service, as I showed you — that I have two pools, and each of them has two servers. What I want to show you is that I don't think I have any SpinApps deployed in here yet, but I have the Spin operator and everything set up. What I have also done is installed — let's see, I want to check the annotation. Oh, here you go, I want to get some node information. I installed the runtime installer. Today the runtime installer builds on an existing project out there called Kwasm. So the detail to this: there's an existing project called Kwasm that I've been using here, but it's the same concept the runtime installer will eventually use. Once I have that operator in here, I can annotate my nodes according to whether I want to enable them to run WebAssembly workloads.
So there are ways — well, actually, I'm starting to ask myself questions: I'm not sure whether the Spin operator knows whether a node supports WebAssembly or not. That's actually a pretty good question. Maybe I should open an issue. But nonetheless, to get the containerd shim installed and configured, you can use this concept of Kwasm — and eventually the runtime installer — and annotate the nodes. You can see that three of my nodes here are annotated as being able to run that workload, and I've added a fourth node down here that is not yet able to do that. So we could go ahead and annotate, let's say, the last node. Let's look for a label where the name is — I will assume this is the name, right? — and put that annotation in. And we can see we did not successfully do that, because, well, I can probably not do it that way. So let me do something else: let's do annotate. And let's just see — okay, so now we've got both of those annotated, and you can see I have the true down here. Let's go and check if the installer works. So we're going to get the logs from the operator, and we can see the nodepool1 VMSS node — that was the one we didn't have up here, right? — is now complete and is able to run the WebAssembly application. So I have an application ready to run. We can go and apply that; this is the one we call simple. And we should see that it will soon be ready. If we get the pod, we can see — well, okay, that ended up on the one we had just annotated. So we have that Spin application running there now. But what might be even more interesting is that if we scale this to, let's say, four replicas, and we go ahead and apply that, we should see that it's now spread across a combination of AMD64 and ARM64 nodes. And we should be able to do a port forward and see that it's actually running.
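The annotation the demo keeps checking is the one the Kwasm operator watches for on each node. A sketch of what it looks like on a node object; the annotation key is the one used by the Kwasm project, and the node name is an illustrative placeholder:

```yaml
# Fragment of a Node object after it has been marked for Kwasm.
# When the Kwasm operator sees this annotation, it runs a job on that
# node to install and configure the containerd shim for Spin.
metadata:
  name: aks-nodepool1-00000000-vmss000000   # illustrative node name
  annotations:
    kwasm.sh/kwasm-node: "true"
```

In practice this is usually set imperatively, e.g. with `kubectl annotate node <node-name> kwasm.sh/kwasm-node=true`, which is what the demo is doing when the annotation finally lands.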
This is going to be interesting, because I did not try all of this beforehand. Let's see — okay, we need to get to port 80. And then let's curl this and see if it actually works. And that did — oh, sorry, I know what I did wrong here. Let's go back. I think I need to hit the hello route. There you go. So we actually have that one application, built as one build output, one artifact, now able to run across these different — including the cheaper — servers inside of your cluster. And because it's all standard Kubernetes APIs, you can have your containers running next to them and start extending your platform with this new capability. Okay, let me go back; I have a few things I just want to end with here. So this may be a question that some of you have as you follow along: why do we need Spin WebAssembly applications inside of Kubernetes? I've been hinting at this all along the way as I've been pulling these things up. Thinking about WebAssembly as a specification that originated to enhance the experience of building applications inside of the browser, there are certain design principles that had to go into that specification. If you want to run something inside of the browser, you want a small binary, because everything you need to run has to be pulled down from a remote server before you can run it — you don't have things installed locally. You will eventually be able to cache them, but you want to make sure that it comes down quickly. And you want them to start up immediately, because a browser application is a user interface application, and you don't want extended delays while you wait for it.
So some of the facts I've stated here, in terms of how small these WebAssembly OCI artifacts are and how fast they actually start — you get all of that when you use WebAssembly on the server side as well. Then, from a security point of view, a WebAssembly component — and a component inside of a Spin application — is sandboxed. What that means is that there's a shared-nothing memory model. Even within a single application that has multiple components — which would translate into a single pod with multiple components inside of it — there's no memory being shared between the individual components. And the model in Spin is that between trigger events — so, in the case of my HTTP application, between each individual HTTP request — there is also no memory shared. The Spin implementation and the containerd shim implementation do a fresh instantiation of the WebAssembly component for each request they handle, and we can do that because of the quick startup time. So there are a lot of security benefits built into the system that enable you to easily build multi-tenant, secure types of scenarios with this. And then the last bullet point, which I spent a little bit of time on in the last demo: portability across processor architectures and operating systems. Again, coming from the browser, you want to write your code once and be able to serve it in different browsers on different devices that run different operating systems and different processors. All of these benefits you get when you write WebAssembly services.
And thinking about how this fits into the cloud native world — for me, this is really just an evolution of what has happened over the last many years, where we've gone from being physically attached to servers, servers being a physical thing, to everything that came along with virtualization technology and virtual machines, and then we moved into containers, which lifted the whole thing to a totally different layer where orchestration technologies like Kubernetes became super relevant. WebAssembly takes that another step: we now have our workloads encapsulated in even smaller artifacts that we need to orchestrate and deploy, that are portable, that are safe by nature, and all of that. And that, I think, is really why you should be considering looking into this and why we are super excited about where this technology can take us. Okay, so yeah, I think one question is from Hamidi: what types of applications are not working well with SpinKube? I like that question, that's good. So, a few ways of answering it. I think — oh, okay, I might have lost some of the picture, but I'm not going back to it. Anyway, for the question of which applications are not working well with SpinKube, there are a few things to think about. WebAssembly as a specification is still developing, and there is definitely a varying degree of maturity across which programming language you can use. Rust is a fairly safe language to use in terms of WebAssembly support. We are working hard on JavaScript and Python. I know Microsoft is working on .NET and hopefully has stuff to share in an upcoming .NET release as well. Java is not doing that well. Go is going well — no pun intended, sorry. So there is that level: language maturity is one.
So depending on what language you use, some things won't even work, or won't work well. Then there's the application model that Spin attaches to — what I'd phrase as a functions-based paradigm, or serverless. Because WebAssembly requires a runtime, theoretically you would get more bang for your buck in terms of CPU cycles if you wrote and compiled something to native. So if you have scenarios where you need absolute maximum performance for one particular task that needs to run continuously, then the model that Spin implements is definitely not helping you. It would probably still help you in terms of developer experience and portability and all these other things, but performance-wise there is some overhead. Although I would say, in most cases — I saw a recent market study saying that most Kubernetes clusters run with an average utilization around 15 to 17%, and that the number of pods being run on each node is maybe 20 or 25, something like that. Because WebAssembly modules are smaller in size, and because with the model that Spin implements they can react and scale very quickly, you would be able to run more applications on a single node. The scenario where that benefits you a lot is when your workloads don't share the same load pattern over time. Not all applications need system resources at the exact same time; over time the demand comes in waves. Basically, if you want to increase utilization of your nodes, you want the mean load and the max load on a node to come as close to each other as possible, and simply by adding more workloads onto the same node, there's a higher probability of achieving that. So you get more in terms of density, and you get more out of the servers you have probably already provisioned, in a model like this.
I see there's another question, which is: how close are we to adding support for gRPC in Spin, Wasm, and WASI? So, there is support for outbound sockets and inbound sockets in WASI today. We've done a lot of investigation around this; I actually don't know how that specifically translates into gRPC. To be honest, I would refer you to either the Bytecode Alliance Zulip chat, where a lot of this work is happening, or the Fermyon Discord, where you can ask this question too, because I can't specifically answer the gRPC question. But I know socket support has been implemented — there's a WASI sockets implementation today — and we are working on some of the async implementation there as well. So you would be able to run a WebAssembly module today that can create its own outbound connectivity. In Spin, how we've solved that so far is to bridge the requirement, at least for outbound networking, through the host implementation, because then we can do it that way. But I have to defer specifically on the gRPC question. That was a long non-answer, sorry about that. I don't think it's an issue. So, yeah — I basically muted myself; the reason was, when I was going to talk, there was some issue. I think Nick Johnson mentioned you were going to lose your voice; that's why I muted it. So, yeah, that's the thing, I guess. So, I promised a few links, and they are here: there's spinkube.dev, and Fermyon's site for Spin, where this code lives. In the CNCF Slack there is a SpinKube channel as well, which is definitely also a place to go, where the maintainers and other users hang out. So reach out and hopefully get engaged — try this out. We are very excited about this, that's for sure. Obviously, I guess Slack is the best platform for this kind of stuff, right? Okay. So, yeah, I think we can wrap up, right? Yeah, sure. Okay. So, thank you so much, Michael, for an awesome session. See you again. See you.
Bye-bye. Okay. So, thanks everyone for joining the latest episode of Cloud Native Live. We enjoyed the interaction and questions from the audience. Thanks for joining us today, and we hope to see you again soon.