Well, thanks, everyone, for showing up. We're going to spend the next hour and a half or so running through a tutorial, or workshop, on getting started with serverless WebAssembly with Spin. My name is Mikkel, and I am part of a company called Fermyon. I have one colleague with me up here, Matt, who will be helping out during the tutorial. Fermyon is the company that built this open source project called Spin, which is all about serverless WebAssembly. So during the next hour and a half, you'll get some firsthand experience with server-side WebAssembly. You'll get some familiarity with the Spin framework and a lot of the features we have in that framework. We can end up deploying Spin applications into a cloud offering that we provide, called Fermyon Cloud, and we can also play around with deploying into Kubernetes. There's a fairly new serverless AI feature we built on top of all of this, which you can go and sign up for here. It's part of the workshop, so we'll get to that step later as we go along. When we do this tutorial — and we've done it a few times by now — there are typically two paths people can choose. One: self-guided. Go and have fun, proceed through the workshop at your own pace. Everything you need to go through the workshop and tutorials is in this GitHub repository; there's a QR code here to easily get there. It's fermyon/workshops on GitHub. The second option — and you can mix and match these — is that I'll spend a little bit of time to begin with introducing the concepts behind serverless WebAssembly and Spin as a framework, and then I'll basically walk through the tutorial up here as well, so you can feel free to follow along with what I do or, again, go at your own pace. If you go at your own pace and you hit things along the way where you need some help, or you have some questions, just raise your hand and we'll deal with it as we go along.
Okay, so let me start with a little bit of introduction about WebAssembly and what this whole server-side WebAssembly thing is all about. A few things that are always good places to start: when we talk about WebAssembly, what's important to know is that, first of all, it is a specification. It's a specification of a binary instruction format, and it's designed to be a portable compilation target. Okay, that might not mean a whole lot yet, but we'll unfold it as we go along. The specification and technology originate from the browser. So when I talk about serverless WebAssembly, we're talking about an implementation of the WebAssembly specification that enables you to run WebAssembly outside of the browser. It's important to state up front that programming language support is emerging and stabilizing. So depending on what your favorite programming language is, there are varying degrees of support for WebAssembly outside of the browser, or server-side. And if along the way I say Wasm, it's interchangeable with WebAssembly — it's the same thing, so bear with me. So when we talk about this being a binary instruction format that is portable across operating systems and platforms, what does that really mean? Well, it means that the path from writing your code to compiling it and running it is a two-step thing. First, you have your code, which compiles into Wasm, or WebAssembly, the binary format. And that format is the same irrespective of what programming language you started out with — whether Rust, JavaScript, Python, Go, any of those languages, you'll always get a WebAssembly binary once you compile. There are various ways this happens depending on the programming language.
Matt did a great small YouTube video once where he walked through the various methodologies for those different languages. But once you have that binary and you need to run it, you run it inside a WebAssembly runtime, which is basically a small virtual machine being set up that can run that WebAssembly. There are probably some well-known reference points here — things like how the Java virtual machine runs — and if you look at this in the context of a browser, the virtual machines inside the browser would be the JavaScript engines, like V8 and those types of things. Okay, so that's how we make WebAssembly portable, and that's how this all works. If we talk a little bit more about runtimes and VMs and what that means, I've created this overview of the JavaScript-based runtimes and the WASI runtimes. I talked about being in the browser versus outside of the browser, and this is the same distinction I'm drawing here. These are all the browser-based runtimes, and the reason I call them JavaScript-based is that the way you bind from WebAssembly to the Web browser APIs is through some JavaScript glue code that's part of it. So those runtimes are designed to run your WebAssembly alongside, and complementing, JavaScript. Examples are V8, which is used in Chromium browsers, and SpiderMonkey, used in Firefox. You can actually do stuff with Node.js as well, as a possibility. On the other hand, we have what I talked about as server-side WebAssembly — all the stuff that runs outside of the browser. We can also talk about these as WASI runtimes, and WASI is an acronym: it stands for the WebAssembly System Interface.
So the WASI runtimes implement the WebAssembly System Interface, and basically those are the APIs that enable your WebAssembly to get access to something that resembles an operating system. As opposed to the JavaScript side, where you get access to browser-based APIs — writing to the DOM and those types of things — over here you get access to things like files, and so on and so forth. There's a list of runtimes that support WASI, Wasmtime being the most predominant one, which is also what we use in our framework, and there's actually experimental support in Node.js as well on that side. Okay, so what is it that makes WebAssembly so great? And why does it matter to talk about running WebAssembly outside of the browser when the specification was designed to enhance browser experiences and browser applications? Really, there are four basic properties in the specification that make this really compelling. First of all, the binary size. On the server side, a Rust Hello World example compiles into approximately a two-megabyte binary file, and it's important to remember that that binary is actually portable. It can run on any operating system or any processor architecture, as long as it gets loaded by that virtual machine. Now, you can ahead-of-time compile it to a specific architecture and a specific operating system, and it becomes a 300-kilobyte file. And if you look at how we use the Spin framework on this, the comparable sizes are 2.3 megabytes and 1.1 megabytes. You can see how, in the browser, it's important that the binaries you want to execute are very, very small, because you always fetch whatever you need to execute from a remote host over the internet, right? So that's what this was designed for.
And being able to write server-side services with characteristics like this — these small sizes that are, on top of that, portable — is really compelling. Startup times are comparable with natively compiled code. The best numbers I can find say it's only about 2.3 times slower than natively compiled code, and that's actually a comparison against the just-in-time-compiled version. If you do the ahead-of-time compilation, you'll get much faster startup times. So obviously there's a trade-off, right? You can compile something natively and it works on that particular platform, but here you can maintain portability and still get very quick startup times. I think I've touched on the portability a few times now. The last thing is the security part. Again, coming from the browser, everything you execute in the browser is untrusted code, right? It's downloaded from a remote host, you execute it, and you don't want it to escape its own execution context. So everything runs in these sandboxed environments in the WebAssembly runtime. There's even this thing called a capability-based security model. I mentioned before that the WASI specification — the system interface, the APIs your WebAssembly has when running outside of the browser — can give you access to the file system, for instance. But whenever you run a WebAssembly in one of those runtimes, it doesn't have access to any files unless you specifically, at runtime, define which part of the file system should be accessible to that WebAssembly. That's an example of the capability-based security model: you, as the one operating or running that program, decide at that time what capabilities the program will have outside of its sandbox. Now, these are all really awesome things to have if you need to operate and run stuff on servers, or server-side.
So if we look across the use cases where the WebAssembly System Interface makes really good sense, with these four base characteristics as the foundation, part of what we believe, and what we've set out to achieve with the Spin open-source framework, is to build developer and operator experiences that we quite haven't seen before. Both Matt and I, and a lot of other people from the company, Fermyon, have a background building cloud services at various big cloud providers. And in that space, at least from a developer's point of view, there's always been a bit of a holy grail around the Heroku experience. That's been the thing everyone is trying to get towards in the new world of cloud native. What we believe is that with this whole WebAssembly technology, we now have a foundational technology that enables us to do what we do with containers today, but in a much, much better way in terms of the developer and operator experiences we get. So as we go through this workshop, I hope you get a feeling for that and can see where these things are going. The three main use cases on top of all of this are primarily cloud, plugins, and IoT. I mentioned cloud already — functions-as-a-service types of frameworks, which is what Spin is, are definitely a really good use case. If you think about plugins, there are various scenarios where people use server-side WebAssembly to implement user-defined functions in databases, for instance. Meaning that if you have a database, you can write a function, compile it to WebAssembly, and run it inside the database. I think one of the open-source databases — Mongo, if I remember correctly — already has that type of support for WebAssembly today.
But also, if you're in a scenario where you run a SaaS platform and you want your customers or users to extend that platform with their own code, then because of the security model, the portability, and all of that, it's also a really good use case. And then on the IoT side, there's obviously the low system resource usage, and the fact that once you have a compiled WebAssembly binary, it doesn't need any dependencies other than the actual runtime. Comparing that to trying to use things like containers in an IoT scenario — which is very heavy on resource usage, and often heavy in size — this is a really good progression. I mentioned in the beginning that programming language support for WebAssembly is emerging — I think I said emerging and stabilizing, so you can interpret that the way you want. We've compiled a language support overview that we have on our website; I think the QR code will take you there. That's a really good place to go take a look and get an update on your favorite programming language, or the one you use, and the state of support across the browser and what we have in the SDKs for Spin. I definitely recommend going to take a look at that. Okay. So that was the core of server-side WebAssembly — the benefits of the technology and some of these things. Now I'm going to move into how we use that in this framework we call Spin. This is, we believe, the developer tool — well, it is one developer tool — to build serverless WebAssembly applications. So I'll start to unfold that a little bit and talk about what it is. The slide here says Spin 1.0; we just released Spin 1.5 last week, so we are somewhat ahead of this. But when we talk about this as a tool for building serverless applications, what do we mean?
Well, there's a natural flow around the developer and operator experiences for using Spin, which is tied to these three commands: spin new to start creating applications, spin build to compile the WebAssembly, and spin up, obviously, to run your WebAssembly. If we look at spin new, we have a variety of programming languages supporting Spin with SDKs that you can use. The furthest-ahead languages would be Rust, JavaScript, and TypeScript; I think Python and Go probably come in on a close third. We've been doing some .NET experimentation as well. But there are a lot of programming languages you can use. And because we end up compiling everything down to WebAssembly, what happens once we go from programming to compilation is that when we run things with spin up, we don't care where you started in terms of programming languages, because all we have to run is WebAssembly. It's all the same binary format, which means the operator experience on this side is highly simplified, even though we get a lot of options as developers when we start out. Along the way, what we also build into the framework is what we believe is essential for a lot of developers today: easy access, through a set of APIs, to a lot of the supporting services you would need. So: easily being able to call remote endpoints through HTTP; having access to key value storage built into the framework — basically, there's an API directly in the framework to set and get things from a store that is persisted across requests; getting access to SQL databases; and the latest thing we just introduced is a large language model interface as well, so you can run inferencing based on a set of inferencing models that we provide. I'll talk a little bit more about that later.
In the whole operations space: being able to package these things up as OCI-compatible images, or artifacts, so we can easily use registries to move them around, pull them down, sign them, and all these things; and finally running them. You can run locally with Spin as a CLI — you can always run a spin up command on any machine you want, so if you want to do something around systemd or something else on your own, you can do that. You can use the cloud that we built; we have a reference implementation of our cloud called Fermyon Platform as well, which is built on top of Nomad as an orchestration technology and a lot of other technologies around that. And then there's a project called runwasi, under the containerd CNCF project, that enables you to run a variety of WebAssembly runtimes inside a Kubernetes cluster. So Spin applications can also be deployed to a Kubernetes cluster, and that's part of the tutorial and the workshop as well — if you get time today you can try that out here, or you can come back to the workshop later. So that gives you an overview of the features and functionality inside Spin; I think the top row has been mentioned so far. The core concept with Spin — and one of the reasons we call this serverless — is that the model is very similar to what services or frameworks like Amazon's Lambda or Azure Functions do: you write a set of functions that are triggered by some external event. Typically that would be an HTTP request as an event, but it could also be events happening on remote storage or queue systems — something sitting in a Redis queue or other types of things. These are implemented as something we call triggers inside Spin, and HTTP is sort of the base trigger we provide, because then you can always just call an HTTP endpoint and handle that request. But it's basically an extensible model.
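To make that trigger model concrete, here's a hedged, dependency-free sketch of the idea. This is not the actual Spin SDK API — the real Rust SDK uses an `#[http_component]` attribute and its own request/response types — these simplified structs are stand-ins just to show the shape: one inbound event in, one response out, no state in between.

```rust
// Simplified stand-ins for the SDK's request/response types (hypothetical).
struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// A Spin component is conceptually just this: a function that is
// instantiated per event, handles it, and is torn down again.
fn handle(req: Request) -> Response {
    match req.path.as_str() {
        "/magic-8-ball" => Response {
            status: 200,
            body: "Ask again later.".to_string(),
        },
        _ => Response {
            status: 404,
            body: String::new(),
        },
    }
}
```

Because nothing survives between calls, any state the handler needs (key value store, SQLite, and so on) has to live outside the component — which is exactly what the built-in APIs provide.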
So we've seen people start contributing triggers, for MQTT for instance; I think there's an Amazon SQS trigger out there as well. So you can extend the framework with these types of triggers if you have scenarios where you want to do that. And obviously there's a story around variables, configuration, and secrets in the framework as well. We talk about building full-stack applications, and there is actually a whole lot you can do with these things being available in the framework and the APIs. AI is the new one I mentioned that we built into the service. The initial release we did, in the Spin 1.5 release, enables you to use Meta's Llama models for inferencing and also for embedding. So there's Llama 2 Chat and, I believe the other name is, Code Llama, which you can use directly from within your Spin applications today. You can definitely use that on your local laptop — I would just say, you can expect generation to take tens of seconds, so 20 or 30 seconds sometimes. But what we did in our cloud — and again, this could be a reference implementation as well — is build a way to actually share access to really powerful GPUs, specifically A100s, which means we can get inferencing requests down to around half a second of latency, or startup, to actually run inferencing operations. So this is interesting in itself in the world of AI, but again, from a full-stack development point of view, you can now start building applications where, using the key value store as a persistent cache — we have some examples where we've built a small sentiment analysis application — you can combine all of these things.
So if someone asks a question, you have the large language model figure out whether that's a positive or negative sentiment. We can make the inferencing call and get the reply back, but then we can actually store that in the key value store and say, hey, if this question comes again, we don't have to spend a lot of GPU and power running another inferencing call — let's just pick up the cached answer from the key value store. Or, maybe even more interesting, we can actually create an embedding of the question, so we can compare other questions to that embedding. Embeddings are what you use when you want to do similarity comparison between sentences, and you can say: hey, if there's a 95% similarity between whatever was asked and what has already been asked, let's just take the cached answer instead of doing the actual inferencing call. With all these things available, you can very easily build up these types of workloads. So that's all good and great. As we go along and you start to get a feel for how the Spin framework works, you'll see how easy it is to just start using all these powerful features. And for a lot of these examples, we have this thing we call the Spin Up Hub, where you can go and find examples — provided by us and by people from the community — of applications that are ready to use and deploy; there are also plugins, templates, and libraries that you can go take a look at. So that's a good place to get inspiration around the types of applications you can build with Spin, and how. Okay, that was the introduction: what is server-side WebAssembly, what is the Spin framework, and a bunch of stuff going on in there. Now we'll move into the tutorial part of this where, I think — let me just check.
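The core of that embedding-based cache idea can be sketched in a few lines. This is purely illustrative — not Spin's API — and the 0.95 threshold is just the 95%-similarity figure from the example: compute cosine similarity between the new question's embedding and a cached one, and reuse the cached answer when they're close enough.

```rust
// Cosine similarity between two embedding vectors (assumed same length).
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b.iter()).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

// Reuse the cached answer when the questions are "similar enough",
// instead of paying for another inferencing call.
fn should_use_cache(new_q: &[f32], cached_q: &[f32]) -> bool {
    cosine_similarity(new_q, cached_q) >= 0.95
}
```

In a real Spin application, the embeddings would come from the framework's LLM/embedding API and the cached answers from the key value store; this sketch only shows the similarity decision itself.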
Again, if you want to move ahead at your own pace — okay, that was a long way back — you can go to this GitHub repository. All the instructions and everything you need are there; there's a QR code as well. I will walk through the tutorial from the stage too, so feel free to follow along, and I'll hopefully expand a little bit on what's happening as we do that. The actual workshop is about building a magic eight ball, and there are three variations. A magic eight ball is this thing where you can ask a question, shake it, and it gives you a reply — and you can choose whether or not to comply with what it tells you. First, we build a magic eight ball that returns a random response. Then one that remembers responses to questions — I mentioned the caching thing, so that's a way for us to showcase how you can use the key value store. And I don't know who came up with the AI pun down there, but that's a magic AI ball, I guess that's how you would pronounce it: the responses to questions are backed by a large language model, rather than just a fixed set of responses the magic eight ball can choose from. So if you get that far through the tutorial, you'll be able to try that as well. There are a lot of sections in the workshop, but basically we will write a JSON API — consider that a backend type of service to begin with — that just gives us replies back, and that will start introducing you to the whole model inside Spin, where you send an HTTP request and get a response back. We can augment that with some AI stuff by sending the data that is being posted through the LLM inferencing model. Then we can add a front end to the application, we can start using caching, and we can deploy it to either Fermyon Cloud or to Kubernetes.
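The first variation — a random response — boils down to picking from a fixed list. A hedged sketch in plain Rust, with no external crates: the pseudo-randomness here comes from the system clock rather than the `rand` crate the workshop code may use, and the response list is just illustrative.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// An illustrative response set; the workshop's actual list may differ.
const RESPONSES: &[&str] = &[
    "Ask again later.",
    "Absolutely!",
    "Unlikely",
    "Simply put, yes",
];

// Pick a response using the clock's sub-second nanoseconds as a
// crude source of randomness (fine for a demo, not for anything serious).
fn magic_8_ball() -> &'static str {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before 1970")
        .subsec_nanos() as usize;
    RESPONSES[nanos % RESPONSES.len()]
}
```

In the workshop, this string would be serialized as JSON and returned from the HTTP trigger handler.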
And then there's a little bonus exercise at the end where you can build a lottery spinning-wheel application to showcase some of the SQLite features that are there as well. So, another link to the same workshop. Before we get started: the logistics are basically, if you want to go at your own pace, do that. If you have questions along the way, just raise your hand — Matt will pay attention and can come around and help answer any questions you may have. Before we do that, I think we can do a little bit of Q&A right now. Yes — and we do need the microphone for recording purposes. [Audience] You were talking about SQLite being available to the functions. What's the lifetime? I'm assuming that spans across instances of the function being run — or I don't know what the right term is. [Mikkel] We can call them functions; well, I think we call them components. [Audience] Okay, components. So if I've got five instances of that running at any given time, or over the course of half an hour, that SQLite data would persist and be available to all of those? [Mikkel] Yeah. The execution model in Spin — because of the quick startup time and the small binary size I mentioned — is that, for instance, in our cloud, we never run more than a single instance of any application, because for every request, or every trigger that needs to be handled, we have enough time to load the component, the WebAssembly, execute the request or the event, and just shut it down again. So you don't have anything persisting in the component between requests or between trigger actions, which is also why the SQLite feature and the key value store are there: that state is externalized from the actual execution. And in the Spin model, there are ways you can define what the implementation is behind either the key value interface or the SQLite interface.
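When self-hosting with spin up, that backing-store choice is expressed in a runtime-config file passed to the CLI. The shape, to the best of my understanding of the Spin documentation at the time of writing (the URL and the "spin" type label are illustrative, so check the current docs), looks roughly like this:

```toml
# runtime-config.toml (illustrative sketch)
# Back the default key value store with Redis instead of the built-in store.
[key_value_store.default]
type = "redis"
url = "redis://localhost:6379"

# The default SQLite database can be configured similarly; "spin" here
# would mean Spin's own file-based SQLite implementation.
[sqlite_database.default]
type = "spin"
```

The application code is unchanged either way — it talks to the key value or SQLite API, and the host decides what sits behind it.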
When you run that locally — because with Spin you have everything locally — we just use a file-based SQLite, so that's always there and will persist across requests. We have an implementation in our cloud where we can use other, real databases, but basically you can add a configuration to the Spin application saying: I want a Redis server backing the key value API used on this host, or I want this database backing the SQLite API used on that host over there. So you have options in how you want to do that. We don't have a way today to run multiple instances — because, let's say, it would basically just be a race condition in the end: who's going to come first? And if it's distributed across multiple regions, we're moving into distributed, eventually-consistent territory and stuff like that. That's not a thing we've gotten to yet, but probably something we will need to think about at some point. Yeah? [Audience] So if you run one instance at a time, are you moving that instance to be near the requester? Or where is it geographically located? [Mikkel] I can speak to the implementation we've done for our cloud — which is not even a year old yet. In our cloud implementation we just have a single region at this point, but we are starting to design a multi-region setup, right? And if you wanted to do that yourself, you'd have to think about that as well. The huge benefit we get, because we use WebAssembly, is this: there's a great talk that an engineer from AWS Lambda did a while back, a re:Invent talk, about how one of the big problems with a thing like Lambda is the concept of cold starts — that if you send a request and the Lambda function hasn't been used in a while, it can take some time before it replies.
And that is basically a storage problem — a logistical problem, right? Because you may have written a function, say a Python function, that relies on a particular framework. Now the job to be done is to get that function code to a host somewhere, with all the dependencies needed for that thing to run. So a request comes in, you need to find the code, you need to find a host to put it on, and that host has to be warm. If it's not warm, it starts cold, you have to warm it up, and then you keep it running — and there's a huge overhead in that, right? And you can imagine how, depending on the programming languages and frameworks in use, it becomes a big combinatorial matrix of hosts supporting some things here and other options there. But because these are WebAssembly, that combinatorial matrix for us has just one cell. It's a WebAssembly: we can run it on any architecture, any operating system, and we don't have any dependencies. So that logistical challenge of finding a machine that's hot and can execute the WebAssembly to handle the request basically goes away — we only need one pool of machines. Again, if you think about this in the context of a Kubernetes cluster, for instance: a lot of cloud providers use the concept of node pools, where nodes have different operating systems or different architectures and so on. Again, you start creating that combinatorial matrix, where you need to handle the logistics of all these things. And by the fact that this is WebAssembly, that problem goes away. That's one of the big benefits of what we can do here, right? Okay? Yeah? By the way, do get started with the workshop — time is passing and there are a lot of questions, so just go ahead if you want to. [Audience] One question about how much time it takes to create the virtual machine.
[Audience] Is the virtual machine provisioned ahead of time, or just at runtime? [Mikkel] Well, I don't have the exact numbers, but if you use a runtime like Wasmtime to do this, it's basically just a few milliseconds to start that virtual machine up. The way you run a WebAssembly is that you call the runtime and pass in the WebAssembly file to be loaded and run. So the VM also only exists for the time the WebAssembly is running, and then it's all torn down. [Audience] Thank you. I'm working in the automotive industry in Japan. In automotive, hardware is very limited on memory and storage size. So, do you think this solution has a small enough memory and storage footprint — is it applicable to that scenario? [Mikkel] Yeah, I think so. At a recent conference, actually, Bosch Mobility Solutions — Bosch, the German auto supplier — showcased how they're starting to use WebAssembly inside some of their in-car solutions. I have previous experience working with Bosch as well, where they were trying to do these things. One of the problems they have is a lot of spread-out GPUs in those cars that can't easily coordinate on how to best utilize all the processing power. They were trying to do these things with containers, but it was just too heavy a workload. And really, from what I saw back then, and from WebAssembly today, I think this is the solution that's needed there — because again, you have the security, you have the small size, you have the startup time, you have the portability, which means that if various processors get added to your car, sourced from wherever, you can run the same application across all of them, because the architecture doesn't matter. So yes, I think it's highly applicable in those scenarios. Okay, cool. I'll start showing some Spin stuff up here, and then, you know, more questions. We have a booth up here as well.
We'll be here for the rest of the day and tomorrow, so come by and have conversations. We also have a Discord channel where we're more than happy to welcome all of you. Okay. So let's switch over to the workshop. This is the GitHub repository we have, and you can see there's a bunch of things we need to get done in here. As we do this, I'll see if I can do a side-by-side screen up here as I go along, because that will help me see what I need to do, and you can see in the terminal what is going on. Are we okay on the font size up here? Anyone in the back? Yeah, good. I've already cloned the repository, so I have everything down here. One thing to know about this repository is how it's structured — okay, that wasn't easy to see, let me actually do that over here. It's structured so that underneath the directory spin, you have all the documents describing the steps of the workshop, and along the way we need to build a set of applications. The workshop is created so that there are code samples to do this in Rust, and code samples to do this in TypeScript. You can choose either of those. You can also choose your own adventure and try to do this in Python if you want, but we don't have the code samples for that. As a shortcut, all the apps have already been written: there's a directory called apps where you can see the actual implementation of the individual steps as we go along. So if you need to, you can look there as well. Okay, so the first thing we need is an environment where we can actually run Spin. In the repository we provide a few options. You can configure your local environment — all that requires is that you download the spin binary, a few templates and plugins for Spin, and then have the toolchain for the programming language you want to use.
So if you want to use Rust, you need the Rust compiler, and you need to be able to compile to WebAssembly from Rust. If you're going to do JavaScript or TypeScript, I think the only thing you need is actually npm, and then you're good to go beyond Spin. And it's not like I want to, but I can't help sometimes calling out Docker experiences a little bit: it is a different type of developer experience here, but there are a lot of the same benefits you'll eventually get from this. So you can install all of that locally. We also provide a development container, so if you use Visual Studio Code and you use Docker (because Docker is still really, really useful in those scenarios), we created a container with all the requirements installed in it, and you can also run that container in GitHub Codespaces if you want to. I am going to go ahead just using my local environment, because that is already set up. So we have Homebrew to install Spin; I already did that. And basically, if we take a look, we can see that I'm on Spin 1.5. So I have Spin installed. There are a few other things that the installer does for us. Spin has two concepts that I think are interesting to know. First, there's the concept of a template, which is basically how you bootstrap an application. And there is a... let me do that. There is a way where you can provide your own templates; it's a fairly easy thing. You can see some of those things in there are actually provided as templates. The templates bootstrap components in an application; I'll explain in a bit what that means. But you can see there's a bunch of templates that come with the installer, and you can see where they originate from. A lot of these come from the main Spin repos. Some come from the SDK repos, like the JavaScript and the Python ones. And then you have something like a key value explorer (we'll get to that later) and a QR generator: prebuilt WebAssembly that you can just add to your application.
So there's this notion of composing applications you can do as well. The other concept is plugins. Spin has a concept of plugins that just enhance the developer experience, and you can see there's a bunch of plugins that have been installed already. Python to WebAssembly, JavaScript to WebAssembly: those are basically things that we have created to help get from your programming language to WebAssembly. There are some Kubernetes plugins and other plugins that I have installed with my Spin. And again, it's very easy for you to write plugins for Spin if you want to extend it with more triggers and those types of things. Okay, now that we have Spin... I'm pretty sure that was actually part one of the workshop. Okay, well, that was part zero. Let's get to part one. So the thing that we want to do is start with this magic eight ball application. I'm going to go ahead with the Rust version up here, because it's been a while since I've done the JavaScript one. Let's see if this can work. So what we will do to begin with is use one of these templates to bootstrap the application. We'll do the `spin new` command, and we'll pick the template called `http-rust`. We usually name these templates trigger, then programming language, so it's easy to remember: this is an HTTP-based trigger, using Rust. We can give it a name, and we can accept the defaults. Usually the templates can ask a few questions: you know, do you want to put in a description, and so on and so forth, but we don't have to here. Actually, let me do something. Let me just make a directory, and let's go over there. And let me go back and create it in there. Okay. I can now go into the directory that was created, and you can see what we get in here: we get a spin.toml file and we get some Rust source code.
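For reference, the generated manifest looks roughly like this. This is a sketch from memory, not the exact generated file; the fields and versions in yours may differ slightly depending on the Spin version:

```toml
spin_manifest_version = "1"
authors = ["you@example.com"]
description = "A hello-world Spin application"
name = "hello-rust"
trigger = { type = "http", base = "/" }
version = "0.1.0"

[[component]]
id = "hello-rust"
source = "target/wasm32-wasi/release/hello_rust.wasm"
[component.trigger]
route = "/..."
[component.build]
command = "cargo build --target wasm32-wasi --release"
```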
Let's start with that spin.toml file, just to understand the anatomy of these types of applications. The toml is a manifest that you have to create for every Spin application. The first six lines are just a little bit of metadata around the application, so you can, you know, define versions, authors, descriptions, and the type of trigger. Today we only support a single trigger type per application, but we definitely want to be able to mix trigger types in applications. And then the next thing, starting at line 8 as you can see, there's room for an array of component tables down here. So the anatomy of these applications is basically all the components that make up your application. And what you'll notice is that part of the trigger definition here in line 13 is that there's a certain route within the HTTP structure that this component handles. So basically what we've defined for this component is that if an HTTP request hits under the root (you know, this is going to be localhost if we run this locally), the WebAssembly that we want to handle it has line 10 as the source. That's going to be a WebAssembly that we compile. So you can easily imagine how another component can handle another route, which is another WebAssembly handling that route. That's how the anatomy of these applications works, and you can start spreading out the various functions between various WebAssemblies. And again, what's really interesting here is that lines 13 and 10 say nothing about the programming language that has been used, right? Because it's just a WebAssembly handling that request. It's only the build section down here, where this is built using Cargo, that is an indication to us that this is actually a Rust application. So if we take a look at the code in here, there are a few things to notice. Even if you don't know Rust, it should be fairly easy to follow along.
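The template code we're looking at has roughly this shape. To keep the sketch self-contained, I've swapped spin_sdk's HTTP types for plain stand-in structs; the real component uses the SDK's own types and macro:

```rust
// Stand-ins for spin_sdk's http types, so this sketch compiles on its own.
// A real component uses spin_sdk::http::{Request, Response} instead.
struct Request {
    headers: Vec<(String, String)>,
}

struct Response {
    status: u16,
    headers: Vec<(String, String)>,
    body: Option<Vec<u8>>,
}

// In the real template this function carries spin_sdk's #[http_component]
// macro, which tells the host to hand incoming HTTP requests to it.
fn handle_hello(req: Request) -> Response {
    // Log the incoming request headers to the console.
    for (name, value) in &req.headers {
        println!("{name}: {value}");
    }
    Response {
        status: 200,
        headers: vec![("content-type".to_string(), "text/plain".to_string())],
        body: Some(b"Hello, Fermyon".to_vec()),
    }
}
```

Again, these stand-in structs are only for illustration; in a Spin project the request and response types come from the SDK.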
I'll do my best to explain as we go along. There are a few use statements, or imports, at the beginning, where you can see there's an SDK that we use from Spin. The macro definition down here says this is the function that is implementing this component. What that means is we've told the host that when the trigger fires, the HTTP request is going to be handed over to this function. So I can add other functions that I need in here, which would be functions internal to my logic, but the one that I annotate with that macro is the one that's handed the request. You can see in line 9 how this function takes a request and sends back a response. And basically what we're doing here is we're just going to say OK, with a header and some body, and we'll print a line out to the console with the request headers that were actually involved in that request. Okay, so now we have that. The next thing we need to do is run a `spin build` command. Basically, the `spin build` command hooks into that toml file, right, where you can see there was a set of cargo commands defined. And that gives us a common build experience: you know, if I had Rust code and Python and JavaScript all in one application, each individual component would basically have its own build command that I could define in there. It's just a shell command that we call out to. So now my WebAssembly is compiled and I can run `spin up`. So now I have my application listening. So if we go down here, we go to localhost:3000... there you go. What did I do wrong? Oh, thank you. I don't have my glasses on, that's why. There you go: Hello, Fermyon. Okay, so what happened? Well, when we curled the host process that we have running up here, the way it works is that the host process starts up a child process that is the virtual machine and the sandboxed component instantiation.
So the host process is listening on the port, which means the host is the one that actually has the web server implementation, because that's what we need here. So the request hits that host process; it looks up in the spin.toml which route matches the request, which tells it which WebAssembly to hand the request over to. The WebAssembly is loaded, the function is handed the request, the response is returned, and we get the response back here. So that's just a few lines of code to get started with these types of applications. Oh, okay. I guess the next step in the workshop is that we want to change things. So let's go into this one and say... I guess Bilbao; that's the only thing I can think of when I do this: what city are we in? Okay, let's do Bilbao. Okay, "Hello Bilbao" we'll go and say. I do another `spin build` (I did the spin build up there), I can do a `spin up` up here, and I can go back, and: Hello Bilbao. Okay, so that's pretty straightforward, right? There's an easy way for us to iterate on this. We actually have a command in Spin that is called `spin watch`, and it's a regular watch command, which means that if I go and change something in the source code (I don't know why, but I'm moving around; it's just another exclamation mark in here), you can see that the build was triggered. So now, if I curl again, we have the update. So `spin watch`, again from a developer experience point of view: you know, it's fairly easy to get going here. `spin new`, pick your template, start writing your function code, add the `spin watch`, and you just start iterating. Cool.
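Going back to the route lookup I described a moment ago, the idea can be pictured as a small matching function. This is my own toy sketch, not Spin's actual implementation; Spin's real precedence rules are more refined:

```rust
/// Toy version of the lookup the host does: a route ending in "/..." is a
/// wildcard matching everything under its prefix; other routes match exactly.
fn route_matches(route: &str, path: &str) -> bool {
    if let Some(prefix) = route.strip_suffix("/...") {
        path == prefix || path.starts_with(&format!("{prefix}/"))
    } else {
        route == path
    }
}

/// Given (route, component-id) pairs from the manifest, pick the component
/// for a request path, preferring the most specific (longest) route.
fn lookup<'a>(routes: &[(&'a str, &'a str)], path: &str) -> Option<&'a str> {
    routes
        .iter()
        .copied()
        .filter(|(route, _)| route_matches(route, path))
        .max_by_key(|(route, _)| route.len())
        .map(|(_, id)| id)
}
```

So with a manifest declaring `/magic-eight-ball` for the API component and `/...` for a file server, a request to `/index.html` falls through to the file server while `/magic-eight-ball` hits the API.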
And actually this is the same with TypeScript, so if you go through the tutorial here on the left-hand side, we'll do the same: we create a new Spin application, we build that with `spin build`, we use `spin up`, there was a bonus feature around `spin watch`, and we modified the HTTP trigger. Oh, we didn't modify the route, but I think I talked about that, right? How in the spin.toml we could define the routes and how that worked. So let's go ahead with that magic eight ball. What we're going to implement now is, instead of just returning some text in the body, we're actually going to build a real JSON API that returns one of these four answers: "Ask again later", "Absolutely", "Unlikely", and "Simply put, no", randomly selected. So we can go over here, and you know what, I am actually just going to show the app, because I think that would make as much sense as you having to watch me copy-pasting code over here. So again, this is the finished app that this section will lead you through. The spin.toml is the same, right; the structure is all the same, and the only thing we changed in here is that we actually have a designated route for the API, which means that any HTTP request that comes in to /magic-eight-ball will be handled by this WebAssembly.
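The core of what that section builds can be sketched like this. Note the workshop sample uses the `rand` crate for the random pick; this dependency-free sketch leans on the subsecond clock instead, and the JSON is built by hand rather than with serde:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// The four answers described above.
const ANSWERS: [&str; 4] = ["Ask again later", "Absolutely", "Unlikely", "Simply put, no"];

/// Pick one pseudo-randomly. The workshop sample uses the `rand` crate;
/// the subsecond clock stands in here so the sketch has no dependencies.
fn answer() -> &'static str {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.subsec_nanos())
        .unwrap_or(0);
    ANSWERS[nanos as usize % ANSWERS.len()]
}

/// The JSON body the API returns, built by hand to keep the sketch small.
fn answer_json() -> String {
    format!("{{\"answer\": \"{}\"}}", answer())
}
```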
Looking at the code as implemented: again, we're using the same imports from the SDK; we have annotated the handle-magic-eight-ball function up here with that macro, meaning this is where the request goes; we build a JSON body based on calling the answer function, and we just return that in an OK HTTP response. And then we implement another function down here, so you can see how you're free to implement any type of function and structure you want in these small libraries. Basically, this answer function will just return a string: there's a set of possible answers that it can return, and it randomly picks one. And that's it. So beyond having that HTTP-component-annotated function as the handler, whatever you want to do with that HTTP request, you can do in here. Again, the commands to get this going... actually, let's just look at that build command in here for a second. There's this component build section as well, right, where this is the command: whenever I run `spin build`, it actually just runs `cargo build`. You can see that the target definition for cargo is to produce a WASI-compatible WebAssembly, which is what we want here. And then you can see there's also this definition for the watch function: if I want to iterate and just have a watch running, I'm able to define an array of globs to be under the watch. So if any of the files matching the glob statements in here change, we will go and rebuild the application. So again, we can do a... actually, let's do a build. `spin watch`, sorry; `spin watch` will always build. And then... this is nice. So, something's going on here: "failed to select a version for anyhow". Okay, what do we have in here? anyhow... Matt, can I get some help? What is this? This is crates.io, right? Let's check anyhow. What's going on here? Do we need to be more specific? Let's do that; let's be more specific; let's see what happens. Let's try `spin build`. No: "failed to select a version for anyhow". Do we have something not updated here? Oh, I know it. Can I ask you to open a
GitHub issue? We did not update the SDK references in here. Let me go ahead and do that. Actually, let me go and revert this... 1.5.2... 1.4.2... But did you use the pre-build? So we're still back to "failed to select a version for anyhow". Let's find spin... okay, I think it said 0.7; let's just try the latest. This is just, like, you know... I think the lesson this demonstrates is that this is just Rust development. With what we're dealing with here, there's nothing coming from the programming language that you use, until we get to compile time, that actually impacts you because you're targeting WebAssembly. That's not entirely true, because of the stuff you can't do in WebAssembly, or in WASI, today, but from a general perspective, this is just developing functions in the programming language that you use. `spin up`. We have the magic eight ball... shoot, we should have asked the question; I forgot that. What's the question? "Should we build more Spin applications?" "Ask again later." "Should we build more Spin applications?" "Absolutely." There you go. Okay, could have been better. Okay, cool. So again, if you go through this on your own, you sort of step through it and get a feel for how the actual code works. Basically, you can see how to create a JSON API. Hopefully you noticed in the browser window (my browser, where I have an extension to parse JSON) that this is truly JSON coming back. I'm glad we didn't get "Unlikely", by the way. Okay, that was the second part. For the AI one, I think I'll probably just walk through it, looking at some of the code in here. The big change with this AI part (and this is actually the first time we're introducing something from the SDK that is not just, you know, doing HTTP requests and responses) is that we have this interface called llm, to do large language model inferencing, using
Llama 2 Chat or Code Llama. So what we need to do, instead of just having a fixed set of replies, is some prompt engineering here: we can start, you know, creating a context prompt (actually, isn't that what you call them? That's what I would call it, anyway, but there's probably a different word for it), a context prompt that we provide to the large language model, sort of setting the theme: you're now this magic eight ball and you have to reply, and so on and so forth. But what I think we believe is really powerful in this model (and again, this comes back to our core enthusiasm for developer experience and all of that) is that within this SDK, all I have to do to run an inferencing request is define my model and call the infer operation, passing in the model and passing in my prompt, and I'll get an answer back from that. And anything in terms of hosting those LLM models, getting access to the GPUs, and all of that setup is taken care of for you. So you can see how, you know, going from a static set of answers in this type of application to having something that is way more dynamic is actually pretty simple. Let me go into this one here. Sorry, I'm just going to stay on the Rust route. I just want to see... let's show... that's not what I wanted to see. What happened? Let me go back. Yeah, so you can see we have the application in here; we're going to check whether we get a question and an answer back or not; we're going to use that Llama 2 Chat model, I want to say, for the inferencing call down here. When you do these types of inferencing calls, there are a lot of parameters you can turn, like temperature and stuff like that: you know, how precise do you want the answers to be, and so on and so forth. So we're able to use another function, which is actually infer-with-options, so we can fine-tune those things. I think the one thing that I just want to show is how this will actually work
once you get it into an infrastructure where you have that inferencing piece set up. What I'm doing down here is using another command in Spin, called `spin deploy`, that is attached to a plugin we created for our cloud. What that does is it basically takes this application, packages it up as an OCI package, and deploys it into our cloud, and in a few seconds this application should be available. And if we're really lucky... you can see the application is already up and running. So again, talking about that operations model and everything: I mean, it literally took us 21 seconds to deploy this whole application, and it is ready to reply. I think what I need now is a way to ask a question. Let me see if there is a curl command. Okay, so we can basically just pass in the data. So what should the question be? "Should we visit the Guggenheim Museum later today?" Let's see if I can get that right. Let's ask that question, and I just need to get the URL right. "Yes, it's a must-see attraction." So with the few lines of code you have in here, you now have an API that you built, with some prompt engineering, that is a real LLM-based magic eight ball that can easily answer these types of questions. And again, I want to call out the roughly one-second execution time; I have a small stopwatch here. Approximately one second to go through that whole flow: getting the API request to the cloud, calling the inferencing model (which right now is a set of GPUs in a totally different data center, actually across an ocean), having the LLM run that inference, and getting the reply back to me. This is actually a really cool demo; I haven't done this before; I'm a little bit excited right now, I don't know if you can hear that. And by the way, even though I'm not a magic eight ball, I went to the Guggenheim Museum yesterday and I completely agree with the magic eight ball: it is a must-see attraction; you should go and visit. Okay, so that was a little bit of AI, which, in the whole Spin model, is just another
thing you can do with your applications. Now, the next part of the tutorial helps you, because these applications are great as APIs, but if we actually want to build something that is a real web application, and we want a frontend, how do we do that? If you think about how I talked about the Spin model and that spin.toml file, it's actually very easy for me to add another component that will deal with the whole frontend. And what's interesting about these component references is that a component doesn't have to reference a WebAssembly that I have available locally on my machine. You can see in the code sample here that the component I want to use is actually a remote component that we get from GitHub, from a URL. We built this thing we call the static file server, which is basically a WebAssembly component that takes files that are available to that WebAssembly and serves them back as bytes to the client. So an index.html, JavaScript, whatever we want to serve with that static file server, we can get that out there. So all you have to do, because of the templating system again, is run a `spin add` command and add the static file server; we can call it "file-server". The template is set up so it's going to ask us a few questions. The first is which path you want the file server to serve, and we actually want it to serve the root, so if you go to a browser you don't have to do slash-something; it's just the root. And then we can map, you know, which local directory, when I build my application, holds the files that I want the file server to serve: where they're located. In this case, we're going to tell it that we'll have a directory called assets. So if we look at the toml file now, you can see there's another component being added. So right now we have two
components in here: we have the original eight ball API, and we now have the static file server, which is just a WebAssembly that I didn't build. Someone from Fermyon built it, and I trust that, and there is a digest in here to make sure it's the right thing we get when we run. And you can see how the route and the files are actually set up here. So the only thing we have to do then is just, you know, get whatever static content we want to serve into that assets folder. And I think that should be here somewhere; let me just do a little bit of digging around, and I think we should be able to find it. There is the frontend. So if we go back to our AI Rust app, we should be able to copy that into assets. Isn't that what we called it, assets? I think that's what we called it. And if we do that, we should get all the files. Let's just see what ended up in there. Okay, so you can see we now copied an index.html, some styles, and an icon into the assets folder. So if I do a `spin up`, you can see that we actually now have these two components being served, right: there's still the API being served on the magic eight ball route, but we also have the file server listening here. And we can see that we now have a frontend; that's just the index. So we can try another question for this magic eight ball. Does anyone have a good question that we want to ask? Yeah? Okay, I'm blanking right now... something about food. "Should I go to a fancy restaurant tonight, or just do a simple takeout and eat in my hotel room?" Well, I don't know, let's see. Spin that one... and did we get a reply back? Oh, we're running this locally. That's absolutely not what I want to do; I want to deploy this to the cloud. Just hold on a second; I need that LLM feature. Well, at least we know the frontend works now, and I'm deploying that into the cloud, and then I have all the inferencing ready for me. So last time this was the 21-second deployment; let's see how quick it goes now. It might take a little bit longer
because there are a few files we need to upload. Okay. Okay, so now we have the application in the cloud. Okay: "Should I eat fancy tonight?" This is a little bit more simple... "Your taste buds decide." Okay, that's like... I don't know what to do with that answer, actually. But anyway, with a fairly small amount of code, we now have a fully AI-backed, or generative-AI-backed, application: a magic eight ball that can help us make decisions in our life. Make any important decision based on this... as a disclaimer, I want to say: don't. But again, what this particular section showed you is how this idea of composing applications based on different components works in Spin. We have this static file server that's good for web content and frontends, and these individual components can actually call each other. Well, that's a stretch here, because basically it's my client calling the API, but you actually can have multiple components that call each other, so you can chain calls that way. One of the recent examples one of our colleagues built was basically an OAuth component: if you want to build a web application and you want to do some OAuth integration, you can just take that component, add it to your application, and now you have OAuth functionality. So there is a vision here around this whole component thing that I talk about, where in Spin you can sort of see the outline for how, at a given point in time, this idea of composing applications out of pre-built components is actually a thing you can do. What we're doing in Spin right now is a model that works just with Spin, but it is inspired by, and sort of spearheading a little bit, a specification in the WebAssembly community which is called the WebAssembly component model. We actually do have support for that in Spin right now, but the component model itself, and the implementations of the component model (in particular in Wasmtime, which is the runtime that we use), are not far enough along for us to fully rely on it. But what
that means is that there will be a standard specification for how you can build these WebAssemblies as components that define a set of imports and exports. So you can have a WebAssembly component that, in the case of it being a Spin component, would import something called wasi-http, which again is a set of defined interfaces that anyone can go and implement. That means that particular component can run in any host, not just a Spin host, that supports wasi-http. And you can think about it this way: the same way that you use libraries to import functionality within your programming language, you can actually do that at the WebAssembly layer. That means you can have a particular function implemented in one programming language, but you can reuse that function by importing it from other programming languages, because it all works at the WebAssembly layer. There's a lot of work still to be done to get that wrapped up and make it usable; in particular, you need to bridge between all the toolchains and programming languages down to the WebAssembly layer and understand the types and all of that. But I think the longer-term vision is that, the same way we just pulled in a file server that someone else wrote and made it part of our application, that way of composing applications is something you can do with the component model in the future. Cool. So that was the file server part of it. Now we want to make this even smarter, and we want to be able to actually store the responses that we have between requests. Remember how I mentioned that between individual requests there's nothing we can persist; there's no concept of a session like you would normally know from web servers. So having a key value store is really, really valuable for a lot of scenarios. The key value store interface that we've implemented in Spin is pretty straightforward. I can show you here.
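As a standalone sketch of what that interface feels like, here is the same set of calls mocked with a `HashMap` in place of spin_sdk's `Store`, so it runs outside a Spin host. In a real component you'd call the SDK's `Store::open_default()` and grant the component access in spin.toml. I've also included the caching rule the workshop app adds on top, which we'll get to in a moment:

```rust
use std::collections::HashMap;

/// HashMap-backed stand-in for spin_sdk::key_value::Store, mirroring the
/// operations in the docs: open, get, set, delete, exists, get_keys.
struct Store {
    data: HashMap<String, Vec<u8>>,
}

impl Store {
    /// In Spin this is Store::open_default(); locally it is backed by SQLite.
    fn open_default() -> Self {
        Store { data: HashMap::new() }
    }
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: &[u8]) {
        self.data.insert(key.to_string(), value.to_vec());
    }
    fn delete(&mut self, key: &str) {
        self.data.remove(key);
    }
    fn exists(&self, key: &str) -> bool {
        self.data.contains_key(key)
    }
    fn get_keys(&self) -> Vec<String> {
        self.data.keys().cloned().collect()
    }
}

/// The caching rule the workshop app adds: reuse a cached answer, unless
/// it was "Ask again later", in which case drop it so the next ask
/// produces a fresh answer.
fn cached_or(store: &mut Store, question: &str, fresh: &str) -> String {
    match store.get(question) {
        Some(bytes) => {
            let answer = String::from_utf8_lossy(&bytes).to_string();
            if answer == "Ask again later" {
                store.delete(question);
            }
            answer
        }
        None => {
            store.set(question, fresh.as_bytes());
            fresh.to_string()
        }
    }
}
```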
developer.fermyon.com is where we keep all the documentation for Spin. So if we look... I think we should be able to check out the key value store down here. You can see the set of operations that we have for this interface: you open a store; you can get a value by providing a key; you can set a value by providing a key; you can delete; you can check whether a key exists; you can get all keys; and you can eventually close. A fairly simple API, like Redis. And when I use that locally on my machine right now, this is all backed by a file-based SQLite database; that's all just set up for you. When you deploy this into an infrastructure, like in our cloud, we use a managed database behind the scenes to persist this. But (it may or may not be described in here) there is actually a way you can implement this using Redis as the backend store instead of just using SQLite. So if you have a setup where you want to persist this in a Redis database instead, you can do that, but it's the same API you would use from inside your code. Okay, so let's see how far we got. What we want to do is store the questions and answers, so again, this is basically a caching mechanism that we can add. Let me see... this is 05, and Rust. Let me see how much we have in here. Yeah, we have the frontend and all that; that's nice. Okay, we can go back and start taking a little look at the code. So this one is actually not AI-enabled anymore, which is a shame, but that's okay. There's another function being implemented here: we'll go and check whether we have things in the cache. And again, the API is pretty straightforward: you open the store. There's this thing I used to refer to as a convenience feature, where I can basically say open-default, and it really ties into the local developer experience. Within the component, what I've defined here in line 19 is which key value stores this particular component has access to. So remember, in
the beginning, I talked about this capability-based access model that WebAssembly uses. That means that even though all these components live inside the same application, and would be deployed inside the same application, you have to specifically define what resources each component has access to. So in this case, the file server component, running in its own little sandbox, will never be able to access that key value store, because it's not defined in here that it has access. Okay. I can actually add multiple key value stores per component, and multiple components can have access to the same stores, right, but they have to specifically get that access by defining it in here. Going back to that magic word "default": the way that works locally in Spin is that we just create that SQLite database for you and store it there. We also have that feature supported in the cloud, but the way the whole thing works is that you can pre-define the key value stores that are made available in your infrastructure, and then, you know, the component has to say which key value stores, specifically by name, it wants to use. Then, in line 20, there's another part of the capability-based security model, which is that if a Spin component wants to talk to an HTTP host, again, you have to specifically define the hosts that the component is allowed to talk to. And I think what I like about this file (and this is something we always come back to when we enable new features; it's actually the same with the AI feature I showed: you have to define in here which AI models, which are the LLM models, the component gets access to) is that if you just want to run a Spin application, which you potentially haven't developed, by looking at this file you can actually see the various capabilities that each component requires, and whether you can fulfill those, or how you would fulfill those. So it gives a great overview of what's going on and
what's required for this particular application. I think I have the other thing open here. So the open-default call basically takes that open call and uses the default store. Then we check, by calling the get function, whether the particular question actually exists in the store or not. We're using a match statement here in Rust, so we can get an Ok or an Err back. If it's actually there... let's see... we actually get the answer in there; that's the value. If we get a value back, we get the answer and put it into the answer variable. If the answer is "Ask again later" (I guess that's the thing in the logic of this code), we remove the question and answer from the store; if not, we just provide the answer. So basically, I guess the logic here is a little bit clever: if my magic eight ball says "Ask again later", it won't keep saying "Ask again later"; it will actually invalidate that entry from the cache, which is fairly advanced magic-eight-ball caching logic going on here. If we don't find it, we'll error back on the get, but we'll still set the question and answer that was provided, so we now have that cached. And that's it: like 20 lines of Rust code here, and we have an advanced magic eight ball answering mechanism with built-in invalidation for the case of "Ask again later". Okay, so again, we can build this application. The build is going great now that we fixed the anyhow thing. `spin up` the application, go to the file server. "What food should I eat?" "Should I eat food?" I guess that's what we can ask. "Should I eat food after this?" Let's see: "Ask again later." That's good. We shouldn't get an "Ask again later" now, unless we hit the logic where the new reply is going to be "Ask again"... no: "Absolutely." There you go. Okay, cool. Which means that every time we ask the same question now, we will always get "Absolutely". So now we have that caching mechanism. And just to make this real, there's a .spin directory in this
And just to make this real: there is a .spin directory in this application, which is sort of a working directory. What you'll notice in there is that we don't only store the logs, whatever is written to standard out and standard error; you can actually see the SQLite files down here as well. There's the real SQLite relational database, and then there's the key value one. SQLite has a dump feature, so let me do that: now I'm just dumping whatever is in that key value database to a local SQL file, just to show you the implementation of this. If we take a look at that file, we can basically see what we added. We just have a simple table: it has a store name, it has a key that's a text field, and the value is just a blob. That's how we've implemented a key value store on top of a relational database. Anyway, that was a bonus feature. Cool, so: a key value store, a simple API, and 20 lines of advanced caching logic written down here. I already skipped ahead a little, because this part is about how to deploy the thing into the cloud, and I showed you that with the AI example. The thing I can add here is that beyond being able to deploy this, there's a UI where you can see these applications. If you look at the magic eight ball, you should be able to see we received some requests early on. We didn't actually write any logs here, but if we had, there's a little UI to make it easy for you to see what's going on in there.
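The dump showed something of this shape; treat the exact table and column names as illustrative, reconstructed from the description rather than copied from Spin's source:

```sql
-- A key-value store implemented on a relational database:
CREATE TABLE spin_key_value (
    store TEXT,   -- store name, e.g. "default"
    key   TEXT,   -- e.g. the question that was asked
    value BLOB    -- the cached answer, stored as raw bytes
);
```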
Another feature of our cloud is that you can attach your own domain: you can bring your own domain, or you can try to get a good one. "magic eight ball" is probably already taken, so let's do "magic eight build bow", I think that's a name we can have. So basically now we have that application at magic-eight-build-bow on fermyon.app; we're updating it, and we can check back a little later whether it's actually live. I'm actually not sure the UI even validated the link, which is an interesting feature... there you go, so now we have a magic eight ball at magic-eight-build-bow on fermyon.app. Cool, a nice feature of the cloud. Again, it's an open beta, so you can just come and sign up and try this out: you get apps, a ton of requests, a SQL database, some key value storage, and the LLM feature, all of that for free, so it's great to get started with. This next section of the tutorial shows how you can take that key value implementation I showed, where we used a local SQLite file, and use Redis instead. I think we can talk through this with the documentation we have up here, so let me scroll down a little. Basically, once we have Redis set up, the way we provide this configuration to Spin is through the spin.toml, where we can set up an environment variable, or a variable, in there. Actually, let me check if the code is over here, that might be better... no, this one is using Redis directly, that's not the one I want. Let me check over here. So, in the Spin SDK we've talked a lot about the key value interface and the SQLite interface, but there are other APIs in there as well, things like the Postgres support we built. I don't know how much we want to keep doing that, though, because part of the component model work that I talked about earlier on is also defining a set of standardized interfaces for talking to these kinds of things.
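Jumping back to the Redis part for a second: in broad strokes, the application keeps using the same key value API, and only the host's runtime configuration changes. A hedged sketch of what that configuration might look like (the file name, keys and URL here are illustrative; check the Spin documentation for the exact syntax):

```toml
# e.g. a runtime configuration file passed to `spin up`:
# the "default" store now resolves to Redis instead of the local SQLite file.
[key_value_store.default]
type = "redis"
url = "redis://localhost:6379"
```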
Let me go over here and see if I can find, for instance, WASI HTTP... no, that's not the one: wasi-cloud-core. So, under the WebAssembly repo, one of the standards that has been worked on is something called wasi-cloud-core, and this is something we're working on together with, primarily, Microsoft. The idea with wasi-cloud-core is that the whole framework I've been describing with Spin, where you write these function-based components and interact with various things like key value stores, messaging, blob stores, outbound HTTP and so on and so forth, can be described in an interface, and that interface is called wasi-cloud-core. It's a shared specification in which a set of other specifications are contained. In the WebAssembly component model specification we have this concept called WIT, the WebAssembly Interface Type definition: it's basically a language where you define the interfaces that your components either implement or require. What you can do with those WIT files is bring them together, and we call a combination of multiple WIT files a world. So you can think of wasi-cloud-core as a world which is basically the combination of all those interface definitions being available. The reason why I think this terminology works really, really well is if you think about it in terms of responsibilities. In the Spin model, as an analogy, the Spin host plays the role of being responsible for providing the wasi-cloud-core world: we can say, here's an implementation called Spin, and this implementation provides that particular world. Then, if you go and write an application as a developer, your responsibility is to define the world your application wants to be hosted within. And if you choose to take a dependency on things that are not part of this cloud-core world, you need to find another hosting provider, someone who provides a world that is a match for how you implemented things.
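To make the world idea concrete, here is a hypothetical WIT sketch; these are not the real wasi-cloud-core definitions, just the shape of the idea. Interfaces are defined once, and a world names the set of interfaces a host promises to provide and a component may import:

```wit
// Hypothetical package and interface names, for illustration only.
package example:cloud-core;

interface key-value {
  // A handle to an opened store.
  type store = u32;
  open: func(name: string) -> result<store, string>;
  get: func(s: store, key: string) -> result<list<u8>, string>;
  set: func(s: store, key: string, value: list<u8>) -> result<_, string>;
}

// The "world" a host like Spin could promise to provide: any component
// written against these imports can run on any host implementing them.
world cloud-core {
  import key-value;
}
```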
And at a slightly higher level, for me this is really about how a lot of people today have an interface between operations and developers, and it's either containers, probably the most predominant one: if you as a developer bring me a container, then as a platform provider, as a host, or as an operations department, I can run that. That's sort of the contract that exists today; either that, or you do something that's fairly proprietary to a cloud provider. The idea here is to lift that interface up a few levels and say it's not a POSIX thing, like a container would be, but, for instance, a cloud-core world. So you can build a platform, implement all of these things, and say: now I have this world available, and if you build your applications matching that world, I'm an option for hosting you. That's the idea with how these things are being defined. And if you look at the key value interface definition, it's pretty close to what we have implemented; Spin today is sort of spearheading this and getting some real-world experience with it. This is a Rust example as well: the bindgen thing at the top is basically saying that this code implements that world, the outbound key value one. And if you look down at the API, you can see there's an open-bucket function, similar to how we did open in our code; you can set, you can check whether a thing exists, and you can get something as well. So there would be a standardized API around this, and the idea is that if you wrote this particular code, you could take it and run it in a Spin host, or as a component in a Spin application, and hopefully, as others adopt this, you could potentially take the same code and run it at your
favorite cloud provider, or in a Kubernetes cluster, or wherever: the same code, against a fairly high-level set of interfaces. And going back to that responsibility model, which is what you see in Spin today: what the actual database resolving this underneath is, is really up to the internal implementation of the host provider. In Spin locally it's SQLite, if you run in Fermyon Cloud we have another database, and again, with Spin you can set it up to just use Redis as that implementation as well. Cool. All right, I totally detoured there, so I have no idea where we came from; I need to go back in the browser to see where we were. I think we were talking about Redis, right? We were. That was a long talk about storing key value data in an external database, but also a bit of a look into the future of the WebAssembly component model, and we're really excited about that. In the world of the WebAssembly System Interface there's a preview 2 definition that we hope will get done soon, because that will enable us to do a first implementation of this. We actually do have some example implementations, so if you lurk around in the branches of the Spin project you might see some of that already; give us a few months, maybe just one month, I don't know, I think we'll be there. Okay, are you guys up for some Kubernetes stuff? Because that's where we are in the workshop right now. There is an example in here, and if you're really interested in this and it's useful for you, I highly recommend that you go back and try out this part of the workshop on your own; you can always do that. It basically walks you through how to use a k3d cluster to take the magic 8 ball application that we built, ran locally, or ran in Fermyon Cloud, package it up, and run it in a Kubernetes cluster. Instead of trying to walk through all of it, the few things I want to call out here go through the runwasi
project. Let me just get it up here. So, under the containerd project, there is a repo called runwasi that facilitates running WebAssembly, or WASI, workloads managed by containerd, meaning you can run them in a Kubernetes cluster that uses containerd. They actually just did a new release; I don't remember if they wrote anything about it in here. What I was trying to find was the various shims that exist. You can see these: WasmEdge, Wasmtime and Wasmer are all WebAssembly runtimes, and those are runtimes that are agnostic to the programming model, which means that you can always build a WebAssembly module that complies with the WASI interface, that has a main function, and just call that from the runtime. In the case of Spin, you sort of wrap your component in a web server and so on, but that's all a developer-experience layer; you can always just build a plain WebAssembly module and run it directly through one of these runtimes, so they don't have the opinionated application model and experience that Spin does have. But those shims exist as part of runwasi, and Spin is in here too; I just can't find a good reference to all of that, maybe because it's actually in another repo, I'm not sure. Once you have this, the way you enable it in your Kubernetes cluster is that you tell Kubernetes there is now a new runtime class; in this case we will enable wasmtime-spin-v1, which means that runwasi now understands how to parse a spin.toml file, that application definition.
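As a sketch of that step (names vary between runwasi versions, so treat this as illustrative, not as the exact manifests from the workshop), registering and using the runtime class could look roughly like this:

```yaml
# Hypothetical manifests; the handler string must match how the runwasi
# Spin shim is registered in containerd's config on the nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v1
handler: spin
---
# Pods then opt in to the Wasm runtime by naming that class.
apiVersion: v1
kind: Pod
metadata:
  name: magic-8-ball
spec:
  runtimeClassName: wasmtime-spin-v1
  containers:
    - name: magic-8-ball
      image: registry.example.com/magic-8-ball:latest
```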
What's really, really interesting, and this is where comparisons with containers become interesting, is to actually see the Dockerfile, and hopefully we have that in here... let's go there... and we don't, but I can generate the Dockerfile, and I want to show that to you. Let me see: I need to do spin build, and then spin k8s scaffold. That's another plugin we created to help set this up; you can also handcraft this, but there's a k8s plugin for Spin. Don't mind the registry, and don't mind the failure; I want to show you the Dockerfile. If you're familiar with Dockerfiles, the idea is that you have all these file system layers, and the first thing you do in a Dockerfile is always say which file system layer you want to build on. If you remember, all the way back, around an hour and a half ago, when I talked about dependencies: if you want to run a Python Flask API or something like that in a Docker container, you would always have a file system layer underneath your application, which is that framework and the set of dependencies it requires. You need to build on top of that, which means that the container, eventually, when you deploy it, is a big file system you need to move around. But notice here, in line one, we build FROM scratch, and scratch is a convenience magic word in the world of Docker that means there are no file system layers underneath. Then the few things that we do are that we copy in a file, and we copy in the WebAssembly files. And again, if you remember the other thing I said an hour and a half ago, these things are really, really small, so we will probably create a container here which is something like 5 to 10 megabytes, and that's a fully contained application and everything it needs.
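The generated Dockerfile is tiny; here is a hypothetical reconstruction of its shape, with paths and file names that are illustrative rather than copied from the scaffold output:

```dockerfile
# No base image layers at all: "scratch" means an empty filesystem.
FROM scratch
# The application definition...
COPY ./spin.toml /spin.toml
# ...and the compiled Wasm component(s): no language runtime,
# no OS userland, just a few megabytes of WebAssembly.
COPY ./target/wasm32-wasi/release/magic_8_ball.wasm /magic_8_ball.wasm
```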
So, if we build that Dockerfile, and we have those runtimes enabled in our Kubernetes clusters, we can now deploy these as real containers inside those Kubernetes clusters. I think, for me, this is where all of those benefits of WebAssembly come together in a very direct comparison with what we do with containers today, and with how these things can be much simpler: where Spin talks a lot about the developer experience, this talks about how the operational experience of using server-side WebAssembly can be highly improved. Cool. Okay, the timing is actually really good; we have three minutes left. The last, sort of bonus, thing: if you want to see how the SQLite interface works in Spin, you can always look at this, because what this is, is a little lottery thing that was built. The lottery here is that we can put a bunch of contestants into a lottery, and then we can go and pick a winner. What I did before you all entered the room: there's an API in here where I can add entries, you can go look at the code later, it's really trivial, and I added each seat number into this lottery machine. And now, when we're going to pick a winner, you can check your seat number, and whoever is in the seat with that number is going to be the winner; Matt will give you a t-shirt. By the way, you can all get a t-shirt if you want, I think we have enough t-shirts, but you know, it's a fun small thing for the end. Let's curl this and see: seat 18. Is anyone in seat 18? That would be over there... there's no one in seat 18. Anyway, you can all go and get a t-shirt from Matt if you want, he has a bunch of Fermyon t-shirts here as well. Throw a shirt at seat 18? Yeah, throw a shirt at seat 18, we can put it there and someone will... oh, that was actually close. Anyway, I hope you enjoyed this, I hope you got some time to play around with the tutorial, or at least got some information from me walking through a lot of it. The workshops are out there, feel free to go and do them, and feel free to engage on our Discord channel for anything Spin. Thanks for showing up; we're at our booth, and we'll take questions and hopefully have some great conversations with you afterwards as well. Thanks, everyone!