I'm Ralph, I work for Microsoft. I do upstream stuff at Azure. I report through AKS, but I'm actually part of the upstream team, so I work partly on upstream stuff. My team is the team that handles all the building of the Kubernetes bits and the Docker bits and things like that. We give it to AKS and other services, and if they have a problem, we help fix it. And then we're the ones that take the patch and bring it back to the community and merge it upstream and so forth, so that the service team can keep doing its work. My team was the team that built Helm 3 originally, and the core of that team is now Fermyon. So for the developers and everybody at Fermyon, part of their knowledge base, especially with Kubernetes but with developer operational tools generally, is Helm. So they built Helm 3. I was the PM for that, which means I did nothing and they did everything. And Mikkel is a former colleague at Microsoft, but in reality, Mikkel is here along with Melissa representing Fermyon and the thing they're building, Spin. So I wanna sort of do a little back step. Mikkel, you're gonna talk a little bit about WebAssembly and why we're doing that. Okay, but just really quickly: WebAssembly, the reason we care about it is it's more like a cloud native binary, as opposed to the metaphor of a container, which was supposed to be like a cloud native application but really is a cloud native OS. There's an entire file system in there. In theory, you could sort of crunch it down into a very small image, but in reality, what we've discovered is most people have very, very limited time to get a feature up and running, and so the containers tend to be very, very large.
In particular, almost all of the code that we wanted to port into a container comes with all kinds of dependencies and libraries and things like this. You wanted to make your container very small, but in reality a native application, especially if it's something like Python or JavaScript, just comes with reams of text, and you would deploy those dependencies even if they never got invoked, right? And so it's really easy to have a container that, without even thinking about it, is 700 megabytes, 1.5 gigabytes, and that was not the idea originally, right? The idea was we were gonna optimize this. So the reason we like WebAssembly is because it's actually mostly a binary format, and so in a module you really are shipping just the executable and any opt-in requirements, opt-in dependencies. You can bring other stuff into the module, but you actually have to manually do that. The genius of WebAssembly as a binary format is that the specification is W3C, all the tools and runtimes are open source, and it's been tested with WebAssembly in browsers for five, six, seven years now, so there's a lot of resilience in the system. There are multiple runtimes, multiple tools, multiple compiler chains and so forth. So this moment that we're at technologically is where we've realized that WebAssembly, an immutable, cloud-native binary, is something we really want going forward. If you ship Log4j in a WebAssembly module, the exploit won't work. You can go ahead and keep shipping it. And that's a fundamentally different stance than containers. It doesn't mean containers are wrong, right? You're gonna use containers, mostly in Kubernetes, for as long as the industry exists. Containers are great and we've made huge progress on them, but there is a reason why we're embracing WebAssembly, not just in Kubernetes but for what it can do elsewhere. So for example, you're not running Kubernetes when you're in a browser, but you can use WebAssembly in a browser.
Was that me or you? That was me. Am I good now? Okay. By the way, we haven't started yet. We're not really gonna start for another 10 minutes or so, so whatever I'm saying now is not part of the show. This is just my quick orientation. Almost everywhere I go, people ask me some basic questions about why we're doing this, what it means, what it means for containers, what it means for Kubernetes and so forth. So I want to give the people who are here a brief chat about that kind of thing so that they're oriented. Obviously you'll hear this again throughout the presentation; people will ask these questions again, so you shouldn't be surprised by that. So I mentioned Kubernetes and WebAssembly and the reason we're using those things. Kubernetes is great for orchestrating processes. We've come to understand over the last, God knows how many years, since 2014 or whatever it was, that Kubernetes is a container system. In reality, Kubernetes can schedule any abstract process. There's no particular reason it needs to be containers. And it turns out WebAssembly serves this purpose very, very well too. So from a Kubernetes standpoint, WebAssembly is fantastic because it makes Kubernetes more valuable, more flexible. I'll show you a little demo later on about what you can do in Kubernetes with WebAssembly that you can't do with containers, and what that means. That's really cool. So part of what you're gonna do here is, if we get everything running, if we get your Spin WebAssembly up and running, you're happy with it, it works, the Docker Desktop integration works, the whole thing, then the idea is we get it running in Kubernetes.
I can show you some really fun stuff that is Kubernetes stuff, and you don't have to worry about whether it's WebAssembly or not. We'll do a little service mesh: drop Istio in there, drop Dapr in there. It turns out WebAssembly looks just like a container. Really great. But there's much more it can do, and you don't really need to use Kubernetes. Fermyon's got Fermyon Cloud, so if you're interested in building SaaS and cloud applications really, really quickly, there's no Kubernetes there. There's just push and pull, those wonderful things, right out of the box. You can do the same thing on-prem, or in a device, or in a network router, or all kinds of things. So WebAssembly has a lot of future potential in distributed computing, even beyond the Kubernetes stuff that I'm really interested in right away. It's a very expansive experience. So here, we'll try and get you up and running with Spin. And if we get you up and running with Spin, which I think we can do, then we'll try and get that Spin app up and running in Kubernetes. If we get you up and running in Kubernetes, we'll show you how it feels normal and natural from the Kubernetes standpoint. But from there, you should realize you can go all over the place. So, the other question I get, in fact a gentleman asked this just before we got in here, is why are we using Spin? Can we use other things? VMware's got Wasm Workers Server, Microsoft had a thing called Slight for a while, and there are any number of runtimes. The first answer is: because WebAssembly is a W3C spec, there are tons of open source runtimes. And that goes whether you're talking about runtimes or even application hosts (Spin has a runtime inside it, but it's really an application host, right? It's a developer experience and an execution experience and so forth).
All of these things are open source, and because the spec is open source, the barrier to entry is extremely low for the technology. That's fantastic. It means instead of having one runtime, you have like 37 to choose from. And if you're in a special case, it turns out the spec is public, so it's actually relatively easy to implement your own runtime if that's something you want to do. I wouldn't recommend it unless you really want to invest your time, but if you're in one of those situations, you know it. And it's a relatively easy runtime to implement. So that's one aspect about the runtimes. Relative to the app host that we're looking for, we're really looking for a brilliant serverless experience. We want that serverless experience to be standalone, and we also want it to be able to be hosted in Kubernetes very easily and be essentially transparent. VMware's Wasm Workers Server takes something like an Apache mod approach. It's very familiar for people who come from an Apache standpoint, right? Works great, no problem. Spin is much like what we had done earlier with Slight. And it turns out that Fermyon is just doing such a great job with the general approach, the open source philosophy and commitment. And with the interoperability of the component model, which Mikkel and the rest of us will talk about a little bit coming up, it means the work that you do in Spin, for example, can go to another runtime. But more importantly, the work you do on other runtimes can come to Spin. And so that kind of interoperability is gonna be amazing. So Spin is just one of the most solid experiences, from the developer standpoint and from the operational standpoint, that we have right now in open source. That's why we're using it, and that's why we partnered with Fermyon from the Microsoft point of view to do this workshop. But if you have questions about how other app hosts work relative to what we're doing here, raise your hand and ask the question, right? So the stuff we're doing is all open source.
Fermyon has a cloud service, Microsoft has Kubernetes services, so obviously we have a vested interest in making it easy for you if you like what we're showing you. But ultimately, all the technology here is open source and community driven. Great, if you have any questions, let me know. Otherwise we'll wait. Cool enough? Yeah, I think so. We'll probably get started in five minutes or so. There we go. But if you have questions, just raise your hand. Yeah, perfect. Once we get started on the tutorial, I think we will have more of our colleagues around, right, Kate? Kate is one of them. Look for purple stuff; that's usually a Fermyon thing. Did you bring any colleagues around? Maybe, I don't know. Anyways, raise your hand as we go along on the tutorial, and we'll have people who can come around to help you. So in five minutes or so we'll get started. Out of curiosity, how many people are working on Macs here today? I am not surprised. How many people are working on some sort of Linux box? Great, I love you people. How many people are working in WSL or Windows? I love you people slightly more, but slightly less at the same time; I'm conflicted. For the purposes of this demo, one of the things that's interesting: Windows with WSL should work, no problem, for the Spin portion of what we're doing. When you get to the Kubernetes steps, there is a magic trick that the Windows and WSL people may need. I have the keys to the magic kingdom. They're undocumented, which means they're cool. Spoken like a true Microsoft employee. That's true. That's true. But there are a lot of interesting dances between the versions of the tools that we're all using here. We're gonna be very transparent about that. I would expect that a lot of people will have problems with version incompatibility. Mikkel and Melissa will start by talking a lot about that, I'm sure. Don't worry about it.
We've sort of danced through most of the issues amongst all these toolchains, so hopefully we can get you up and running no matter where you are. So now that Ralph has left the room, I can maybe pick up a little bit. How many here have prior experience with WebAssembly? Three, four people? Cool, awesome. I hope that number will be 100% once we're done. Yeah, everyone gets to do some hands-on. We've done this workshop, or this tutorial format, before, and typically we try to do it in a way where we've provided all the information needed to do a self-paced tour of this. So if you wanna go ahead and do that, we'll give you the links to the repository that has all the information; feel free to do that. We'll also walk through the steps from the stage, so you can follow along there if you want to, and maybe add a little bit of extra information as we do that. So really, consider this something you can do at your own pace. And again, just raise your hands if you have questions as we go along. There will be people in the room who can help you. And I guess in one minute we'll get started, yeah? There are some prerequisites, yes. They should all be documented in the GitHub repository, so you'll get to see that. But if you really wanna get playing with Docker and everything, and you don't have Docker Desktop installed or an alternative to it, you could go ahead and do that now. Conference Wi-Fi will probably be somewhat under pressure. Yeah, so we will be using Docker, we will be using Spin, the WebAssembly framework that we built at Fermyon, we will be using Docker Desktop, and we will be using k3d. And I've no idea what Ralph is gonna do, but if you have an Azure account or you have an AKS cluster, that's probably a good place to start if you wanna follow along with his part of the tutorial later. It's not necessarily needed; you can do everything locally. Hi everyone, thanks for joining us.
I think we're just about to get started. Okay, welcome everyone. This is our tutorial on building cloud-native applications using WebAssembly and containers. I am Melissa Klein. I'm an open-source program manager at Fermyon. I'm joined by Mikkel, who you just heard from, and Ralph, who was speaking earlier. This is definitely a workshop, so again, if you have questions, raise your hands; we'd be happy to answer them and help you out if you're having problems during the workshop. Okay, so our objective here is to give you firsthand experience with server-side WebAssembly and Kubernetes. I'm going to start with about a 10-minute introduction to WebAssembly, and then we're gonna start the tutorial. We have a QR code up on the screen right now, which will take you to the GitHub repository with all the information you need to do the tutorial. You might wanna at least take a look at that and make sure you've got the requirements in the first section. Or if you know WebAssembly and you don't need the introduction, you can go ahead and get started. And again, please raise your hand if you're running into any problems. I'll give you all a minute to get that QR code. Oh, I should mention, actually, that QR code is also going to be on the slides, so any time you wanna jump over to the tutorial, you should see it in the bottom corner. So let's start with: what is WebAssembly? Officially, it's a specification of a binary instruction format designed as a portable compilation target. You've probably heard of it as a part of the browser, but now it's also available outside of the browser. Language support for WebAssembly is emerging and stabilizing. And you may also hear us refer to it as Wasm; that's just the shorthand we use for WebAssembly. So how do we work with WebAssembly? Well, first we write our code in the language of our choice. We're gonna compile that code to a Wasm target. That's going to include everything you need to run your code.
And then we're going to run it in a Wasm runtime. That Wasm runtime could be in the browser, or it could be outside of the browser. Also, since this is running within that Wasm runtime, it's going to be isolated. Okay, so WebAssembly language support. We have a listing here of all of RedMonk's top 20 languages, and we have a chart which shows you where your WebAssembly will be supported: the core browser support; WASI, the WebAssembly System Interface, which covers the outside-of-the-browser scenarios; and the Spin SDK. We have a QR code at the top of the slide so that you can go to the actual page and not just the screenshot, and we keep this table updated regularly. You can also see that we've got some good support in our top languages. All right, so we're looking at these two different types of runtimes. You're probably familiar with the JavaScript runtimes in the browser, the V8s, SpiderMonkeys, and Nitros, and that's where WebAssembly got its roots. But now we've moved on to these WASI runtimes, the WebAssembly System Interface runtimes. There's Wasmtime, WasmEdge, and some others out there, all of which can also run WebAssembly. And since WebAssembly started in the browser, it was built with some characteristics in mind, things that we needed to run in the browser. We wanted small size, so we're not transporting large files back and forth over the web. We wanted quick startup times, so our users weren't waiting to start the app. Portability, because we all know how many browsers and operating systems the users are running. And security, because everything on the web needs to be secure. But these four characteristics also translate well to other applications. And because a Wasm module contains everything that we need in one place, it also lends itself to a great developer and operator experience. On the development side, we can build everything we need into one Wasm module and move that to production much more easily.
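That compile-then-run flow with a standalone runtime can be sketched like this for Rust with the Wasmtime CLI (a sketch, assuming the Rust toolchain and Wasmtime are installed; the crate and file names are illustrative):

```shell
# Add the WASI compilation target to the Rust toolchain.
rustup target add wasm32-wasi

# Compile the crate to a .wasm binary instead of a native executable.
cargo build --release --target wasm32-wasi

# Run the resulting module in the Wasmtime runtime.
wasmtime run target/wasm32-wasi/release/my_app.wasm
```

The same .wasm file runs unchanged on any OS or CPU architecture that has a WASI runtime, which is the portability point being made here.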
The operators also appreciate that: not having to keep a bunch of dependencies up to date, and those types of things. The quick start times and the size of the workload also lend themselves to IoT applications, where you want things to be small and quick. Plug-ins are also a great application for this technology, for things like user-defined functions for databases. And then also the cloud, where we can do functions-as-a-service frameworks and move things from the development experience onto the cloud without all of those headaches of trying to switch from a developer environment to production. So, running WebAssembly outside of the browser, how do we do it? We can use a runtime such as Wasmtime: we build our code using the Wasm target, and then we run it using Wasmtime. We can use a framework such as Spin. Spin is going to give you a great developer experience and some extra features built in, and then you can run that Wasm directly in Spin. And then we can also run that Wasm in Kubernetes using runwasi. Those last two options are what we're going to do in our tutorial today. All right, and here's a quick look at some of the benefits you get with using Wasm. On one side, you see a Dockerfile for building a Python Flask application, where you need to copy requirements and build those requirements and do a lot of extra steps to put together that Docker container. And on the other side, we have a Dockerfile for a Spin application. We're starting from scratch; it doesn't need anything else. We're going to copy our spin.toml file and our Wasm into that image, and that's it. And so what we end up with is a much smaller Docker container, as you can see from the file sizes there. Okay, all right, so it's time to start the tutorial. Mikkel's going to come up and walk you through it.
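The Spin Dockerfile on the slide follows this pattern (a minimal sketch; `spin.toml` and `app.wasm` stand in for whatever names your build actually produces):

```dockerfile
# Scratch-based image for a Spin app: no OS, no libraries,
# just the application manifest and the compiled Wasm binary.
FROM scratch
COPY spin.toml /spin.toml
COPY app.wasm /app.wasm
```

Because there is no base image and no language runtime inside, the image size is essentially the size of the .wasm file plus the manifest, which is why it compares so favorably to the Flask image.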
Again, if you haven't done it yet, the GitHub repo link is in the bottom corner of that page, and I see people still wandering in. So if you're ready to jump in, grab that QR code and join us in the repo. Oh, I'm just going to disappear. That does not look good. Can I try the other adapter? And this is mirroring, right? This is really weird; I can't see what's on the big screen, but yeah. Yeah, mirroring is good. Thank you. Thanks, everyone. Well, thanks, Melissa, and thanks for the help with the video here. My name is Mikkel. I also work at Fermyon, where I lead our product and developer relations teams. And I'm going to walk you through the first part of this workshop. Did anyone get to the repo yet? The small QR code in the right-hand corner of the screen will take you to the GitHub repository where you have all the information you need for this tutorial. So we'll start out building some WebAssembly applications using Spin. We will run them in containers, we'll deploy them to Kubernetes, and we'll end up doing some Azure Kubernetes Service, which is where Ralph from Microsoft will take over and do that part of the tutorial for you. Okay, so we're going to do this WebAssembly for Kubernetes workshop here. We have these various modules that we'll walk through, and we'll start heading into the setup piece. Basically, we need to get our environment up and running and ready for working with Spin. You can go and download Spin; there are links here on how to do that. I can just walk through these, so whatever pain you may live through, I will live through the same pain as you. I think that's only fair. We have brew, and we have download scripts as well for the various operating systems in here. I tend to always end up using the curl, so let me do that. So basically, we'll go ahead and download the latest version of Spin.
We released version 2 of Spin very late last week, which has a lot of great new features in it. If some of you went to Cloud Native Wasm Day yesterday, you've probably heard my colleague Kate, who's down here as well, talk about support for the component model and all of that. We unfortunately won't have time to get to that in this tutorial today, but that's something you can go and dive deeper into with Spin 2.0 if you want to at a later point in time. Okay, so our installation script installed the spin binary. If you want, you can just move it into a folder already in your path, or add it to your path, so that it will be easy to execute as we go along. Let's see if I can get my password right... which means that I now have access to Spin, and I can see that I have version 2.0 installed. Spin is really a framework around this whole developer experience, which means that there are concepts such as application templates and various plugins, and you can see some were installed with the install script. If we go back to the instructions on GitHub, there's an additional set of templates that we want to install to be able to go through this workshop. So first of all, we have a few templates from the tutorial. Basically, using the templates install command in Spin, we can pull templates from a GitHub repository onto our local machine. So we can see we now have an extra set of templates installed in here. As we move through the workshop, in order to be able to complete some of the steps further along, make sure you use one of the templates with the V1 at the beginning. Otherwise, you'll probably have problems when we get into the Kubernetes and container world of things.
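The install-and-templates sequence just described looks roughly like this (a sketch: the install script URL is the one Fermyon documents, but the workshop repository URL below is a placeholder; use the one from the tutorial's GitHub page):

```shell
# Download the latest Spin release and put the binary on the PATH.
curl -fsSL https://developer.fermyon.com/downloads/install.sh | bash
sudo mv ./spin /usr/local/bin/spin
spin --version

# Pull the workshop's extra templates from its GitHub repository
# (replace the URL with the actual tutorial repo from the QR code).
spin templates install --git https://github.com/example/wasm-workshop
spin templates list
```

After the `templates install` step, the V1-prefixed templates mentioned above should show up in `spin templates list`.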
But this also gives you an idea that this concept of templates in Spin is something that is very easy for you to work with yourself. You can basically publish templates on GitHub, or on a file server somewhere that's accessible, and start sharing templates for the various types of applications you want to build with Spin. Okay, there's a bunch of plugins we also need, and just to be sure you get the latest ones, we can run spin plugins update. It's actually both spin plugin and spin plugins, I think, because I guess this is one of those cases where we ended up discussing so much whether it should be plugin or plugins that we just did both. We need a js2wasm plugin, and we need a py2wasm plugin. A quick note about those last two. Because of all the various programming languages we support when we build WebAssembly, and because the ability to compile a lot of these languages to WebAssembly is still emerging, specifically for JavaScript and Python we built some tooling that just makes that really easy for you. So you can use Python, and you can use JavaScript and TypeScript, as you develop with Spin, and with these plugins we're able to create WebAssembly components that we can run further along.
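Those plugin steps, as a sketch (the js2wasm and py2wasm plugin names are the ones the tutorial uses; `--yes` just skips the confirmation prompt):

```shell
# Refresh the local plugin catalog so we install the latest versions.
spin plugins update

# Install the JavaScript/TypeScript and Python build plugins.
spin plugins install js2wasm --yes
spin plugins install py2wasm --yes

# Confirm what is installed.
spin plugins list
```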
I've chosen to go ahead with TypeScript here, and all I need in order to use TypeScript is to have npm installed, so I can pull some dependencies in. So that's the one I will be using. Kastin, will you be able to? Yeah, sure, good. Okay, yeah, so the question is about the js2wasm and py2wasm plugins. Basically, what they do is enable us to take the JavaScript code or the Python code that you write and compile it into the WebAssembly binary format that the host will understand. There are two functions to that. One is how you get from JavaScript or Python to a WebAssembly binary in the first place. For JavaScript, we're using something called Javy, a project that Shopify did, which helps you compile to WebAssembly. That's one part of it. The other part is that we also have SDKs within the framework, so to connect those SDKs into the host runtime, there are some bindings we do in there. So basically, those plugins help you get from those programming languages to the WebAssembly format that we need to run our Spin applications. Sorry? Yes? Yeah, I think as we go a little bit further along in the tutorial, some of these things will connect and you can see how this all works together. But basically, these are tools that are part of the build chain for those programming languages to get to the WebAssembly binaries that we will need. Sorry, could you use the microphone? I'm having a hard time hearing what you're saying. Could you please clarify exactly which platform architectures and so on are required to run Spin? Because I do have multiple platforms. On the older system, it requires really specific C library versions, and on ARM I don't think it's installable. So I would like to have a clue about the architectures and what is supported. So I think there are various pieces along the whole toolchain where you can fall into pitfalls.
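For reference, the HTTP handler the TypeScript template generates has roughly this shape (a sketch: the real template imports the `HttpRequest` and `HttpResponse` types from the `@fermyon/spin-sdk` package; here minimal stand-ins are inlined so the snippet is self-contained):

```typescript
// Minimal stand-ins for the Spin SDK's request/response types (assumption:
// the real types carry more fields, e.g. method, headers, and body).
interface HttpRequest {
  uri: string;
}
interface HttpResponse {
  status: number;
  headers?: Record<string, string>;
  body?: string;
}

// The exported handler that Spin invokes once per incoming HTTP request;
// the module is loaded, this runs, and the module is unloaded again.
export async function handleRequest(request: HttpRequest): Promise<HttpResponse> {
  return {
    status: 200,
    headers: { "content-type": "text/plain" },
    body: `Hello from Wasm! You requested ${request.uri}`,
  };
}
```

The js2wasm plugin is what turns this TypeScript, via Javy, into the .wasm binary that Spin loads.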
So, depending on what programming language you choose, depending on the WebAssembly runtime you choose, and so on and so forth. The toolchain that we're using here with Spin: Spin uses Wasmtime as the actual WebAssembly runtime, so the support there is linked to Wasmtime's support, and Wasmtime is built into the Spin host that we run. From a language point of view, Rust is definitely the programming language with the best support across the board, in terms of being able to use other libraries and being able to compile without any problems. With JavaScript and Python, you will more quickly run into issues in terms of which libraries you can use. So there's a varying degree of support across those languages. I think Melissa referred to the language support matrix that we maintain on our website; that will give you an idea of the state of the support across all these programming languages. Just a quick clarification on the toolchains that we're using here: does Spin run on ARM chips? Yeah, Spin does run on ARM; we do have a build for ARM as well. So if you have problems as you go along with the tutorial, if the curl doesn't work on ARM, do you wanna have a look at what he's doing? Yeah, if you get stuck, we're gonna solve the problem. It'll work on ARM. Whether we have the curl or the install trick to make it work on his particular platform is one thing, but it works on ARM, it works on AMD64, it works on Windows, it works on Linux, and a whole bunch of other places. Yes. So those are the toolchains that we're using here today. And if you run into any issues with your setup along the way, just raise your hand. Justin is here, Kate is here, Ralph and Melissa are here to help you out, and they will be happy to take a look. And if one of you has a matrix of things that doesn't work, like maybe this gentleman here, that's what we wanna know, because we need to figure out the trick.
In the GitHub repository, there actually is a development container that is also set up. You can do the Spin part of the exercise using that development container; it should have all the required pieces installed. I'm not sure about all the language support in there, but that's also an option for you if you wanna go that route. Okay, cool. That was basically just getting an introduction to Spin and getting it installed. And I just wanna put a few words to what Spin is and what it solves here. So we talk about Spin as this developer tool for building serverless WebAssembly apps. And the reason why we talk about serverless is because that's one of these scenarios that lines up really, really well with WebAssembly. Think about the four big benefits that Melissa called out earlier: we have a very small binary size, we have portability across processor architectures and operating systems, we have very fast startup times, and we have a very secure way of running WebAssembly, in that modules are contained inside sandbox environments. There's a concept of capability-based security, which means that a WebAssembly module is only able to escape its sandbox if it gets the permissions to do so at runtime. So for instance, if you run a Spin application and you wanna serve some files, you need, as part of the Spin application, to define what part of the file system that particular WebAssembly module will have access to. So all of these things lend themselves really, really well to these serverless types of scenarios where we build small pieces of code or functions. It's very similar to what Lambda does or Azure Functions do. They're event-driven, and the operations model, or the runtime model, is that the individual WebAssembly module is loaded on each request or each event that needs to be handled. Now, that may sound a little bit suboptimal, but we can do it because WebAssembly starts so quickly.
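That file-access opt-in lives in the application's manifest. A minimal sketch of a spin.toml, with illustrative names and following the v1 manifest layout (an assumption for this tutorial's V1 templates):

```toml
# Sketch of a Spin application manifest. The "files" list on the
# component is the capability grant: only the paths listed here are
# visible to the module inside its sandbox.
spin_manifest_version = "1"
name = "my-site"
version = "0.1.0"
trigger = { type = "http", base = "/" }

[[component]]
id = "static"
source = "app.wasm"
files = ["static/**"]
[component.trigger]
route = "/..."
```

With no `files` entry, the module cannot read the file system at all; that is the capability-based security model in practice.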
So basically, if you go to some of the websites that host our documentation, those websites are hosted in Spin. They're being served by WebAssembly, which means that every time I refresh, or someone sends a request to these pages, the WebAssembly that loads the static files needed for my frontend is loaded by the server, the request is handled and answered with all the content, and the WebAssembly module is unloaded again. So you can see how, in a world where you have functions that are infrequently used, or where you have a usage pattern like this, things like scaling become much easier, because you know the amount of resources, or the number of requests, that a single instance can handle. And you never have to think about something like scaling down to zero, because things are not running unless they're actually handling requests. So those are cool things we can do with WebAssembly. And all of this we've leveraged in Spin, and we've built this whole framework for you to build full-stack applications. Other things that we built into this framework are easy access to key-value storage, to SQL databases, even to large language models and being able to do inferencing. And you can do all of that in your local developer experience, and you can do all of that once you run either inside the cloud offering that we have, or, if you get into Kubernetes and other places, there are ways you can basically plug in a Redis or a PostgreSQL on the back end of these interfaces. So your workloads can move around, and even though you use a generic interface from within your code, you still get access to key-value stores and those things, and you're able to decide how you wanna implement that in your setup. Okay, so that was just a few things about Spin and the whole framework.
There's a lot more to dive into there, and we're just going to do a very simple hello world in this part of the tutorial. At the end — I think I just got in here and added a link — we have this thing we call the Spin Up Hub. The Spin Up Hub is basically a collection of application samples that you can go and take a look at. It's also something you can contribute to, so if you build a very cool application during this tutorial, please go and submit it. You can see some of the samples that are in there, and very recently we added a button so you can get them up and running in the cloud if you want to. So there's some good inspiration there on what you can do beyond what we're doing in this tutorial right now. But we will mainly be focusing on just the bare bones of getting an application up and running, and then getting into the scenario of deploying it into containers and getting it into Kubernetes. Okay, just a quick show of hands: how many in here had issues getting Spin installed at this point? A few? Okay, if you want some help, just raise your hand and we'll have people come through. And now I need to figure out how many were successful — and that's not necessarily everyone who didn't raise their hand. So how many were successful? Okay, that's a good result. I'm a product manager; I'm very satisfied with that result. "Hey, Mikkel, really quickly, ask people how many are already running hello world." "Should I ask, or did you just ask?" "Yeah, how many people are running hello world already?" Okay, cool. Okay, great. And if you're at a different pace than I am, just follow along with the tutorial — everything is in the GitHub repository — and keep raising your hand if you have issues as you go along, and I'll guide those who want to follow along from the stage.
Okay, so we want to create this Spin application, and the way we do this in Spin, we have these three steps: it's spin new to create a new application, spin build to build the application, and spin up to actually run the application. So we always talk about the spin new, spin build, spin up three-step motion. Anyway, we can start by doing a spin new, and what happens when we do a spin new is that you see all the templates that are available. This is a little bit overwhelming with all the languages, but we also want to call attention to some of the languages you can actually play around with when you build Spin applications. What's very important here, to be able to successfully move these things into Kubernetes as we go along, is to choose one of the v1 templates at the bottom. If you don't see them when you do spin new, please go back to the previous step; there's a command to get those installed. I will choose the TypeScript template here. I'm just going to call my application "my-application". I can add a description if I want to; I'm not going to do that right now. And then the last question, about the HTTP base: a base is basically a path. When we build these applications, we can have many different WebAssembly modules, or WebAssembly binaries, serving different parts of the path structure in the application. So when we run the application, it listens on a particular socket, and then we can start playing around: if we go to /rust, we'll hit a Rust-based WebAssembly; if we go to /python, we'll hit a Python-based WebAssembly. And this is a very good image of how the polyglot nature of what we do when we develop all comes together in this unified format at execution time. Basically, all that Spin sees is a bunch of WebAssembly modules. Spin doesn't know where these came from, and Spin really doesn't care.
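The three-step motion looks roughly like this on the command line — a sketch assuming the Spin CLI and the v1 HTTP templates are installed; the template id `http-ts` and the application name are assumptions for illustration:

```shell
# Scaffold a new app from a template. Run without arguments,
# spin new lists every installed template interactively.
spin new http-ts my-application

cd my-application

# Compile the source to a .wasm module, using whatever build
# command the template declared in spin.toml.
spin build

# Serve the app locally (listens on 127.0.0.1:3000 by default).
spin up
```

The same three commands work regardless of which language template you picked, which is the point of the "spin new, spin build, spin up" motion.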
So we've got a new directory now, named after the application. If you very quickly take a look at what's in this directory — this will be a little bit different if you use a different programming language, mainly because of how those language templates are set up and how the tooling works — all of them will, however, have a spin.toml file, and that is the manifest that describes the application we're building. In this case, you can see I have an index.ts down here, which is the source code. I think we just want to take a quick look at the TypeScript code so you get an idea of how this whole programming model works. So this is the very bare-bones hello world of one of these Spin components. You can see we have an SDK, and there are some modules we're importing from that SDK, and the contract between the WebAssembly that we write and the runtime is that you need to implement a handle-request function. Again, this is specific to TypeScript; if we were to do this in other languages, you'd find the same concepts, but they might differ in the signature. For instance, when you do Rust, the way we identify the entry function in your Rust module is through a macro annotation. So this might be a little bit different from language to language, but overall the concept is the same. We take an HTTP request and we return a promise of an HTTP response, and in this case we're basically just saying "Hello, Fermyon". The other part of this — well, let's actually just do this right now. So I can do spin build, and I'll show you in a little bit what that actually means. And I just failed, because I need to do npm install first — again, some of these things tie into the language we use. I can then do a spin build, and we're now generating a WebAssembly module, and we can do a spin up. And we can go and curl localhost:3000, and we've built a WebAssembly application. Well, thank you.
Okay, that was easy. Let's take a little more of a look at what's going on in that TOML file. So I said the TOML file is a manifest, or declaration, of the application. What I'm showing you here, and what we're using in this workshop, is version one of the manifest. With the v2 release of Spin that we did last week, there's now a version two of the manifest. However, those changes haven't been — I don't know if this is upstream or downstream; I guess that's downstream, whatever — streamed into the whole Kubernetes experience yet. So we need to stay on the v1 train for now, which is why we're using it here. The concepts are the same. There is some metadata in this manifest to begin with, and then you can see that there is an array of components down here, where I have my first component. It has an ID derived from the application name, and you can see the WebAssembly file — the .wasm file — that it's serving, and you can see the route that it's serving on. So again, this is the point: once we get to the Spin application definition, and once we run these things, nothing says that this is TypeScript, because it's not TypeScript anymore — it's the WebAssembly binary format. The only thing Spin needs to understand is how to run WebAssembly. We do, however, get a hint down here, because there's a tool chain we can set up: what this component build section does is that when I run spin build, Spin reads from the manifest file which build commands to run to actually compile this WebAssembly. Which means that if I added a Python component to this, there would be a different build command in there, and so on and so forth. So this is how we build Spin applications. I think, if we go through the actual tutorial, what we want to try to do is go down and change this to "KubeCon". And now it becomes a little bit more targeted to what we're doing here.
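To make the pieces being pointed at concrete, here is a sketch of what a v1 `spin.toml` for this TypeScript hello world might look like. This follows the v1 manifest layout as I understand it; exact keys and the generated source path may differ slightly between Spin releases:

```toml
# Sketch of a v1 spin.toml: metadata up top, then the component array.
spin_manifest_version = "1"
name = "my-application"
version = "0.1.0"
trigger = { type = "http", base = "/" }

[[component]]
# ID derived from the application name; "source" points at the
# compiled .wasm — nothing here says this started life as TypeScript.
id = "my-application"
source = "target/my-application.wasm"
[component.trigger]
route = "/..."
[component.build]
# spin build shells out to this; a Python or Rust component
# would declare a different command here.
command = "npm run build"
```

The `[component.build]` section is the "hint" about the tool chain: it is consumed only by `spin build` on the developer's machine, never at runtime.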
I can go back, and I can do a spin build. And maybe this is a good chance to talk about what this JavaScript is doing. You can see we run a build script from the package.json file as part of running npm run build, and there's a bunch of things happening in there. First of all, we use webpack to create a single JavaScript module as part of this journey. Once we have that, we use the JS-to-Wasm plugin that we talked about a little bit earlier, and basically what it does is take this well-known single module file that we have and turn it into the WebAssembly that we want to run. Okay, so there are a few tools involved in that tool chain. And again, there are so many details, and they're so specific to the programming language, that we're not going to spend too much time on that right now — but I urge you to go back and read some of the documentation we have around this, and you can figure out for your favorite programming language how this actually works and how we get to those WebAssembly components. I'm just going to see if there's anything I skipped in the tutorial right here. If there are any general questions, please raise your hand or feel free to go to a microphone. Yeah — so the question is: if we go from a Python source file to the WebAssembly binary, do we only need the WebAssembly? And yes, that is what we do. I think Melissa showed a slide comparing containers as well, and we'll actually see this as we move into the next section. The WebAssembly is self-contained, right? There are no external dependencies — whatever dependencies we may have had are carried within the binary itself. Which means that if you compare this with building a container, where you copy in dependencies, you have the framework, you have all these layers that add up to a bigger unit — the WebAssembly binaries are very small. And these are hello worlds.
They are below one megabyte, even as a container image. So you can get an idea of what that means when we need to operate hundreds or thousands of these in big clusters and other environments: because they are small units, and because they start so fast, we can build totally different experiences than what we have today with containers. I'm not saying we have all of that yet, but we are trying really, really hard, and we're well on our way. I just want to call out the last section here, and this is very specific to the version two we released last week. Once you've built a Spin application — the Spin application that I have now — there are actually a lot of different places I can go and run it. And this is in part because there is a specification for how such an HTTP function, as we could almost call it, works. So within this ecosystem, the WebAssembly System Interface, or WASI, we've started to build out interface definitions. One of them is called wasi-http. What wasi-http defines, as an interface specification, is: if you have a WebAssembly module that takes an HTTP request and sends back an HTTP response, what does that interface look like? And there are ways in WebAssembly to describe this so that it is understood across programming languages. This is called the WebAssembly Interface Type definition, or WIT. So basically, the same way that we, at an operations level, can consider these interoperable, or the same format — even when we develop, we can actually get to a place where we have common interface definitions between WebAssembly modules. And that is what the component model builds upon. There's a whole side track on that, which means I can have, you know, a WebAssembly written in TypeScript, but I import a library that was written in Rust, and so on and so forth. Those are the types of scenarios we can get to with the component model.
But what you can do today with the things you build with Spin: you can run them in the Fermyon Cloud, which is a hosted offering we have. You can run them wherever the Spin CLI runs. You can run them wherever Docker containers run. You can run them directly with Wasmtime, which is a WebAssembly runtime. The NGINX Unit web server also supports these components. And wasmCloud, which is a CNCF project — that is also the project that Cosmonic has built their offering on. So you can start to see how, the same way containers are a runnable unit across a lot of platforms and offerings, these WebAssembly modules — these wasi-http WebAssemblies — are also a common runnable unit across these platforms. This is very exciting, getting to this point with the release we did last week, and it just opens up a ton of opportunities. So the last thing, to give you an idea of how easy this experience can be end to end: we're going to try to deploy this into the Fermyon Cloud, which, again, is the offering that Fermyon provides. We have a plugin for Spin called the cloud plugin. So if I go ahead and log in — I just need an authentication thingy — I can log into the cloud. I've already signed up for the cloud; you can do so if you want to try this as well. All it requires is a GitHub account, and you get a free plan in the Fermyon Cloud — I think you can run five applications in the cloud if you want to try that. But now I am authenticated, and I can go ahead and deploy. And I hope that you all appreciate — I do appreciate — how easy this is, how simple this is, and how quickly this runs. I don't even know if I have time to get a sip of water. I didn't, damn it. Oh, with the deployment? Oh, okay. So the request was to slow down a little bit with the pace of the tutorial. Okay, and now I have this running inside the cloud, and you can see I'm running a WebAssembly module on a publicly available IP, and that's it.
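The end-to-end deploy being demonstrated is roughly this sequence — a sketch assuming the Spin CLI with its cloud plugin; the plugin name and subcommands are as I recall them and may differ between Spin versions:

```shell
# Install the cloud plugin (it ships separately from the Spin CLI).
spin plugins install cloud

# One-time, browser-based login with a GitHub account.
spin cloud login

# Package the built app, push it to Fermyon Cloud, and print
# the public URL it is now served from.
spin deploy
```

The free plan mentioned above means this whole flow can be tried with nothing more than a GitHub account.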
So these first two sections really got us from — you can see part of that here — building a WebAssembly application using Spin. We got the tooling installed, we created a local hello world, and, if you wanted to, you were able to deploy that into the Fermyon Cloud as well. That concludes the first piece. We don't really want to dive much more into what you can do with applications in Spin right now. I want to call out again that the Spin Up Hub has a great set of examples, so please go there, and you can check out some of the things you can do around persistent storage and using that for your applications. If you want to do AI inferencing and other things, that's also possible. Ralph, you were waving? "Am I on? I'm on. Yeah, so I've had my first person say, hey, docker build isn't working — a bad platform string. Has anybody else discovered that? That's one, two, three, four, five, six, seven or so. Okay, we have a couple of workarounds for that, very quickly. Mikkel, do you want to say anything about that, or should I? We'll go ahead and reach those people." Well, I can say mine. With the latest version, 4.25, of Docker Desktop, I've noticed one thing — I haven't had time to figure out if it's a real issue or not. In Docker Desktop — we're jumping a little bit into the next tutorial, but that's okay — we need two experimental features in Docker to support this. First of all, we need containerd, because that's how we get to run WebAssembly — everything is an Electron app today, I guess. So you need to enable containerd, and you need to enable Wasm. The trick, though, is that I've seen that if you enable both and then do apply-and-restart, it actually doesn't enable the Wasm feature. So the order in which you want to do this is: enable containerd, click apply and restart, then enable Wasm, and click apply and restart — and you'll see a small installation progress bar down here.
I spent a little time on this issue the other day; that might be the problem for some of you. There's another one, but we'll get to that in a sec. You've got a question? Yeah. Go ahead. You did both of them at once and it worked — okay. So, of the people who are having docker build issues, how many are not using Docker Desktop — they're using something else? Aha, okay. Now, you're off the beaten path, which means you're geniuses, because of course you're off the beaten path. There is a difference in the feature set between what is in Docker Desktop and what is in upstream Docker. You can make it work if you force the most recent upstream Docker build, because the critical thing is the most recent version of Docker buildx upstream. For Docker Desktop, clicking the buttons makes it all work. But if you're using some form of upstream Docker, you have to go and find the most recent build of buildx and install that; it will respect the platform choice. And if you're further off the beaten path — somebody was using Rancher Desktop over there, and so forth — those kinds of things we may have to dance with a little bit, just to let you know. So now that we have that WebAssembly artifact, we want to get it into a container. Now, you can start asking the question: why do we want to go there, given what I just said about containers and all these things? Well, first of all — I think Ralph asked the question just before we got started — how many are working with Kubernetes today? And I think it's no secret to anyone that almost anywhere you go today, you'll be able to get a Kubernetes infrastructure to run your application. So what we have been working on with the container integration here is a way for WebAssembly applications to be operated in those environments we have out there today, so that we can have WebAssembly and containers living side by side in harmony and working together.
So that's basically the scenario we are solving for here: all the container-enabled platforms out there can make sure they also support WebAssembly, so it can run together with Docker containers. That's why we're going to do this. Ralph is going to show you some cool demos with the full Kubernetes thingy once we get a little bit further, and I will just concentrate on the very basics of taking my WebAssembly application and getting it into a container. So the first thing we need here is to create a Dockerfile — I'm just going to use touch to create that file — and then we need to define how we're going to build this container. Some of you are familiar with how Docker containers work and how these Docker builds, or Dockerfiles, work, so all of this probably seems fairly straightforward. I do want to call out again that the container base is scratch, which means there are no file system layers that we depend on, because we have no dependencies that we need to bring in. All we need in this container is the Spin manifest and the WebAssembly module. Obviously, had I had more WebAssembly modules as part of my application, I'd need all of them in the container. If I were using something other than Spin, I would only need the WebAssembly — you can run WebAssembly directly with Wasmtime, so that could be a main module, which basically has a main function entry point, and all I would have to do is copy the WebAssembly into the container from scratch. I'm using a two-step build here because, as I understand it, the COPY command creates individual layers. So instead of having two layers with one file each, I have one layer with both files, because I copy them all in at a later stage in the build script. Okay, and now we get to the build command. Besides enabling the features inside Docker, there are a few other things to call out in the docker build command.
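A sketch of the Dockerfile being described — two stages from scratch, with the second copying everything in as a single layer. The file names are from the example app; whether an entrypoint is needed depends on the shim version, so it's omitted here:

```dockerfile
# Stage 1 exists only so both files land in one layer in the
# final image (each COPY in the final stage would otherwise
# create its own layer).
FROM scratch AS build
COPY ./spin.toml ./spin.toml
COPY ./target/my-application.wasm ./target/my-application.wasm

# Final image: no base OS, no shared libraries — just the Spin
# manifest and the WebAssembly module.
FROM scratch
COPY --from=build . .
```

Because the base is `scratch`, the resulting image is essentially the size of the .wasm file plus the manifest, which is what makes the sub-megabyte images shown later possible.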
First of all, the platform: this is a wasi/wasm container that we're building. This goes into our container manifest — the OCI manifest — so that whenever the runtime needs to pull it in, it knows eventually which runtime to use underneath. There's a feature in Docker that adds provenance data to the manifest as well. Some of the tools — Docker Desktop — will actually work without setting provenance to false, but some of the other tools we'll use in the tutorial do not, so we need to disable provenance, which creates a different manifest layout. I'm going to tag this using my GitHub account. You can use a local tag name if you want to, and you don't have to push this remotely — it's really up to you whether you even push it to GitHub or not; you can decide that at a later point. And we now have a Docker image. So — whoops, I think it's docker image ls — there you have it: an 838-kilobyte WebAssembly application image, right next to a 90-megabyte k3d proxy. So again, the point about these workloads being really, really small comes into play here as well. And I can go and run this, so let me do a docker run command. We can dive a little into how this all works later, but the Spin application inside the container is exposed on port 80, so I'm mapping that to 3000 locally, because it's muscle memory for me to get to a Spin application on port 3000. The two other things to note in the run command: we call out the platform, which is equal to the platform of the image we created, and we call out the runtime — in this case it's io.containerd.spin.v1, which means Docker knows that this is the runtime we're using. So we should be able to do localhost:3000, and if everything goes well — "Hello, KubeCon" — we now have the same WebAssembly application running inside a Docker container, doing the same thing it was doing earlier.
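Put together, the build-and-run commands described here look roughly like this. The image tag uses a placeholder GitHub account, and the flags are as I recall them from the Docker+Wasm integration; treat this as a sketch rather than the canonical invocation:

```shell
# Build for the wasi/wasm platform. Provenance is disabled
# because some downstream tools reject the manifest layout
# that provenance metadata produces.
docker buildx build \
  --platform wasi/wasm \
  --provenance=false \
  -t ghcr.io/<your-github-user>/my-application .

# Run it: the platform flag matches the image we built, and the
# runtime flag tells Docker to hand the workload to the Spin
# containerd shim instead of runc. The app listens on port 80
# inside the container; map it to 3000 locally.
docker run -d -p 3000:80 \
  --platform wasi/wasm \
  --runtime io.containerd.spin.v1 \
  ghcr.io/<your-github-user>/my-application

curl localhost:3000
```

Both experimental Docker Desktop features (containerd image store and Wasm) must be enabled first, as discussed above, or the `--runtime` flag will have nothing to dispatch to.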
This runtime is provided through — I think we have a slide that shows a little bit of that. Let's just disregard the Kubernetes thing on top for now, but basically, inside containerd there are these various shims that you can enable, which are implementations of the runtimes that actually run the workload handed over to containerd. runc is the one used for containers normally, and the DeisLabs team at Microsoft — which is the team Ralph also represents — has built a project called runwasi. What runwasi does is implement various shims for various WebAssembly runtimes, and that's basically how we can get this into Kubernetes and into container workloads. You can go and check out the runwasi project and dive into how all that works. We've added some information about runwasi in the repository as well, so there are great references to go and look at. Okay, that was the getting-this-into-a-container part — only a few steps, but nevertheless important. Sure — so the question is whether it's possible to put an actual executable in there. It is, but the executable would be a main WebAssembly module, so it would still be a .wasm file. So I could build something like — let me just find something; I am frantically searching my machine for something I can show you. That's not the one. So this is an example of a Rust actual executable — I mean, it's just a main function, right? Returning something to standard out. The way I would run this is: I build it using the cargo build tool targeting wasm32-wasi, which is what you do with Rust, and then I can use a WebAssembly runtime like Wasmtime, which understands these modules, to run an actual executable. Does that make sense? Yeah — so here I'm running a main module.
If I were to use wasmtime serve, which is a new feature, the component — the WebAssembly module — that I run cannot be a main module. It has to be a module that implements the wasi-http interface, which means Wasmtime acts like a web server in that sense: it takes the HTTP request and hands it off to the WebAssembly component. That's the exact same model as we use in Spin. So with Spin 2.0, if you build a Spin component, you can hand that off to Wasmtime using wasmtime serve, and it will act the same way as when you do spin up. So that's basically implementing an HTTP type of backend. Okay, cool. And I got lost in my directory structure. "Mikkel, can I jump in for a sec? For those of you off the beaten path — that's you, okay — Docker has just told me exactly which upstream versions work for the build part of this. So if you want to raise your hand, I'll come give you that information and you can give it a shot." Okay. "I'm severely off the beaten path; I'm not using Docker." Any of the hands-up questions? That's fine — are they calling for Ralph's attention? No questions? Okay. How many of you got to the point of getting the container up and running — so basically to the point where I'm at right now? Yes — and beyond. Did anyone get all the way to k3d and have things up in Kubernetes? Yep. I'll try to move along, because I promised Ralph that he'd get half an hour and I only have eight more minutes. So let's see if we can get this done. If you're cruising, you can keep going a little bit. "I can keep going for an hour if you want, but I don't." You find your natural resting place. That's cool. Okay. So — I think I briefly showed this before, but we can pull it up again.
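The distinction between a main module and a wasi-http component shows up directly in how you invoke Wasmtime. A sketch, assuming a Rust toolchain with the wasm32-wasi target installed; file names are illustrative:

```shell
# Build a plain Rust "main" executable for the wasm32-wasi target.
cargo build --target wasm32-wasi --release

# Run the main module directly: wasmtime calls its main() entry
# point, and stdout goes to your terminal.
wasmtime target/wasm32-wasi/release/hello.wasm

# Serve a wasi-http component instead: here wasmtime acts as the
# web server and hands each incoming request to the component.
# A main module will NOT work with this subcommand, and vice versa.
wasmtime serve my-component.wasm
```

This is the same split described above: `spin up` and `wasmtime serve` both speak wasi-http to the component, while a main module is just a command-line program that happens to be compiled to Wasm.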
The way these things integrate into Kubernetes is at this very low level: we basically just add another runtime into containerd — that shim that can run Spin, SpiderLightning (Slight), Wasmtime, all these various WebAssembly models and runtimes. And because the integration with Kubernetes is at this very low, low, low level, the way you interact with Kubernetes to get these WebAssembly containers, or WebAssembly components, running is just like you're used to. So there are two things you need to do. First, obviously, you need to get the containerd shims in there — there are some pre-built ones that we'll be using in the tutorial. Then, against the API server, you need to register a RuntimeClass. Basically, there's a handler called spin here that is handed over to containerd. We'll call our RuntimeClass wasmtime-spin, which means the deployment definition for WebAssembly is the same as the deployment definition for a container, except that we call out in the specification that the runtime class to use is wasmtime-spin. Which also means that — well, with a very recent release of runwasi, we can actually create pod specifications that are a combination of WebAssembly and — I don't know what to call the other ones now. Real containers, Docker containers, old containers, last year's containers, I don't know. So we can do that. And I hope you can see this idea of how we can bring all these things together, bring WebAssembly into this world of Kubernetes, and keep going from there. So let's try this out. The first thing I'm going to do is create a k3d cluster so that I have a Kubernetes cluster locally to work against. I had the container images downloaded already, so this is not going to take too long.
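The RuntimeClass being described is a small standard Kubernetes object. A sketch — the class name follows the tutorial's naming, and the handler string must match whatever name the Spin shim was registered under in the node's containerd config:

```yaml
# RuntimeClass mapping a cluster-visible name to the containerd
# handler. Pods that set runtimeClassName: wasmtime-spin will be
# dispatched to the Spin shim instead of runc.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin
```

Nothing else in the pod lifecycle changes: the scheduler, kubelet, and API server treat these pods exactly like container pods, which is the whole point of integrating at the shim level.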
And basically I'm going to start creating the runtime class and the deployment file. So this is my runtime class — I can add that in. Again, we're naming it wasmtime-spin, and we're mapping it to the handler called spin. So now we can do a kubectl: we go and apply the Spin runtime file, and we should be able to see the runtime classes. We can see that we now have a wasmtime-spin runtime class registered in our cluster. Then — Kate and I had a discussion about this five minutes before this tutorial started, and I don't know if we concluded anything yet, Kate, or not, but let me just touch on the next wrinkle that might be in this journey. You can either have your Kubernetes cluster pull the image we built from a remote registry — if you do that, you obviously need to push the container image to a remote registry, make it publicly available, or configure your cluster with a token to access that registry. There's also an easier way to do this, which is the image import command, which enables us to take a local container image and import it into the cluster, so we don't need to pull it remotely. And the discussion was whether this will actually have some impact later on. I'm just going to do the image import anyway, and we'll see if we stumble or not. So, we remember the container image that I built — and if in doubt, we can check that I have that container image here. Docker Desktop is even nice enough to tell us that this is a Wasm-based image. So what I'm going to do now is just import this — and I think it will default to latest anyway. So now my cluster nodes all have this container image loaded already, and we don't need to go and pull it from a remote registry. Once we've solved that, we can go ahead and create our deployment specification. So we're just going to create that file; we can go into the Spin app.
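The cluster setup and image sideload look roughly like this. The cluster name, the YAML file name, and the image tag are all illustrative; in particular, the tutorial uses a k3d node image that ships with the Wasm shims preinstalled, and the exact image reference here is an assumption:

```shell
# Create a local cluster whose nodes already contain the
# containerd Wasm shims (node image name is illustrative).
k3d cluster create wasm-cluster \
  --image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:latest \
  -p "8081:80@loadbalancer"

# Register the RuntimeClass with the API server.
kubectl apply -f spin-runtime.yaml
kubectl get runtimeclass

# Sideload the locally built image onto every node, so the
# cluster never needs to pull it from a remote registry.
k3d image import ghcr.io/<your-github-user>/my-application \
  -c wasm-cluster
```

The `image import` route is what allows `imagePullPolicy: IfNotPresent` in the deployment to work without ever pushing the image anywhere.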
And we're doing all the Kubernetes stuff that we all know and love in here. Basically, we're creating a deployment specification. We're creating a single replica. We call out the runtime class, which is wasmtime-spin. We want to change the image — and I see Kate already added the image pull policy, which should make everything work. Thank you, Kate. I'm going to go and try to find my — so this is ghcr.io, then my GitHub name. Oops, we actually don't need the latest tag. Okay. IfNotPresent means that because we already copied that container image in, we're not going to pull it. We set up a service to expose port 80 from the container, and we configure the ingress so that anything that comes in on the root path is forwarded to that service, and so on and so forth. So let's go ahead — kubectl apply the app. Oops, let's call it spin app. We got the deployment created, the service created, the ingress created. And you haven't worked with Kubernetes before if you're not always checking in on your pods. Okay, the pod is running. So we should now be able to go to — I think the cluster is set up to listen on port 8081, and you can actually see over here that this host has 8081 — and that's a bad gateway. Nice. Still a bad gateway. Let's go and check. We have the ingress set up, we have the service set up, we have the pod running — there you go: "Hello, KubeCon". It needed some time. Okay, so we actually got all the way through to Kubernetes: building this Spin application, getting it into a container with Docker, getting it into a Kubernetes cluster locally. And now we can start working with all this existing stuff we have out there. One thing I want to call out — I put up the slide early on about the full-stack things you can do with Spin.
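The deployment, service, and ingress being applied are standard Kubernetes objects; the only Wasm-specific line is `runtimeClassName`. A sketch with illustrative names and a placeholder image:

```yaml
# Identical to a container deployment except for runtimeClassName.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-application
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-application }
  template:
    metadata:
      labels: { app: my-application }
    spec:
      runtimeClassName: wasmtime-spin
      containers:
        - name: my-application
          image: ghcr.io/<your-github-user>/my-application
          # The image was sideloaded with k3d image import,
          # so never try to pull it from a registry.
          imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: my-application
spec:
  selector: { app: my-application }
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-application
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-application
                port: { number: 80 }
```

Applied with `kubectl apply -f spin-app.yaml`, this is what makes the app reachable on the cluster's load-balancer port (8081 in the demo).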
I also mentioned the wasi-http specification as a standardized way of describing how a host can work with these types of functions. The same actually goes for what you can do from inside these functions if you need to reach out to external resources. So there's — let's call it an über-specification — called wasi-cloud, which we are partnering with Microsoft on defining, that defines a whole set of interfaces that we believe most developers will need to build cloud applications. Those are interfaces defining how you interact with a key-value store, how you interact with a SQL server, how you interact with a cache, and so on and so forth. All of these are implemented in Spin today. So within a Spin application, you can do something like opening a key-value store and using a simple API — key-value store set, key-value store get, key-value store list, exists, those types of things. That interface is implemented based on that specification, which means that all these other host runtimes would be able to add a key-value store implementation that works with that interface. So part of what you can do here is actually build a definition in Kubernetes where the key-value store is resolved by a Redis container that you run inside your Kubernetes cluster. Locally, when you do this with Spin, we just use a local SQLite file to implement it, and in the Fermyon Cloud we have other implementations. But I hope this gives at least a little bit of an idea of how the abstraction and the interface between what you develop as a developer and what the hosting platform provides does not have to be at the operating-system level — the one Ralph referred to earlier when he talked about containers really being operating systems. It can be a much higher abstraction.
And I think that's part of what gives us this ability to build great developer experiences and great operator experiences as we move along in this whole thing. Cool. I'll hand it over to you, Ralph, and you'll pick up on more of the Kubernetes stuff. — That sounds louder. Mikkel, everybody. And Melissa and everybody else — Kate, everybody from Fermyon made this happen. That's fantastic. How many people had success getting this running through Kubernetes? So one, two, three, four, five, six, seven, eight, a few more, okay. And I'm assuming a couple of people will kind of tumble along. What I'm gonna do is establish a certain concept here. We're gonna take the same kind of simple application. There's a couple of repos that I will show you here and that I will add to the deck. The first one — there's a whole bunch. Can you see my tabs? I'm so happy with the tabs. All right. So there's a couple of things I'm gonna do here. I wanna show you that the whole point of our integration of Spin and WebAssembly into Kubernetes happens at the containerd shim level. And for those who don't really grok the inner gears, the real short version is that Kubernetes actually doesn't know it's gonna run a container at all. It merely figures out which node, resource-wise, should handle the workload. And it says, hey, kubelet, please schedule this workload. How we doing? Is it mad? What's that? Do you wanna use the hub on the other side? Yeah, go ahead and use the other hub on the other side. Too many hubs. That's a lovely concept. Can somebody see something? All right, does it say too many hubs? You're still getting a little flake. Okay, I'll let you work with this for a second because I'm gonna take a couple of moments. So whether we get a flake or not is one thing. What we're gonna do is run through one article that has a repo that does Dapr integration with a full application.
The point I wanted to make with containerd is that Kubernetes — the kubelet — turns around and asks containerd: here's the workload, please go find it and run it. And I'm standing in front of the speaker, so I'm not gonna do that. And containerd has an inner implementation called a shim. So when you hear about things like runc — the things that actually make containers go — the inner shim is called containerd-shim-runc in the vanilla Kubernetes experience. And other companies — for example, Red Hat — have a different inner shim for OpenShift and things like this, but the inner shim, that containerd shim, is completely opaque to Kubernetes. So what that means is we can actually implement a shim that's not runc or crun. We can implement a shim called runwasi. And runwasi just knows how to run WebAssembly workloads, and we can even make that shim a crate in Rust. And then we can build higher-level abstractions that are able to run, scheduled in Kubernetes, as if they're native. And one of those is Spin, right? So we have a shim that runs Spin. So the application you just built will run in Kubernetes, and you saw how that works. So I'm gonna build the cluster from scratch in k3d. Hopefully the network will be happy with me and we can do it in just a couple of minutes. And I'm gonna take a basic running workload, and we're gonna actually add containers and WebAssembly in the same cluster, and we're gonna throw some service mesh on it. And the reason we can do that is because the containerd shim, runwasi, knows how to run both containers and modules in the same pod. And that's only cool because it makes the world transparent to users. So you can go ahead and schedule a container, you can go ahead and schedule a WebAssembly module, and you can drop Istio on it, and it's all the same to you. That's what we're doing here, okay? So for those of you who got it running in Kubernetes — we're still having the flake. We should have done the AV check before.
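The wiring Ralph is describing — a runtime class that points Kubernetes at a runwasi-based shim instead of runc — can be sketched as a `RuntimeClass` object. The names here are assumptions; the `handler` value has to match whatever handler name the shim is registered under in containerd's config on the node:

```yaml
# Hypothetical sketch: registers a runtime class named "wasmtime-spin".
# Pods that set runtimeClassName: wasmtime-spin get dispatched by
# containerd to the Spin shim rather than containerd-shim-runc.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin   # must match the shim's handler name in containerd config
```

Because the substitution happens below containerd's shim interface, the scheduler, kubelet, and everything above them are unchanged — which is exactly why Istio and Dapr later "don't care."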
We thought this would be easier. Okay, so what we're gonna do is go ahead and take that same thing. We're gonna run it in Kubernetes from the ground up. We're gonna install Redis, a bunch of other things, and a WebAssembly module based on Spin, and then we're gonna install Dapr, so you can see Dapr doesn't care — totally fine. We're gonna install Istio, so you can see Istio doesn't care — totally fine. These are all abstract processes from the point of view of the service mesh. I'm also gonna show you — I don't know if Mikkel got a chance to focus on it, but we'll show you exactly what's going on. The only thing you're loading is the image, and in the YAML, the only thing that's different is the runtime class. Okay, now I'm gonna do this even though it's flaky. If it drives you nuts, let me know, okay? Okay? On this side, let's do Dapr first. We will add this link to the deck. If you can search for it, just search for it — do the Bing of your choice — how to run WebAssembly, runwasi, Spin, Dapr. And you'll come up with a very talented person in Vietnam who made a really lovely, complex app run together. This is really the proof of the pudding. If I go to the diagram at the end of the series, this is it. Now, it may be flickery, but that's the topology of a real-world application, right? I'm gonna install part of this application to show you that it's totally doable. And this works with containers and WebAssembly — it doesn't matter. So I'm gonna go back to the beginning and we're gonna start from the ground up. This is the article. And you can see this is all the kind of very similar stuff, right? Here's your spin.toml in the repo and the whole thing, blah, blah, blah. And so what we're gonna do — so I'm gonna bump this up. It may be flickery, but hopefully you can see it. Is that the case? Okay. Not flickering anymore, that's crazy. So I'm gonna do my — go my — it's flickering now. Excellent. Excellent.
Let's get the cluster create call, and we're gonna say we're creating a Wasm cluster here. Now notice with this, right? We're calling an image in k3d that has the shims already in it. The Spin shim is already there. So we're using k3d to do this for us — that's 1965. So we're gonna go ahead and 196 — whoops, 1965. And in theory, this should happen relatively fast. I've tried it three or four times on this network, and so we're doing live work. If the network fails me, it's not my fault. That's all I'm gonna claim. All right. And so then we need to let it set up, right? Now what's going on is that k3d, of course, was delivered as a Docker image. So inside Docker, it has to go download the other images and get them running. That doesn't take that long, but we have to watch and let it set up. Okay, we've got some things already running, container creating, we're all doing pretty well. Now the Traefik installation will come in and we'll get Traefik. So we've got routing, and this is sort of the bootstrapping experience that you've already done. And we should be up and running here in a moment. Completed — one more container and one more ready status. There's the ready status, and the containers are completed, so we're running. Great. And now this right here is the line — when we kubectl apply, we are adding shims that know how to run these applications. You can see that spin is right here, right? The shim. If you like the experience with Spin, please grab one of the Fermyon folks and have them do a demo to upgrade you to Spin 2.0. The new experience — which we didn't get built into the shims — is fantastic. And it also involves the component model, which means the modules can be shared, not just inside of Spin. They can be given to other places and run. So we're gonna use these runtimes — we're gonna use wasmtime and spin. And to show you that that works, we're gonna go ahead and do this, right? And we're gonna show you that they all work. We're only really interested in spin.
Again, we'll kubectl get po, right? And you notice that they're all running right away. Now, do you remember how small the Spin module was — the container image that had the module in it? Super small. So these things start, like, immediately. And you can go ahead and curl it, right? Right here, we're gonna do the spin one. And, no surprise, that works no problem. And just for fun, let's do this. We'll use hey to hit it 200 times, and you get a distribution right there. Slowest, fastest, the total — that's pretty fast for the number of requests per second, reasonably fast. So if you don't know hey, it's a cool thing you can use for load testing. Now let's do this. So we've got Dapr, and we're gonna do dapr init. In this particular case, ignore the runtime version — it's just gonna give you a mismatch. And it's really that you wanna do dapr init -k for Kubernetes — that's what the -k means. And Dapr's just gonna install. Now, if you were running in a professional environment against a production cluster and so forth, you would take these same steps. So there is nothing different that we've done here. Let's make sure everything comes in. Get po — well, let's watch all the namespaces. And they're already up and running. It's good that I ran it before, so I have them cached. And for fun, let's install a Redis container. This is a container, so let's go do it. There we go, down a little bit. Are we still flickering? But nobody's getting nauseous yet. I think we'll be able to stumble through. In this particular case, it's already running in that sense, but it has to hit the ready status, which it will shortly, so I'm gonna Ctrl-C out. We'll apply the components. And we're getting close on time, whoop. Because I'm in the repo — dapr-labs, polyglot — this is the repo, so I'm fine here. Then I've created components in Dapr. And if you're familiar with Dapr, you know what that means. These are bindings to component definitions and so forth.
And then we're gonna go ahead and deploy this. This is the Spin application for that larger API. And we're gonna go back to the watch. And the container's creating for the product API. Now notice there are two containers there. Two containers. So product API is one of two running. One is the Spin Wasm module, right? But the other one is Dapr — it's the sidecar. So now it's two of two. So we are running a container and a module in the same pod. Nobody cares. The experience is just Kubernetes, right? And so at this point, you can get po, and you can see them both. And you can go ahead and tail the logs. So let's make sure it's running, right? Except I've got to get the pod name right. So I'm gonna skip that, right? And we're gonna go ahead and get the service. And then we'll go ahead and wrap up. But I can do the same thing with — there we go, there's a little lineup. So it's all working. Product Dapr, product API — that's all working the way it should. If I jump over here, I can go ahead and, with Dapr, do this to hit the Spin service. But even more importantly, I wanna be able to go ahead and do this. Copy — and you can see that I have the Dapr commands. So here, I should be able to go up here and do localhost — what do we have, 8080? Boom. Right? This is WebAssembly. Metadata, configuration, actors, logs — the whole thing. That's WebAssembly. That is not a container. Can you tell? That's provocative. The answer is dead silence, because no, you can't tell. Now, it is basically time. We're two minutes over. I don't wanna be rude to anybody or to the incoming presentation or anything. I wanna thank you very much. In the next two minutes, while everybody's walking away — what? Do we have a little bit more? Do you want me to do Istio? Okay, we'll do Istio. Same experience — I'm gonna walk it up from the ground, all right, so that you can see the whole experience. Let's do that. Tear that down, clear.
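The Dapr sidecar that makes the pod "two of two" is injected based on pod annotations, not anything Wasm-specific. A hedged sketch — the app id and port are placeholders, not the demo's actual values:

```yaml
# Fragment of a Deployment's pod template. The dapr.io/* annotations
# tell Dapr's sidecar injector to add the daprd container alongside
# the Spin workload; Dapr never needs to know it's sitting next to Wasm.
template:
  metadata:
    labels:
      app: product-api
    annotations:
      dapr.io/enabled: "true"
      dapr.io/app-id: "product-api"   # placeholder app id
      dapr.io/app-port: "80"          # port the app listens on (assumed)
```

This is the mechanism behind "Dapr doesn't care": injection keys off pod metadata, and the shim swap below containerd is invisible to it.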
And I'm going to show you that, basically — it's easier for these kinds of demos, and k3d is really great for this — to just torch the cluster and bring it back up. Just get rid of the whole thing. And we're done. Okay, so if I go backwards, right? Do, do, do, do, do. There it is. Same thing. I'm gonna go ahead and create it. Now, the good thing here is that all the images are still there. So it actually creates pretty rapidly, but I still have to let everything come up inside the container. So I will do that. We're doing — again, for everybody who remembers — we're gonna basically build the same app in Spin. We're gonna attach it to Istio, right? So watch kubectl get po. And we're still doing the container creating. Istio, Dapr, any service mesh you want. Modules — in this view, you can take advantage of the modules along with your current workloads. So we have customers who are using things they can't move. They're a big monolith and they wanna refactor, and they love the size and the speed and the agility of WebAssembly, right? But they don't wanna jump a tech stack. They don't even wanna create a new cluster and then have to wire the clusters across. Like, they don't even wanna do that. And they don't wanna touch nodes any differently. They don't wanna have different node pools for this stuff. And so this enables them to just schedule WebAssembly right next to their monolith — we'll call it a monolith, a microlith, right? So we're all running here without worrying about it unduly, okay? And so for this one, we're gonna go back to the Istio one. Now this is Istio's classic demo. That's because I forked it from our developer at Microsoft, Keith Mattix, on the Istio team. I asked him, hey, can you get this up and running? He's like, yeah, no problem, this is easy. So I've got the most recent version of this. Oh, what did I forget?
I forgot that each time you bring it up, you have to install the shims — boom. All right, so the shims are installed. And you'll remember this: we're gonna just install workloads so you can see them. They come up right away — kubectl get po. And these are the same shims. They're just running, okay? And I can curl them and they all work, the whole deal. Now I'm gonna go ahead and install Istio. This is just the most recent Istio. There's no special version of this. And again, I'll probably bump this up a little bit and then bump it down and bump it up and bump it down. And the gateway stuff is coming. So this beautiful waiting period is very common in Kubernetes demos — do forgive me. In this particular case, on Linux this would happen very, very rapidly. But on Mac, and on Windows with WSL, you're basically using nested virtualization. The networking has to wire up. And so there's a lot of stuff that has to happen before it really kicks in. There's the watch. Now notice these are pending. You can ignore those — that's actually an Istio artifact with k3d. It doesn't really affect us here; it's really only a demo situation that you can think about there. So we go back here. We've done that. Let's throw in Prometheus, because what would Istio be without Prometheus? Yes — it wouldn't be anything without Prometheus. We gotta have an endpoint to pump out data. Okay, that's created. And we'll install Kiali, because you'd like to see what Prometheus is pumping out. And you're actually gonna have to label the namespace. Dapr does that step — when we did dapr init, that labeled the namespace for Dapr. So this is really just kind of a manual step. Whoops. Come back. Gotta do that. And you remember the product API — this is actually the same exact example. Now in this one — boop, boop, boop. Okay, and we're gonna do Istio. Right, there's the product API. It's taken from that article.
And then we're gonna deploy the Istio sleep application so that we have some ongoing invocation, right? Okay — that's so we can run this and it just generates traffic. We'll do a kubectl get po -A. Whoops, that didn't work. kubectl get po -A. And everything's running. So we've got a reasonably big set of things here, right? And for this, we're gonna actually just go ahead and port-forward. One of the things we did is we actually did it this way — I'm gonna do it a different way. I'm gonna say code — do it here, code — so, open VS Code. If you haven't seen the VS Code Kubernetes extension, this is what it is. And so I'm gonna go find the service, right? Services, and then the product API and so forth. But that's the wrong namespace. So we go ahead and do istio-system. We're gonna use that namespace. And you can see we've got a Kiali. And we go ahead and right-click that to port-forward — get rid of the metrics because we're not gonna look at that — and just go ahead and do that. We've now port-forwarded, right? And we can do this: open in browser. There we are. Istio's running in a Kubernetes cluster. Istio does not care. You don't care. The only thing you know is that the module is, like, that big. And this was built with Spin. So now you can go over here. You can see there are tons of — you notice the bottom four are missing the Istio tags, because we installed them before we installed Istio. So Istio didn't know about them; they didn't get tagged. We could go back and remediate that, right? So if we go into the product API, let's go ahead and get the traffic — last one minute, every 10 seconds. Traffic, inbound metrics. That won't appear yet. And we've got the graph. Graph — and we've got that. And display — you can even display the security. This is an empty graph because we're doing the last one minute. Let's do the last 10 minutes. And it hasn't happened yet. I'm upset. Well, we know it's coming.
So it's a question of waiting until the graph picks up the data. And so I can stay here, and this will just keep running. Now if — yeah, there was a question there. I'll repeat it; just go ahead and yell. I understood. Here's the answer. Let's see. I'm gonna go to my favorite. Here's the answer. This demo may be small. I will stop it and start it. This is gonna run a Slight application instead of a Spin application, but I could actually compile this in Spin and it'd run great. Slight still exists, but it was really a research tool. So if I redid this demo — in fact, I will redo this demo pretty soon — it would be a Spin demo. So I would actually have you build this here and run it in AKS. This is an AKS cluster. And if I run it, originally there's nothing there. No resources, default namespace. This is in Azure Kubernetes Service. And if you look here, you'll notice that I have several node pools. One is Windows on AMD64. One is Linux on AMD64 — not surprising, right? And one is Linux on ARM. So the gentleman over there was interested in ARM — here's ARM for you. This is just one cluster. And so in this case, you're thinking — if you're thinking containers — I'm gonna stop it for a moment. If you're thinking containers, you're thinking, okay, you got a container. Okay, that's three node pools of different SKUs. So I've got to do a multi-arch build. This is just Rust. So all of a sudden, all of the overhead of containers is staring you in the face, even though the container in the end will be pretty small. And of course, it'll be pretty small if you build it FROM scratch and give it an entrypoint. But it won't be small if you build it from jessie-slim. Right? Like, what do most developers do? Do they build FROM scratch and give it an entrypoint? No, they don't. They Google for a Dockerfile that'll make their stuff run — bada bing, bada boom, I can move on. And I don't blame them, right? It's easier to make it run than it is to actually understand it.
So in this case, if I deploy — I'm gonna deploy the Azure voting app, which is, you know, the CNCF Kubernetes voting app. It's the same app. So there's two containers here. And it's compiled for Linux — ARM, excuse me, AMD64. So what happens? Those containers that get scheduled to Linux AMD64 just run, but the ones that arbitrarily get scheduled to other SKUs don't run. So here I have the same YAML for a WebAssembly application. Ignore the slight — think slight, spin, okay? What's different for the Kubernetes people? What's different about this file from a Kubernetes container deployment? Can anybody see it like that? Yes — we did the right thing, you can't see it like that. It is identical with one exception, and that's the runtime class reference. All we're saying with the runtime class is: find the slight shim, and use that. If we omit it, the default shim is containerd-shim-runc. Or in OpenShift it might be something else, right? Depending on the distro. Whatever the default shim is — if you don't put a runtime class, you're gonna get the default one. Here, we've said: look, no. But that's it. It's just an image. How are we doing, Mikkel? Are we about ready? Perfect timing. So here, I'm going to apply this. I'm gonna give you five instances of this WebAssembly app. Remember, your question was about Rust — it's small. So what happens? Where did that deploy? Every single one running. Every single one. And I'm not lying here, because I'm gonna convince you this is true, and I'm gonna do that by grabbing one of the node pools and destroying it. Now normally, I'd have to deal with node taints and tolerations. I'd have to make sure those arch builds are pointing to a different type of container and so forth. Did you see what happened? I don't know if I showed it fast enough, but I will show it this time. They just redeployed, and now I'm gonna delete the other Linux node. Now watch what happens when I delete the Linux node.
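That "one exception" can be sketched like this — everything is a standard pod spec except a single field. Names and the image path are placeholders, and `wasmtime-slight` stands in for whatever the cluster's runtime class is actually called:

```yaml
# Fragment of a Deployment. Remove the runtimeClassName line and this
# is an ordinary container deployment dispatched to the default shim
# (containerd-shim-runc in vanilla Kubernetes).
spec:
  template:
    spec:
      runtimeClassName: wasmtime-slight  # the only line that differs
      containers:
        - name: voting-app
          image: ghcr.io/example/voting-app-wasm:latest  # placeholder
```

Because the module is architecture-neutral, that one image can land on the AMD64 or ARM node pools interchangeably — no multi-arch build, no nodeSelector per SKU.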
That's the ARM node. Boom. Right there. WebAssembly just used the native Kubernetes rescheduling facility to redeploy across SKUs. So think about this in the reverse way. I was using Windows as an example, and the reason was I built this demo for .NET people, right? And they wanted to feel comfortable moving from Windows to Linux — could I do that, right? But think about your Rust example. If you were working for somebody who had Linux nodes, your Rust example runs — but do you have to compile multi-arch for Rust? No, you just compile once. It's the same module. In this case, your operations team can sit there and go: well, if we're running WebAssembly, I can actually save 20% off the top just by grabbing the ARM nodes. That same application can run on RISC-V nodes, on Raspberry Pis, on all that stuff, and you get the same feeling. That's why WebAssembly brings us incredible agility and small size, and a tremendous security advance over containers — if you're curious about that, you can talk to us afterwards. This is the feeling where Kubernetes almost becomes a PaaS: your operational team can talk about things like nodes and SKUs and stuff like that, and you can build WebAssembly and it's much, much smaller, faster, safer, and you don't have to worry about the SKUs. That's the answer to your question. Anything else? We good? I don't think so. I think we're good. We used all the time. Thanks, everyone who stayed with us for the full hour and a half. We do have a survey in the workshop thingy. We'd like some responses on the survey so we can make things better. Also, both Fermyon and Microsoft have booths in the showcase area. Feel free to come by. If you can show a WebAssembly module running in a Docker container, we definitely have some special swag for you out there. Cool. I think so. We actually — I think we still have the keycaps. If they don't, hold them to it. Make them go make swag. The coolest swag we have is a custom keycap with the little Slats the cat logo.
So if you want one of those on your keyboard at home, come and show us a WebAssembly module running in a container and you'll get one. Go on out. The webpage for the Dapr demo is here — let me turn this on. I'll let you go. Thanks, everyone, for joining. Everybody, go enjoy. Thanks for toughing it out. You made it all the way through the 90 minutes, which means it must have been interesting. And I'm glad if that's the case.