All right. Thank you everyone for joining us for today's CNCF live webinar, AKS and Spin Integration. I'm Lydia Schultz and I'll be moderating today's live webinar. I'm going to read our code of conduct and then hand over to Ralph Squillace, Principal Product Manager, Azure Core Upstream at Microsoft. A few housekeeping items before we get started. During the webinar, you're not able to speak, but there is a chat box. Please go ahead and pop your questions in there as we go; Ralph is ready to get to your questions. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all your fellow participants and presenters. Please also note the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They're also available via the registration link you used today, and the recording will be on our online programs YouTube playlist. With that, I will hand things over to Ralph to kick it off. All right, can everybody hear me? Okay, somebody give me a thumbs up or the equivalent of a thumbs up in the chat just to make sure we're doing okay. All right, we've got one. That means we're doing okay. Thank you very much, Lydia, and I'm really, really pleased to be here. The high order bit is talking about WebAssembly and seeing how easy it is to get started and make things work. But we also want to talk about the areas where WebAssembly is difficult so that you're aware of them, don't get frustrated when you bump into them, and so forth. And we can also talk about the future of these areas, like when will they improve, when will this be super ready to use, where you won't even think about anything, and so forth. We'll just cover the whole gamut, and we'll also talk generally about WebAssembly itself and the component model. 
These questions, like why we do these things and so forth, are very, very important as engineers and as people in the business of software, depending where you are in the entire ecosystem of the world. Sometimes the business questions are more important than the engineering questions, and sometimes the engineering questions are more important than the business questions, and those are very different things. So we can talk about those kinds of things as well. And I am going to walk you through a good portion of the Spin Fermyon workshop that we did collaboratively with the Fermyon team at KubeCon in Chicago, just to give you an idea of how easy this is. So I did drop a link into the chat, but I'll reiterate it and share the links when we get to the demo part of the session. And so you can actually install this and follow along. It's really easy to do. It takes about maybe five, ten minutes to set up your machine, depending on whether you have Docker Desktop installed, depending on whether you need to install Spin 2.0.1, and also things like, let's see, what else, the language that you intend to use. And so we'll go through those various areas. We'll build in a couple of languages, and we'll also show you the difference in performance, which is really, really critical, between native Spin modules with different languages, because different languages have different performance characteristics. We'll show you that. And also the comparison between WebAssembly standalone and things like containerized WebAssembly, like in Kubernetes. Very, very different experience. And there are little bits of overhead there that are important to be aware of. All of these are just normal engineering differences. They're not a result of anything going wrong. They're a result of very different choices at the engineering level, at the technical level. So let's get started. To do that, I've got a little bit of an introduction on WebAssembly. 
I want to make sure that we're all in the same boat, more or less. And let's see, I got that started. So let's go back to the webinar page and I'm going to share my screen. And then first you're going to get Inception. And then we'll jump over to the actual screen. So we're going to do the entire screen here. In theory, that should be okay. I'm going to hide my screen. We now have Inception. So I'm going to go over here. And so in theory, everybody can see AKS and Spin 101 in Zoom. So we're going to go from the very beginnings all the way up to burning a whole bunch of cores really fast. My name is Ralph Squillace. As Lydia said, I'm on the Azure Core Upstream team. I'm also the Microsoft board member for the Bytecode Alliance Foundation. That foundation provides the legal and engineering resources to ensure that more than, I believe now, more than 2,000 developers are in the ecosystem that we know of messaging, communicating, and committing. It's quite a substantial group of people, all of whom are committing to one of the repositories, things like runtimes such as the WebAssembly Micro Runtime, or WAMR, and Wasmtime, in addition to all the tooling necessary for the WebAssembly component model. All of that work, if it's a specification, ends up in the W3C, or is a tool that is generally available for anybody to use. We'd love to have you do it. So here's the abstract. This was on the front page of the site for the webinar. But first, we're going to talk about WebAssembly itself. And then we're going to talk about how you bring these things into Kubernetes. And then we're going to actually do it. And we're going to use Spin because it's really one of the more fantastic serverless platforms to run WebAssembly. And we're going to show you the differences. So first, let's talk about why we do this WebAssembly thing. The problem space that we're in doesn't initially seem like a problem space unless you've been really working on containers for a while. 
And it also doesn't seem like a problem space if you've been doing native code. Especially in particular, there's a lot of artisanal native code still on the edge, not in hyperscale cloud native, not in Kubernetes, for example. And both crowds might think they don't really have a problem. There are so many things that they have yet to improve on. And that's probably true. But for a lot of people, we've been working on containers for 10-plus years now and Kubernetes for quite a while. And what you realize is that you want to do things with containers that you just can't do. And a lot of them revolve around the fact that containers are essentially too VM-ish, right? And we'll get to what that means. So before we get to that, though, we want to talk about the core feature set of WebAssembly. And the first one is that it is actually essentially a cloud native binary. We thought of containers as sort of a cloud native application for a while, but a container is really a cloud native operating system with an application on it. WebAssembly is extremely small. It's essentially a binary format for an abstract virtual machine. And as a result, you compile it to a binary. Like, you compile to a WebAssembly module. It's very, very different than the Docker container experience that we're used to with containers, for example. And it also makes things very small. And I'll give you an example. Thinking about containers, some containers are very, very large. That's not really the important thing. Those do get smaller. But let's take a very small container, like a Go language container that we have. We were compiling one of our Go language operators, which is only about 12 megabytes. And we compiled that down to WebAssembly and we got that down to 176 KB. And that's a substantial reduction. 
If you have a big language with a big runtime and things like that, the reduction might not be so much. But at the same time, something like .NET, for example, which is a substantial runtime, can be contained at about 50 or 60 megabytes for a very optimized container. But compiling down to WebAssembly, we immediately dropped within the realm of 10 megabytes. And we believe we can get it down below 5. And so there really are scales of difference in size here. The second bit is that we have a by-default-deny security stance. And we had to have this because WebAssembly was born in the browser. It was born in order to enable complex languages, advanced languages, C and Rust and other languages like that, to compile and be able to be hosted in a JavaScript engine. And it turns out it works really, really well. But to do that, the browser has to have complete control over what the module is able to do. And that means the browser, the host of the WebAssembly, is able to opt in to behaviors that the module may wish to perform. Unlike containers, where you have direct access to the kernel: the container can be prevented from doing things in the kernel, but otherwise the container believes it has, and by default does have, access to the kernel, which is an extremely large surface to attack. And so that becomes a very, very difficult thing to secure, even though the industry has made tons of changes. The next one has to do with portability. WebAssembly is language agnostic; it's an abstract virtual machine specification, right? Any language could, in theory, compile to it. And many, if not most, do in some way, shape or form, for better or for worse, depending on the languages. And I'm expecting some questions about that kind of thing. I'll be happy to discuss it. But it's also OS agnostic. Think about running inside a browser. You'd have to be able to run on Windows and on Mac and on Linux and things like this. And so the operating system can't be relevant at all. So it isn't. 
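To make that deny-by-default stance concrete, here's a hedged sketch of how a standalone host like the Wasmtime CLI grants capabilities (the module name is made up; check `wasmtime run --help` on your version, since flags evolve):

```shell
# By default the module gets no filesystem, environment, or network access.
wasmtime run app.wasm

# The host explicitly opts the module in, e.g. preopening one directory
# and exposing a single environment variable -- nothing else is visible.
wasmtime run --dir=./data --env LOG_LEVEL=info app.wasm
```

The point is that the grant lives on the host's command line, not inside the module, which is the inverse of the container model described above.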
And that allows one module to move from one operating system to the other. And finally, it's CPU architecture agnostic. Think about, again, browsers. They have to be able to run on ARM. They have to be able to run on AMD. But more importantly, they often run on very strange chips indeed. And so as a result, so does WebAssembly. And that language, OS, and CPU agnosticism, if you will, can bring it to almost any location, because it's very small. And so it's location agnostic. You can run it almost anywhere. So these are the key feature sets. But what do we mean really? What is WebAssembly, right? And why? I've stolen this keynote slide from Luke Wagner at Fastly, a distinguished engineer there, and a key designer and contributor to the component model of WebAssembly. And the link, you'll get this deck. And the link is the YouTube keynote from WasmCon, which was in September. And Luke laid it out like this. WebAssembly is a binary instruction format for a stack-based virtual machine. And that's an abstract, stack-based virtual machine. In other words, there is no physical machine that's designed this way. You need a runtime to present this machine to your WebAssembly and execute. And Wasm, then, is designed as a portable compilation target. So you shouldn't think of it so much as a virtual machine in the sense of a hypervisor, right? It is a generalized virtual machine. But the specification for executing one is not a hypervisor. It's, in fact, a WebAssembly runtime. And from the language point of view, you should be able to compile your native code directly to WebAssembly to get the advantages of WebAssembly: portability; OS, language, and runtime agnosticism; and so forth. So what does that look like? Think about the regular languages that we use as developers, right: Go, Rust, JavaScript, C, C#, PHP, Java, all of these things. Normally, they would be compiled out to a specific operating system or a specific architecture like you see here. 
And some languages that do interpretation, like Python or JavaScript, will actually have the compilation take place inside the runtime. It'll be parsed and interpreted or ahead-of-time compiled for the specific operating system and architecture, right? So the glory of WebAssembly is that because almost all languages can target it, you could compile directly to WebAssembly. And if you did that, a browser, or a Wasm engine, or a Wasm engine on a phone, or a browser on a phone or a device, can in fact automatically handle the translation between architectures and operating systems for you, which means your language can be compiled once and can run in all of those places. Now, that's a very interesting experience, right? And Luke had said the wins are portability, determinism, and control flow integrity. And those two, determinism and control flow integrity, have to do with the fact that the specification for WebAssembly does not allow arbitrary jumps or arbitrary access to memory. And so things like go-tos into somebody else's code and such don't really exist. That's actually a good thing, because it means you can programmatically examine a module, or even test a runtime for conformance, to ensure that the sandbox is not broken out of, or that you're not leaking memory anywhere or violating somebody else's memory. And that's really cool. And that, along with the specification for a sandbox, which was required for browser hosting, as you can imagine, gives us the ability to start and protect any module from the memory of any other module or the host. And do it at lightning speed. This is really fantastic. So my modification, this is my modified version: the wins are portability, security, size, and speed. By default, nobody can do anything with your module. And to get an idea of what this really means, think about last year's Log4j issue, with malicious code that basically assumed it had operating-system-wide access. 
But if you didn't notice that it was compiled in as a dependency of your application, it would go ahead and use the operating system to exfiltrate data and credentials. That can't happen in WebAssembly by default. You can only opt into that ability. And that's something that you can control. But in a container environment, it's very, very hard to do that a priori for all the containers you wish to run and all the dependencies therein. Okay? So when we get concrete, what you want is a list like this. This comes from a presentation I saw that was actually pretty good. And you can see that for several aspects, WebAssembly might be several megabytes. In reality, we target well under a megabyte if we possibly can. And it depends what you're trying to run. If you're running something like a Postgres server in WebAssembly, it might be more than several megabytes. But if you're running a serverless function, it might be as little as several hundred KB. And that's fantastic, because it's really what you want to target. Containers, on the other hand, are extremely large, and people laugh at the hundreds of megabytes because they routinely see containers that are well more than a gigabyte. And those are still fully functional as a service, because the developer doesn't have to pay for the big network and the big machines. And if they don't have to pay, they don't have to optimize. And so there's really kind of what you might call a disciplinarian operational mismatch there. Startup time for WebAssembly used to be referred to in milliseconds. That's not actually true. For a no-op WebAssembly, you can start from cold and enter a WebAssembly function in low nanoseconds. And you will see that a milliseconds startup time is actually not considered very good. We're actually going to show you that. And I'm going to describe the problems with my machine that result in mere microseconds instead of nanoseconds. It's possible that you can get into the nanoseconds yourself with Spin. 
Containers start in seconds, but the bigger they are, the more those seconds stretch out, into even minutes. But they're still much better than VMs. In performance speed, WebAssembly is usually said to be within about 10% of native speed, but it really varies by what you're trying to do. If, in fact, you want to run at native speed, you can for certain kinds of workloads, especially if you ahead-of-time compile. But you're not really going to get close to native speed if you're doing all kinds of heavy work that WebAssembly is not really leaning into yet. WebAssembly also runs in the browser, cross-platform portability is extremely high, and the standards are in the W3C and OCI, and also the CNCF. So we're going to show you the runwasi project in containerd. And for system interactions, we use WASI, the WebAssembly System Interface, or standard interfaces, to model an underlying OS and virtualize it. And we'll show you a little bit of what those mean later on. This isn't exactly, as you can see, the current way of interpreting it, but it gives you some idea of the differences that we're looking at here. Now, what and why, this WebAssembly thing? Let's get back to the whole VM-ish problem. So what do we really mean by that? What we really mean is that essentially VMs were a tremendous advance on bare-metal code, code that was native, running on a bare-metal installation of an operating system. And that code was built and created, especially if you're talking about Linux and open-source code, often from core code bases created 20, 30, 40 years ago. And what we're doing is bringing forward all of the assumptions that that code made. And much of the security and supply chain issues that we're dealing with now in the container ecosystem really are a result of the fact that all of that code assumed it had full OS permission. Everybody loves root. 
If I could get my daughter to behave properly when I called out sudo, I would love that, but that's not the way humans work. And it really isn't actually the way we want code to work in the future. It is true that containers essentially sold themselves on containing an application, but that application was actually bundled up with an operating system in it, as layers and so forth. And the only thing that was really shared was the kernel. And that meant that the assumptions of the code you brought with it were essentially the assumptions of full operating system permission. With containers, if you don't have to boot the kernel, you come up pretty fast. And that's really nice when you're comparing yourself with VMs and so forth, and with native code if you had to boot the OS in the beginning. But in reality, these features really impact Kubernetes. And it turns out that Kubernetes is limited in its usage scenarios, not because Kubernetes has any inherent problem, but because the features that you need are not container features. So to make Kubernetes useful in more scenarios and at less cost, what you really want are smaller, excuse me, more responsive clusters. And that will drive cost down. And by smaller clusters, you'd like to be able to have fewer nodes. You'd like to have those nodes running on smaller SKUs, for example, or maybe even an ARM SKU. So that, in fact, you can pay 20% or 30% less, depending on where you're doing your cloud hosting, for example. You'd also want to run clusters in small heterogeneous spaces. Maybe you're a restaurant and you want to run one cluster with one node for each point-of-sale cash register that you have. Maybe you just want to serve HTML5 and push buttons. And a regular Kubernetes distribution doesn't really lend itself to that environment. There's very little IT support. It's hard to keep things running. And so when you do, you want something that's a lot more flexible. You also have older machines. 
You have different architectures. Not everything's the same, and your networks are weak. So you can't use a big, huge container. So that's really important. WebAssembly helps address these situations, and it brings Kubernetes to new places where otherwise you wouldn't expect it to appear, if at all. And finally, you can ignore nodes and pods and let the ops team choose SKUs. Now, the interesting thing about this is that it's hard to know what that means until you see it. And I'm going to show you in just a second. All of this stuff I'm talking about right now is purely open source. It's the best of the CNCF. containerd is there as the vanilla shim with runc inside. And there is a runwasi shim project there, which you can use. And we're going to show you how that works in a little bit. And most of the WebAssembly stuff that we're going to use is hosted by or related to the BCA or the CNCF. For example, WasmEdge is in the CNCF, and Wasmtime, which we're going to use here, and WAMR and other tools to run and use components are in the BCA. And I also want to highlight KWasm, which is a great open source project that allows you to install the Kubernetes shims, the runwasi shims, almost anywhere, which means you can run what we're going to run tonight in AKS in Azure, Azure Kubernetes Service, or you can use Google Kubernetes Engine, or you can use EKS on Amazon, or anywhere you wish, whether it's on prem or whether it's in a cloud hoster that you prefer. So we'll show you how that works. So we're going to do this with k3d and AKS, if we have time. We may or may not have time, but we'll figure out whether we do. But before we do this, I want to step back a moment, come to the chat, and see if anybody's got any questions. I see Jeff, you seem to be helping people out, which is great. We are going to, in fact, use 2.0.1. What I want to do is show you a little demo. This is Fermyon Spin right there. 
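Since KWasm just came up: the usual way to try it, per the KWasm project's docs, is a Helm install plus a node annotation that tells the operator to provision the runwasi shims. Treat the exact chart URL and annotation as version-dependent and double-check them against the KWasm README:

```shell
# Add the KWasm operator chart and install it into its own namespace.
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm install -n kwasm --create-namespace kwasm-operator kwasm/kwasm-operator

# Annotate the nodes you want provisioned with the Wasm shims.
kubectl annotate node --all kwasm.sh/kwasm-node=true
```

After that, workloads select a Wasm runtime via a RuntimeClass rather than node selectors.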
Let's get back here. Here's the demo I want to show you. And this is a demo that gives you an idea of what kind of features WebAssembly gives Kubernetes. This is an AKS cluster. And it's a special kind of cluster. So I'm going to play this. Let's see if this works. It's a little fuzzy at first, probably because of the upload and download. There it goes. And you can see this cluster doesn't have anything, but it does have a Windows AMD node pool. It also has a Linux AMD node pool. And it also has a Linux ARM node pool. So this is a very different kind of cluster. And normally, when you deploy containers to a cluster like this, you'd have to use taints and tolerations and node selectors with labels and the whole thing to make sure that your containers got deployed to the nodes that supported them. And I want to demonstrate that here. This is a simple two-container app. And you can see that we get a lot of mis-scheduled containers. The image won't pull because the containerd shim will not load anything. If we go ahead and deploy the same kind of application built with WebAssembly, you'll see a different effect. I'm going to show five different applications. And they're all running, and they distribute. Now, look at the node pools. You can see that, in fact, WebAssembly, the same module, will deploy on Windows, AMD, or ARM. And we're going to prove that to you. We're going to go ahead and destroy one of the node pools, just delete it right out from under the cluster. And you should expect that we would see Kubernetes natively understanding that, in fact, we need to terminate things and redeploy them. And you can see that happening here. This one happens fast. We only killed one node pool. But let's kill the other one. And in this case, the only node pool running will be the Windows node pool at the end. Now, let's watch what happens. And just for a moment, there goes terminating and creating. 
And you can see at this point, right, the only container running happened to be scheduled to the master node pool. And that's okay. But all the others are not working at all. And yet all the other WebAssemblies have now been rescheduled to Windows. Now, there's nothing Windows-specific about this. And so what you can imagine here, what you're really looking at, is the way that WebAssembly abstracts the details of the node away. There are no node pool selectors here. There are no taints and tolerations. Now your operations team, if it wants, and you're running WebAssembly, can actually deploy to nodes that are ARM and immediately see a 20 or 30% savings. So that gets really fun. So I'm gonna go ahead and stop sharing this. And I'm gonna go ahead and move here. And what we're gonna do, let's move here and then move here. I'm gonna drop through to the workshop. You can see that this is the getting started. I wanna show you how this works from the setup. And I don't know if you can see okay. I hope you can. You can look at the setup, and you can see what we're gonna do is install this. This installation right here, I believe, will give you Spin 1.5.1. I actually go ahead and just do Spin 2.0.1, which is the latest. So that's what I'm gonna demonstrate for you. And so you should be able to follow along without too much trouble. I'm also gonna use the TypeScript one and the Go one. The TypeScript one, if you'll notice when you click here, requires npm, so Node and npm get involved. And the TinyGo one will require TinyGo, and Go, right? The Go version you want is the latest, and you'll need TinyGo 0.30.0, okay? Oh, you'd like to follow along? This is the way to do it, but you don't have to. I'm gonna actually walk you through everything. So if we're gonna do this, those are the installation steps. You can use the dev container. 
This does provide you with 1.5.1 and compatible dependencies. But as I say, we're gonna do this live on 2.0.1 to see whether I can get in trouble, right? So, first to get started with Spin and WebAssembly. Now what I'm gonna do is, I've cloned this over here as you can see, right? And if I do this, we'll go to the code, and I can go ahead and open the preview of the website here. In fact, it opened right away. I'm gonna get this out of the way, and you can see we're here at the setup again. So instead, I'm gonna open up the preview to this. This is where we're starting. And this is a quick reference to get up and going. Now I'll walk through this, but we'll also make sure we're checking to see if we can catch any errors or documentation errors, because I'd love to submit a PR if anything is mistaken. Now the easy way to do this in VS Code is just to open this up here, so I'll do so. I hope that you can see that easily enough. So I'm gonna move this over here, and the critical thing is we're gonna talk about spin new. Let's do spin version just to make sure you can see that I'm using Spin version 2.0.1, and it says, hey, spin new. Let's do that. And in fact, I said I was gonna do JavaScript, so I'm gonna do JavaScript: the HTTP request handler. All of these are request handlers right now, and that gives you the idea that, in fact, this is a fantastic serverless platform that Fermyon has here. And so we're gonna say, hey, new JS function. Description, you can just enter through those. And when you get done, you can actually see that we've got a new JS function. We'll cd to it, okay? I'll get a little bit more room. And if we do tree, tree is apparently having a great time not running. So I'm gonna clear that and just say ls. Whoops, I didn't clear that. Wow. Clear, there we go. So, why tree's not running, I'm not really sure, but I'm not gonna hang up on that problem for the moment. 
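For anyone following along, the loop just demonstrated condenses to a few commands (the app name is the one typed in the demo; template names may differ slightly between Spin 1.x and 2.x):

```shell
spin new                   # pick the JS/TS HTTP request handler template
cd new-js-function
npm install                # JS/TS templates need their node dependencies first
spin build                 # runs the build command declared in spin.toml
spin up                    # serves the app, by default on localhost:3000
curl -v http://localhost:3000
hey -n 5000 -c 5 http://localhost:3000   # the load test used later in the demo
```

That's the whole developer inner loop; everything else in this section is variations on it.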
You can see the spin.toml file is described here, right? And we're gonna actually end up building an application, and that application is going to be WebAssembly. We're telling Spin that the component will be called this and it will be located here. You will exclude what is not built into the module, the node stuff; that will all get compiled down into WebAssembly; there's the route. And when you do a build command, you can specify custom build commands in here. So one of the things that we do with the versioning, I'm gonna skip over the language stuff, but we are gonna in fact use the TypeScript application. So you notice it's npm run build, so we're gonna have to actually do npm install first. We'll let that come in, and in fact, there we go. Coming in relatively slowly; I'm not necessarily surprised, my network connection is being kind of funny. We've got no vulnerabilities, which is a miracle, and we can now do spin build. And you'll see that it's going ahead and building the module, optimizing by using the Wizer tool, and then smashing the size down using wasm-opt. You don't have to do this; Spin does this for you to make it as easy as possible for you. So if you now do a spin up, in fact you get a new service, and if we open a browser, sure enough: hello from JS SDK. Now let me give you an idea of what we're doing here. So notice that we've got this in VS Code, we've got this running here, okay? Localhost 3000. So if we go ahead and curl http://localhost:3000, we get it: hello from JS. That's fantastic. But I wanna use a tool called hey. And if you don't know about hey, you should; it's a great load tool, very basic and very easy. And it gives us a sense of performance density, right? This is super useful. So we're gonna say I want 5,000 requests and I want them on five connections for the same endpoint, localhost 3000. Now the first run involves downloading. Look at this. 
So just out of the gate, that's a tenth of a second, that's a hundredth of a second, that's a millisecond. And we did not even reach a millisecond. This is five microseconds for the fastest one. And you may think that that's not very good, or maybe you think that is very good, but notice that the average is eight microseconds. Now, I'm running on Windows inside WSL and I'm streaming a network connection from it. If you remove all that overhead and the extra processes we happen to be running, you can imagine that this is gonna drop down. And is that repeatable? Actually, it is not only repeatable, but it actually gets better in many respects, right? So that's fascinating. Look at how good that is. That's really nuts. So great stuff. Now, if we go ahead and take that back, let's do something else. Let's do something like Go, because it gives us a clear idea of the differences that languages have. Now if we do spin new and we say, hey, we're gonna use Go with TinyGo... the Go language component support is coming along. Sometime in the next half year, I would hope that they support Preview 2, which is what we're using. So this is my Go, and no description and so forth. And remember, for this one I have to have TinyGo installed, because we need a compiler for our Go. It turns out that TinyGo actually compiles Go code, so you need Go installed as well. But once you do that, you can just do spin build. Whoops, I gotta first go into my Go directory. My apologies. And I can do spin build. You see that I'm actually building with TinyGo: build target wasi, go ahead and use the leaking GC, there's a reason for that, eliminate debug symbols, and go ahead and build main.wasm. And there we are. So now I can do spin up. And sure enough, I open that up. We got hello, Fermyon, in Go. Okay, great. But how does this stack up? So here we are. We're gonna run the same program in Go. Let's let it run a couple of times so that we've got everything cached. Do it one more time. 
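Before reading the numbers, an aside on what spin build just did for the Go component: under the hood it invokes TinyGo roughly like this (flags inferred from the build output just described; exact spelling may vary by template version):

```shell
# -target=wasi   compiles for the WASI platform rather than the browser
# -gc=leaking    skips garbage collection entirely, fine for short-lived handlers
# -no-debug      strips debug info to shrink the module
tinygo build -target=wasi -gc=leaking -no-debug -o main.wasm main.go
```

That leaking GC is the "reason for that" mentioned above: a serverless call is so short-lived that never freeing memory is cheaper than collecting it.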
I'll look at our histogram. The fastest was in fact down to 0.002. Now that's actually unexpected. Usually the Go takes a little bit longer. So this is super thrilling to see. And remember, we've got nested virtual machines going on here and so forth. That's really interesting. So I'm glad to see it. We go back, we go back here, and we go ahead and tear that down. And we're good to go. So far, so good. I'm gonna jump back in here to our webinar and see what we got. There you go. Yeah, Jeff notes that the download binaries are all really unique. You need the signature file if you're gonna validate the download, and make sure you do that. The documentation for Spin has the cosign command to validate that. When you do the tree, as Jeff's seeing, you should actually get tree in your console. I'm not really sure why tree is hanging up in WSL right now, but we'll figure that out later. So if there are any other questions, remember to dump them in here. You're not seeing the response from the serverless function? No, I'm gonna look, yep. Okay, so let's do a couple of diagnostic things. We'll work with you here, Jeff. No obvious content. So here's what the function is. If I do this, so let's see. I'm gonna clear; if I list, we got the new JS function. So I'm gonna cd to the new JS function and I'm gonna do spin up. So if I hit this, okay: are you hitting it with a browser or are you hitting it with curl? Right, so here's what it looks like with curl: curl http://localhost:3000. And what you get looks like no return. If you don't notice it, it seems like nothing is there, because there's no extra newline in the return. And so, if you wanna do curl, you can do curl -v, and then you get the whole return, and that probably will help you out. Yeah, you can add newlines too, right? So in fact, I'll do that while you're doing this, right? I'm gonna go here and I'm gonna get rid of that and I'm gonna go back to my code. 
Let's see, here was my new-js-function source, and I'm gonna go like here: hello from JS, but this is Ralph. We save it, cd into new-js-function, and we go ahead and spin build, right? And spin up. And it says the port's already in service, so that port I've gotta kill. Yep, that's the spin trigger, auto-forwarded. So stop forwarding the port. Okay, there we go. Let's do that again. Terminal and up — I wonder, do I have it running over here? Yes, I do, bad man, right? So we'll spin up over here. Now we have it running. And if we do this again, we're gonna get that, but you can see that this is Ralph, right? So that should be the experience up here. I'm gonna come back here. Jeff, are you having that experience? Okay, we're at 7:43, we're doing fine. I've got other things to show, but I'd like to get you running here. Ah, okay. So the truth is, if spin build is failing, you need to do npm install first. Did you do that? Because we're in JavaScript, remember. If you're in TinyGo, you don't need to do that, because Go doesn't need you to do that. Ah, go ahead and try it. I'm gonna move on to the container. Right, but build won't make a difference — it's gotta be npm install first. Oh — Jeff, let's just say me talking to you about it made it magically happen. And as a developer, this has happened to us all the time, so, like, fine. Yep, npm will eventually get called. But look at spin build, right? If we go over here and look at our spin.toml, it should be npm run build. But whether npm run build actually calls install first really depends on your configuration. So npm can get called there, right? We're using the TypeScript one, and what they really want you to do is make sure you install the application dependencies first and so forth. Now, what we really wanna do next is run this in a container. And that's kind of interesting.
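Given the dependency discussion above, one low-tech fix is to chain the install into the component's build command in spin.toml. A sketch, assuming the component is named new-js-function as in the walkthrough:

```toml
# Sketch: run npm install ahead of the template's build step so a fresh
# checkout can `spin build` without a manual install first.
# The component name is assumed from the walkthrough.
[component.new-js-function.build]
command = "npm install && npm run build"
```

This trades a slightly slower build for never hitting the missing-node_modules failure mode Jeff ran into.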
Now, the thing I'm gonna show you is that I'm working in WSL. And whether you're on a Mac or WSL or on Linux, the easiest way to get this all running for yourself requires a special configuration of tooling, and the easiest place to get that is Docker Desktop. And that's just because Docker has done a really great job here. I'm gonna clear this up. docker version gets you exactly what you need. You can see that this is Docker Desktop, and you'll note that the actual versions are really good. You're gonna need a containerd version of 1.6.25 or above. You also need Docker wired up to BuildKit. So we're gonna do docker buildx, and if you check the version of docker buildx, you're gonna need at least, I think it's 11.6 or above, might be 11.8. But Docker Desktop gives you the newest version, and it's very, very nice that way, okay? So, moving to the container space — in other words, to actually be able to deploy this in Kubernetes, it gets packaged in a container. We're gonna go and do this. So let's go ahead and do this. We're gonna say: running in a container. So if we go ahead and run in a container, we're gonna actually create a Dockerfile. And so to do that — copy — we're gonna go ahead and tear down our thing, we're gonna use our JavaScript function, and we're gonna create a Dockerfile, okay? And we're gonna create the new Dockerfile. And Jeff, I can't wait — if you got this running, you should be able to follow along if you have Docker Desktop installed. We're gonna go ahead and drop this into the Dockerfile. Now note, we actually have to modify this to make sure that the targets are correct. So the target is actually new-js-function, so we gotta do new_js_function.wasm. And in this case, we gotta write it again, because why would we not have to write it again? new_js_function.wasm, okay. So we've done that and then we can use this. Now, this is docker buildx build.
When you do a docker buildx build, you're using BuildKit underneath the hood, but Docker Desktop takes care of that for you, so you don't really need to know about the BuildKit part. However, notice a couple of things in here. One, we need to tag it, and it needs to be tagged with the name of our thing. So I'm actually gonna call it the same thing just to make sure it's clear. We're in new-js-function. And my GitHub ID is squillace, my last name, which is nice. It's always good to have a great strange last name, because then you get to use your last name as an alias for almost everything. It's really nice. I feel bad for the Joneses and Smiths of the world. So there's one other little bit you need to do here for sure, and that's this one, whoops: this --provenance=false step. There's a reason for that and I'll explain it, but basically this tells BuildKit to not attach provenance information to the image manifest. And the reason for that is because Kubernetes does not yet know what that provenance entry is; it gets tagged as an unknown platform, and Kubernetes will refuse to pull it, even though the module is in the container and that's fine. You're gonna build this with a platform of wasi/wasm, and you're gonna go ahead and use the build context that is right here. So we're gonna go ahead and do that. You can see that that's all done. Yay. So if I go over here, just to have more room, and do docker image list, you can see that I've got new-js-function up here and it's tagged for GHCR. Oh, I don't want that, actually. I'm gonna go ahead and use Docker Hub just for fun. And so I'm gonna build it again, and now I should have another one which is just Docker Hub. It makes it easier, but you can see that you can tag it any way — it's just a container. And I want to point out the size of your container here. Look at that: 837 KB. It's really quite amazing how small this can be.
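The Dockerfile in play is tiny because a Spin app image only needs the manifest and the compiled module on top of scratch — which is how the image lands under a megabyte. A sketch, with file names assumed from the walkthrough:

```dockerfile
# Sketch of a Dockerfile packaging a Spin app as an OCI image.
# Nothing ships in the image except the manifest and the Wasm module.
FROM scratch
COPY ./spin.toml /spin.toml
COPY ./new_js_function.wasm /new_js_function.wasm
```

Built along the lines of `docker buildx build --provenance=false --platform=wasi/wasm -t <your-id>/new-js-function .`; as noted above, `--provenance=false` keeps BuildKit from adding an attestation manifest that Kubernetes reads as an unknown platform and refuses to pull.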
Yeah, so you don't need to retype this — yeah, you'll skip ahead and you'll just watch this. And it's possible that editing might have provoked the correct dependency sequence. It's possible, yes, absolutely. Okay, so back here to this — no, to this. So you can see the size of the container we just created. Now let's do this. If we go here to running in the container, okay — we built it and now we wanna run it. We're gonna run it as a background item, and we're gonna actually use, not the GHCR image — we could in this particular case, right? — but instead the image we just built, which is — I gotta type my own name correctly — new-js-function, okay? And there it is — ooh, spin binary not found, ah! Okay, here's the interesting thing. Look at this. If we go to the toml — excuse me, no, no, not to the toml. You see right here, we did this. Okay, this is a Docker Desktop and Docker thing. What we're doing is telling Docker: go ahead and use this shim to run this. But this is the spin v1 shim, and we actually don't have v1 loaded, we have v2 loaded, because we're using Spin 2.0.1. And so, in fact, now it's running. How do we know it's running? Well, in theory, we should be able to hit this on 3000, and sure enough, it's running. Well, great — can we do that whole thing that we did over here with hey: -n 5000, -c 5, http://localhost — I always forget the l — :3000. Now watch what happens. Do you remember how fast this ran? It ran in low-single-digit microseconds, right? Look at the difference. Now we're into milliseconds for the fastest, right? We can rerun it. We're still doing pretty well, but we're into milliseconds. We've dropped from 11,000 requests a second to 2,600. That's really quite amazing. So if we do docker ps — you know, that one's going — so docker kill 5902, we're gonna go ahead and kill that. But now let's do spin up, okay? And remember, we just did this one.
We're gonna do the same command — boom. Do it again: incredibly lower. In this case, we ended up at 66,000 instead of whatever we were doing — 6,000, no problem. If we go back here to the container one, we're looking at 2,600. So in an environment that isn't nested in virtualization layers like this one, that difference is really amazing at scale, right? Let me make sure we're plugged in here. That difference is really amazing at scale. So now we're at the very end of our hour. Do people have general questions, or should I try and pull off a Kubernetes installation? Installation. Installation. Libby, you're provoking me — just in case Troels didn't jump on it. Okay, so we're gonna do it. Okay, here we're gonna do it. We've got five minutes to not only build a cluster — okay, let's do it. We're gonna build a cluster and we're going to run — clear. Okay, we go over here. Now we're gonna deploy to Kubernetes, right? So, Kubernetes: we're gonna create and configure the cluster. Now notice, I'm gonna change some things. There's a newer thing here. For this one, you have to have k3d on, but we know that there is a newer, brand-new shim. So we're gonna go ahead and create our cluster. Now watch what happens. Let me create the k3d cluster and then we're gonna go ahead and watch it. Come on. Oh, it's almost there. Okay, now I can do kubectl get po and we'll look at all of them. And in fact, I'm gonna go ahead and do a watch. What's happening now is we have to pull, inside of Kubernetes, all the internal containers that make Kubernetes run. So this is going to be our big time hit here. But let's go back and map out our thing. So, deploy: we're gonna create a runtime class, we're gonna add the shim. And once we do that, right, we can go ahead and import our image to the k3d cluster. And once we do that, this is the only thing — we can go ahead and match the image name, right?
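The runtime class he mentions creating is a standard Kubernetes RuntimeClass that points pods at the containerd Spin shim. A sketch — the name and handler values are assumptions based on the v2 shim discussed earlier:

```yaml
# Sketch of a RuntimeClass wiring pods to the containerd Spin shim.
# The handler value must match the runtime name the shim registers
# with containerd on the node; both values here are assumptions.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin-v2
handler: spin
```

Applied with `kubectl apply -f`, exactly as in the walkthrough; any pod that sets this runtimeClassName is then executed by the shim instead of a regular container runtime.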
And the only difference between a container YAML and a WebAssembly YAML is that runtime class. That's it. There's no other difference, right? So this is the one where we're pulling in brand-new, brand-new images. Oh, we've got it — we've got one running. Let's see if we can pull it off. We're at 7:57. And we're gonna do this. Okay, we've got that. Deploy to the cluster. We're gonna touch the spin runtime YAML while we're doing that. We're here. Okay, and we'll touch spin-runtime.yaml, and we'll add this to the file. And I'll open spin-runtime.yaml. Great thing about me is I come with sound effects — I come with my own beep sounds. And we're gonna go ahead and add that to the cluster at some point, right? But we gotta wait till the cluster is up and running. We're almost running. You can see we're getting our Traefik service load balancers. That's almost there. Two minutes left. Almost, almost — okay, we got Traefik running. The jobs are completing, and we're waiting for the last Traefik job — completed. It's done. We got running, running, running, running, running — great. Fantastic, okay. Now we can do things like this. Get over here — we're in nano with the runtime YAML. And so now we want to install — the cluster image already has the containerd shim in it, and so what we need to do is tell Kubernetes that it's there, so we do kubectl apply -f spin-runtime.yaml — I'm typing as fast as I can, I promise. Okay, we've got that, right? And then we do this: k3d image import — my application is squillace — whoops, squillace — I'm typing as fast as I can — new-js-function. Okay, and it says, okay, I'm importing that. So now this image is in the cluster's cache, right? And then you can touch — you can create a spin-app.yaml. Okay, we did that — whoop, got the star. I've got one more minute. Can I pull it off? touch spin-app.yaml, make the — yeah, okay, image import.
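Since the only delta really is the runtime class, the spin-app YAML is an otherwise ordinary Deployment. A sketch with illustrative names and the image imported above; the command and the exact image tag are assumptions drawn from typical Spin shim examples, not copied from the session:

```yaml
# Sketch: a plain Deployment whose single Wasm-specific line is
# runtimeClassName. Name and image values are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-js-function
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-js-function
  template:
    metadata:
      labels:
        app: new-js-function
    spec:
      runtimeClassName: wasmtime-spin-v2  # the RuntimeClass added earlier
      containers:
        - name: new-js-function
          image: squillace/new-js-function:latest
          command: ["/"]  # common in Spin shim examples; an assumption here
```

Delete the runtimeClassName line and this is indistinguishable from a regular container deployment, which is the point being made.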
Yeah, we already did the image import. So we're gonna do this: nano spin-app.yaml — boom, paste, okay. And then once that's ready, we go ahead and apply the spin app right here. And I'm so close — will it work? Created. watch kubectl get — oh, ContainerCreating, five seconds. Please do not blow up, because what we did is we imported the container — error, ErrImagePull, boom. Oh, that's unfortunate. All right, we almost did it. We almost did it. What I'm gonna do is go ahead and jump to this just to show you that we can do this, no problem, if you're deploying from outside Kubernetes. Let's see, we had wasmtime-spin. Let's see, what was the ingress? Oh, I know what it was — we didn't change the application. Ah, maybe I'll pull it off. Maybe I'll pull it off: nano spin-app.yaml. Okay, let's get down here. This isn't the name of the image — it's squillace/new-js-function. Save, boom, apply the spin app. Okay — unchanged, configured. Apply, watch — and it's running. Oh, thank goodness. And now if we clear and we run — let's do, we've got hey. So this one is at 8081, so now we do 8081. And in theory, in Kubernetes — one minute late — notice the lag time. I wonder if this is WSL or whether it's actually something else. Should be something else. If we get this — I get this — curl, 8081. Yep, it's there. So, in fact, what we're doing is getting throttled by the Kubernetes infrastructure. How many do we get? We got much lower requests per second. You can see that there's something that went on in the cluster, but I'll wrap up with this: we're looking at 15 milliseconds there for the fastest request. Once I get this figured out — you can see it's just chewing through them much more slowly. So we did it, but not perfectly. I'll stop sharing. Thank you very much. And I'm gonna keep working on this — if you're interested, go ahead and ping me on Twitter. I'm ralph underscore — hey, there you go.
I'm ralph_squillace on Twitter and squillace on hachyderm.io for Mastodon. And, I don't know, there's Bluesky and there's email and the whole thing. Thank you, Libby, very much, and thanks, everybody else. Thank you, Ralph, so much. Thank you, everyone, for attending. Again, thanks for bearing with us with the timing, and we will see y'all next year. Have a great rest of 2023, and we will see you for CNCF live webinars in January. Everybody have a great break.