All right. Hello. You are in "WebAssembly: From the Browser to Everywhere." So if that is not the session you wanted, now is your one moment to run out before it becomes embarrassing.

So I am Matt Butcher. If one is to say what one is best known for, for me it's The Illustrated Children's Guide to Kubernetes. I wrote that along with Karen Chu back when we were at a small startup called Deis. From Deis we went to Microsoft through an acquisition, and then about a year and a half ago we started a brand new company called Fermyon. What we're really doing is trying to build what we see as the next wave of cloud computing, and we think WebAssembly is the key technology to unlock that for us. I'll get into that a little bit more during the presentation. In addition to the Illustrated Children's Guide, I've actually written some serious books. And even though I started coding when I was about 16, I'm an academically trained philosopher. I have a PhD in philosophy. My parents were right: I would never use it. If you want to find me anywhere on social media, I'm Technosophos pretty much everywhere, including Twitter, Bluesky, and Mastodon.

Of course, there's no better place to start a conversation about WebAssembly than with Java and Ruby. I really started my career right around this particular time, so there's a sense in which, for me personally, the stories about Java and Ruby resonate. But they also teach us something about ourselves and how we view technology. So Java gets started as a project called Oak, and the intention behind it was to build a language virtual machine for embedded devices. If you went to any of the early Java events, they would be talking about the Java Ring. I still have no idea. I mean, I was at the conference where they unveiled the Java Ring.
I'm still not entirely sure what we were supposed to do with it, but it was so cool. It was like, ah, you can get a VM on a ring. So Java was originally very narrowly focused on being an embedded language. Now, if you think about where Java is today, about the smallest embedded devices where we tend to see Java are things like Android phones, or maybe set-top boxes, all of which have hundreds, if not thousands, of times the computing power of any device the original Java developers had envisioned, right? And where do we actually see Java today? We see Java in enterprise computing, really the extreme opposite, where you've got large servers and huge amounts of computing power. So there's a sense in which the original vision of Java didn't necessarily play out, and yet Java is a tremendously successful technology.

Let's look at Ruby. I didn't realize until just recently that Ruby actually started around the same time that Java did. In fact, Ruby 1.0 came out a year before Java 1.0. When I say Ruby, how many of you think of Rails? And how many of you think of Chef? I'm glad to see the Rails hands, because that for me was the big game changer, and I think it was for Ruby as well. Ruby was originally conceived to be maybe a little more of an academic language. Matz cited early on that he was inspired more by Lisp and languages like that, but also really oriented toward system-level, shell-style programming. But it found its big breakthrough ten years later in web development and became the web developer's core language for quite a while.

In neither the Java case nor the Ruby case did the core underlying technology really have to change very much. But what we saw was that the technology itself had an application far beyond what the original creators had foreseen and intended.
And we could list off a litany of examples from our industry, right? The web being one of them. It was originally a technology for transmitting physics papers, and I'm guessing that a very small percentage of web traffic today has anything to do with physics papers. So there's a sense in which successful technologies break out of their original design intent and end up flourishing in areas the original creators may never have foreseen. That's the story I want to tell today about WebAssembly: a technology that was created for the browser, and now we're starting to see some really cool applications of it elsewhere.

So I guess the right place to start would be to say, hey, what's a Wasm, right? And I'm going to give you the most boring answer to this question, but it's also the most honest. All WebAssembly is, is a bytecode format. When you think Java, you compile Java source code into Java bytecode and then you execute that bytecode on a language virtual machine. Same with .NET. WebAssembly is a binary format that falls in line with that particular tradition. But as the WebAssembly developers began building this, they made a different set of design assumptions, and those assumptions are what make WebAssembly such an interesting differentiator, opening so many new possibilities. I've highlighted the ones that are my favorites, and we'll hit these over and over again today.

First one: the security model for WebAssembly is designed so that the runtime does not trust the guest code it's executing. When you think about the way Java and .NET work, by default they take the opposite security posture: I as the runtime trust that the developer is providing code that I can and should execute. When they ask for a file, I give them a file. When they ask for a network socket, I give them a network socket. WebAssembly was built for the browser.
You don't want that layer of trust, right? You don't want to download a random binary off the internet, have it say, hey, give me a file and a network socket, and respond, sure, here you go. So the security posture of the WebAssembly sandbox is actually tighter than that of the JavaScript sandbox, because you even want to be able to protect your JavaScript layer and your browser layer from bad-acting binaries in WebAssembly. There are some really cool projects that have exploited this to show that you can build a version of JavaScript that runs in a WebAssembly module, in isolation from the version of JavaScript that's running in the browser. So you can essentially run untrusted code inside of... Anyway, I get excited about this stuff because it's just such a novel way to apply this technology, but you get an idea from examples like that of how strict the sandbox is and what it was intended to do. So the default security posture is a good one.

Second: cross-platform, cross-architecture. Again, when you're building for the browser, you cannot anymore say, like we did back in the days when Ruby and Java were first invented, this website only runs on Internet Explorer on Windows 95 on an Intel architecture. Those days are long gone. And when you've got all the web browser vendors in the room designing a specification like this, they're going to push toward cross-operating-system, cross-architecture. So that was a core feature of WebAssembly's bytecode format.

Another one is startup time and execution time. There are two aspects to this. First of all, none of us likes to wait for anything to load. But second, part of the design goal for WebAssembly was to be able to do some high-performance computing in the browser, the kind of thing JavaScript might be able to execute, but at a slower rate than something like C++ or Rust or a lower-level language.
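Circling back to the security point for a second: that deny-by-default posture can be sketched as a toy capability table in plain Python. This is purely illustrative, not a real WebAssembly host, and the capability names are invented for the example.

```python
class Host:
    """Toy illustration of a deny-by-default sandbox: the guest can only
    call capabilities the host explicitly granted at startup."""

    def __init__(self, granted):
        self._granted = set(granted)
        # The only capabilities this toy host knows how to provide.
        self._impl = {
            "read_file": lambda path: f"<contents of {path}>",
            "open_socket": lambda addr: f"<socket to {addr}>",
        }

    def call(self, capability, arg):
        # Deny by default: anything not granted raises,
        # no matter how politely the guest asks.
        if capability not in self._granted:
            raise PermissionError(f"capability '{capability}' not granted")
        return self._impl[capability](arg)

# A host that granted file reads but no network access.
host = Host(granted=["read_file"])
print(host.call("read_file", "/tmp/config.toml"))  # allowed
try:
    host.call("open_socket", "example.com:443")    # denied
except PermissionError as err:
    print(err)
```

The point of the sketch is the direction of trust: the host enumerates what the guest may do, rather than the guest assuming it can do anything.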
And so speed, both startup time and execution time, was a big focus of the way the instruction format was designed.

And then finally, and this one is really important: they wanted to build a bytecode format that lots and lots of different languages, ideally all languages, would be able to support. Sven Fenig, I think, wrote a really good article, maybe December of last year (I should have put the link in these slides), where he compared the Java bytecode instruction set with the WebAssembly instruction set and showed how, inherently in the bytecode, there's some preference for one type of language design versus another, and that the WebAssembly one is more generic.

So you've got these four key elements that are strengths of WebAssembly, and ultimately it's those elements that have made WebAssembly interesting beyond the browser. I'm going to talk about four different domains where I think WebAssembly is going to flourish. I'm going to lean in a little on one, because I'm particularly passionate about it, but I want to cover all four, and they look like this: we've got the browser, of course, the place it was born, and it really does have a lot to offer there. Then Internet of Things and this sort of embedded space. Then plugins and extensions. And then finally, the one I'll probably spend a little more time on because I love it: cloud.

So let's talk about browsers first. What exactly was the core intuition behind why we needed something like WebAssembly? Was it to defeat JavaScript? No. It was to provide us with some new ways of executing things inside the web browser that we can't currently execute, and then tie those in with JavaScript. Here are a couple of examples. One of them is legacy code. I don't know if this story is mythological or not, but I heard it even while at Microsoft, that there is supposedly, somewhere in Excel in Office 365, a very old library.
It was written sometime in the 80s, nobody totally understands what it does anymore, but it's necessary for the execution of certain kinds of formulas in spreadsheets. And again, this story may be entirely mythological, but the story goes that they just took this C library, compiled it to WebAssembly, and then interfaced it with JavaScript to provide behavior in the browser that is 100% compatible with what the desktop app has done going all the way back to the 80s. Now, mythological or not, this is one of the design intentions of WebAssembly in the browser: we should be able to take legacy code and execute it inside the browser, strategically, where we need it.

We also get high performance. Figma is a great example of this. Figma does vector-based drawing in the browser, and, I don't know, how many of you have used Figma before? It's remarkably fast for the graphical operations it does, and part of the reason why is that a chunk of the code base is actually written in C++ and compiled to WebAssembly. So they can do very fast numerical computation in a language that tends to excel at that kind of thing when you really want to optimize for it, and then use JavaScript to interface with it.

And then the last one, I don't know a good name for this one, but I've seen a number of use cases where people wanted to take an application that was running in some other context and compile it and execute it in the browser, for reasons ranging from behavioral testing to just wanting to do it because it's cool. Somebody compiled the PHP engine, and the PHP behind Drupal, all into one WebAssembly module, and you can execute Drupal inside of your browser. Kind of a neat little project. I played around with it.
I'm not entirely sure what I would do with it in production, but it was really cool to see something that's been around for decades suddenly running entirely in the web browser. So I think those are some of the cases where we'll start to see web browser usage evolve.

IoT is a different story altogether. When I'm thinking IoT, I'm thinking very small devices, not phones, which have computing power several times greater than the last PC I ever bought, right? When you get down to these small embedded things running in light bulbs or electrical sockets, that landscape has been dominated by C and assembly for a very long time. I worked at Nest for a while; I worked at a startup called Revolv that got acquired by Nest. A lot of our code was written in C, and a lot of it we had to tailor individually for each of the different devices we were building. And I remember even at the time lamenting, you know, it's too bad there's not some generic thing we could compile this to and then just write little shims.

So I was poking around about a year and a half ago into what people were doing in IoT, and I stumbled across articles from the likes of the Amazon Prime Video team and the BBC team, and they all said they used WebAssembly. The BBC wrote a great article about why. They have 9,000 different devices that the BBC iPlayer has to work on. And my Nest brain goes, oh my gosh, writing C for 9,000 different devices just sounds totally overwhelming. But the way they do it is they write a very small C shim that interfaces with the hardware layer and has a WebAssembly runtime in it, and then they can port the same player back and forth across those devices with no or very minimal changes. And for an IoT developer, I'm like, oh, that's fantastic.
Right, I wish this had been around ten years ago when I was working on the Nest stuff. So I think there's a lot of potential there, and one of the reasons WebAssembly is a good fit here is that bytecode format. WebAssembly can run in a mode similar to Java, where you JIT-compile it: you load it, speed up execution by compiling as you go, and optimize that way. You can also ahead-of-time compile it: as soon as you know what you're going to run it on, compile it directly to the native format. But in a case like this, when you're dealing with very small devices, being able to run in an interpreted mode actually works out very well, where you can just pull a little chunk in, start to execute it, pull a little chunk in, start to execute it, and do it with very low memory. I was kind of hoping Matt Fisher was going to be in here. He's also from Fermyon, and he experimented with a bunch of very low-powered devices and had some great results from using WebAssembly on them. A very exciting thing for IoT developers.

Then the third model of the four is the plugin model. A friend of mine, Steve Manuel, likes to say that WebAssembly is the last extension mechanism we're ever going to need. And, you know, I know a number of us have done this before: we had something that we built, and we were happy with what it did and how it worked, but then users came and said, hey, I want to extend it to do this thing for me, I want to extend it to do this thing for my customers, and so on. And then you start saying, oh, okay, I need to build a plugin model here. What's choice number one? What scripting language am I going to use? What interpreter am I going to grab off the shelf and wire up here, or am I going to write my own? Is it going to be Lua? Is it going to be JavaScript?
And Steve pointed out, hey, the new question is: can I just drop a WebAssembly runtime in here and expose the core services to the WebAssembly layer, and then your developer can use whatever language they want, whether it's JavaScript or Python or .NET or Rust or whatever? It opens up a lot of possibilities for the developer and reduces the learning curve, and of course that's attractive to us as product builders, because then more people use the product and get a lot out of it. Flight Simulator has a plugin model like this; Shopify has added a plugin model like this on their web offering. I think that's another area where WebAssembly is really going to take off.

But for me, the one I'm most excited about is what WebAssembly is going to do in the cloud world, and part of that is because I am, through and through, a cloud developer. I worked on OpenStack at HP Cloud, worked at Azure for a long time, worked on Kubernetes, worked on Helm. So I'm very familiar with the problem space, with what we've done and how far we've advanced in only seven or eight years, but I can also see that there's another set of problems I really want to solve. And I think the combination of the security, the cross-platform and cross-architecture support, the speed, and the language support, that same group of four we started out with, all come into play to tell a really remarkable story when it comes to cloud.

So let me start with this quote and then walk my way back into the explanation. It was really cool to work at Microsoft and be able to talk to customers, then go out to a conference, you know, KubeCon, Open Source Summit, whatever, then talk to internal teams at Azure, and just collect stories. One of my favorite leading questions has always been: so, what's hard for you, given your job as whatever? Hey, Azure Functions team, what's one of those things that keeps you up at night?
Hey, customer, what's a hard thing that's preventing you from moving to Azure? Those kinds of questions. You learn a lot about what people desire versus what they're given. And one of the stories we heard all the time, whenever we would talk about this, always involved Lambda. It was never Azure Functions, come to think of it. They'd always say, you know, we really love Lambda, I love writing serverless functions, but. And every single one of them, without fail, added a "but" at the end. I mean, you can imagine how your relationships would go if you said, I love you, but. You're immediately saying, oh, so you've got a list. And the story that would unfold after the "but" was very similar. I love writing serverless functions because it's so easy; I dive right into the business logic and start building what I want to build; I like that model of just handling a request and returning a response. But it's slow. But it took me 47 minutes to get through the Lambda getting-started guide. But I can only use these four languages, and there's no hope that they're going to support any more. But, and this one was a huge one, I'm stuck in vendor lock-in: I have to pick one of these serverless providers, and all my code, forever and always, will have to run in that environment, which I don't control.

So we listened to that, going, okay. It's rare that you get feedback like that, where they tell you something they absolutely love and then enumerate a product feature list for you: can you solve these things for me?

And so here we are looking at the world of cloud computing, and there are two big buckets into which cloud computing falls right now. There are virtual machines. A virtual machine runs everything from the kernel and drivers all the way up to my application layer, a big giant chunk of code, and they're kind of slow to start up; they tend to take minutes. But they are powerhouses, right?
Anything you can imagine, you can get done in a virtual machine. You bring your own operating system; even if it's a totally proprietary operating system, you can do it there. Then, next to that, you have containers, and containers provide you a nice packaging for a long-running process. You have to make some design trade-offs relative to the big giant virtual machine layer in order to get there: you have to pick your operating system, you have to choose your architecture. But once you're there, you can pull things off the shelf, package them up in a container, and run them.

So virtual machines are designed to be super powerful and very long-running. Containers are designed to handle long-running processes that last for days, months, quarters, maybe even years. But for the kind of workload people were telling us about, they were saying, hey, we like this idea that I write a handler that takes a request, executes it to completion, returns the result, and shuts down. And that is milliseconds to seconds, maybe up into the minutes range. AWS Lambda currently limits you to 15 minutes; most platforms limit you to five. So you're going, okay: the shape of this workload dictates that it's never going to take more than 15 minutes to execute, and it's always going to fall within these parameters, where the developer is telling us very specifically what they want. Neither of these existing compute runtimes matches up with the model the developer says they want. So what can we do to give them what they want?

So we started looking around and asking what runtimes are out there that might be this third class, this third wave of cloud computing. And WebAssembly very quickly bubbled up to the top because of those same four features. It comes with a great security sandbox model.
That's an absolute prerequisite for the cloud, because when you think about what the cloud is at its essence, it's somebody running your code. They're running untrusted workloads, and they're running them at scale across tens of thousands of customers. So the security model is important. The cross-platform, cross-architecture story is important: ideally, the closer you can get to serverless, the less somebody has to know about the server you're running for them. So if you can go cross-operating-system, cross-architecture, cross-platform, you're solving a problem. And then performance: remember, in the browser, nobody wants to wait. Nobody wants to wait on the serverless side either; in fact, that was one of those "but" stories we heard frequently. And then the no-lock-in piece: they want lots of languages, they want broad support, and they want to be able to run it wherever they want to run it.

So I put this slide up to focus in really quickly on one of those, the performance story, and we'll take that bottom part first. AWS Lambda tends to take about 200 to 500 milliseconds to cold start. What that means is that from the time the request is received by the Lambda framework, it takes 200 milliseconds before it's ready to start executing the code you provided. That's 200 milliseconds you can't do anything about. Okay, do we really care about 200 milliseconds? Yes we do. Why? Because how long does Google wait before they start dinging you for performance in your search results? 100 milliseconds or less. So if it takes me 50 milliseconds to execute my application but 250 milliseconds end-to-end to serve my request and response, then I'm already lowering my own Google search rankings. So I want the high performance, and it's not just because of Google.
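The arithmetic behind that claim is worth spelling out. These are the rough figures from the talk, not measurements:

```python
# Rough end-to-end latency budget for one request (illustrative numbers).
handler_ms = 50             # time my code actually needs
lambda_cold_start_ms = 200  # low end of the 200-500 ms cold-start range
wasm_cold_start_ms = 1      # roughly the sub-millisecond-to-1 ms range cited

lambda_total = lambda_cold_start_ms + handler_ms
wasm_total = wasm_cold_start_ms + handler_ms

print(lambda_total)  # 250 -> cold start is 80% of the request
print(wasm_total)    # 51  -> cold start is effectively noise
```

In other words, when cold start dominates the budget, no amount of optimizing your own handler gets you under the threshold.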
Google did it because they know about human attention: we are so attuned to speed in our environments that at around 100 milliseconds, the first little bits of cognitive response to delay start to kick in. So there's a sense in which this is a very visceral human experience that we want to address. So we started saying, okay, if WebAssembly is going to be a compelling platform, cold start is going to be a big deal. We were very surprised when our first couple of iterations proved out that we could cold start in about 20 milliseconds. Then we realized, oh, we haven't turned on all the optimizations, and when we did, we actually started to get down to sub-millisecond cold start times. In some cases, and we can talk about this in Q&A or later, we could actually get a cold start faster than some native environments could, because of some of the tricks you can play with WebAssembly to pre-optimize for execution. So we started to say, okay, this is real. This is solving the "but" half of "I love serverless functions, but." And we're starting to build something that can answer those questions.

So we created an open source project called Spin. This moment actually is kind of momentous for me, because we introduced this idea one year ago at the last Open Source Summit, so we've got about a year since we first released Spin. It's also kind of cool that the whole conversation I told you about, where we first got together and talked about this, happened on Vancouver Island, not far away from here. So it's really fun to be here, reflect on the last few years, and see that we've begun building a tool that is defining the next generation of serverless cloud computing, where we can really lean in to solving some of the problems we heard about from that first generation. Spin is a very easy piece of software to use.
Basically, you type spin new and it'll scaffold out your program in the language that you select. Then spin build, and it'll compile it to WebAssembly; in between, presumably, you're editing your code a little bit. Then you can do spin up and it'll start up a local instance for you so that you can test it out locally. Once you're ready to put it somewhere else, you can do spin deploy and deploy it to Fermyon Cloud. You can deploy it to an Azure AKS cluster; if you turn the right switches, you can deploy it into Docker Desktop at this point; and more and more Kubernetes-y kinds of things are coming along, along with things like Fermyon Platform, an open source installer where you can stand up a Nomad- and HashiCorp-based stack and deploy it there. So once you've built this, you start having options about where you can deploy these kinds of serverless functions.

I put a slide in here to summarize all the reasons I'm excited, and they're all things I just said, so I'm actually going to skip this slide and summarize it by saying: we set out with the idea that there was a problem we could start solving, and WebAssembly just ended up checking a lot of those boxes. And that was true not just for cloud; it's solving problems in a wide variety of areas, like IoT.

There is, however, one little piece of the story I told that I didn't dive into at all, and that was languages, because that's the sticky bit. WebAssembly will only be successful if we get a lot of programming languages to support it. And it's kind of funny, because when I would say this kind of thing a year ago, the intention was to throw out a warning: hey, we've got to get in touch with our language communities and start moving things along. Now, in like a year and a half, it is a totally different picture.
So we like to use RedMonk's top-20 language rankings and say, okay, that's a good collection of languages. I don't know if you've ever looked at the RedMonk rankings, but they're brilliant: they use Stack Overflow and GitHub to figure out what the most active languages are. So they really found two great queries against active user data sets that we all participate in. And of the top 20 languages, as of March when I last ran through my update of this, 17 have made substantial progress toward WebAssembly. Some of them are already production grade, like Rust and C and C++. Some of them are very, very close to production grade and moving very quickly; .NET is doing some amazing stuff with WebAssembly right now. And some have just taken their first baby steps toward it. Dart still hasn't released, but I know they're working on it; they've been very open about what they're doing, and they've got some really cool stuff in the works. Kotlin is almost ready for their first release and has done a sort of beta release. So we're seeing these big languages all go one after another.

Really, the only three we're not seeing move are these. CSS, which I'm not sure will ever move; there's no reason to compile CSS to a WebAssembly module, it's a style language. Objective-C, which, on the other hand, is very, very surprising to me. I talked to a couple of Objective-C developers and they said, well, yeah, we should be able to do it, it's just LLVM to WebAssembly, but I have yet to find anybody who's even experimented with it, and I will admit I know nothing about Objective-C. And then shell, Unix-style shell: I've been looking for Bash or C-shell-style implementations in WebAssembly, and I have yet to find one.
But those are the three outliers, and the rest are moving, some more slowly than others, but all of them toward supporting WebAssembly as a compile target. Swift, by the way, is very cool. The project doing it is not part of the official upstream Swift project; it's a group from the community that has done it, and they have done an extraordinary job. They're hoping to get it merged into the mainline, and I think that would be one of those great win-win stories, to see that community go that way.

There is one little caveat here, though. Compiling to WebAssembly is one thing, but being able to use that WebAssembly to interface with the host environment is a second set of challenges. As we wrote Spin, we realized that, at the time, we were going to have to have two different tiers of supported languages. We would have some tier-one languages where we focused on building the right set of supporting libraries, so that we could add database support, outbound HTTP, key-value storage, and things like that. Those are languages like Rust, Go, JavaScript, TypeScript, Python, and so on. But we realized we couldn't write that for all of them, and didn't really want to. So we built a second style of runtime that works more like CGI programming from the 90s. Instead of getting the full functions-as-a-service experience, you've got the CGI style, where you read environment variables, read standard in, and write to standard out, and that goes back as the response. We could support a huge swath of WebAssembly languages by doing that, without having to write any particular SDKs for them. Ultimately, though, I'm hoping the way you're feeling as you hear me say that is: yeah, but that's not really the way any of us wants this to be long-term, right?
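To make that CGI-style model concrete before going on, here is a minimal sketch of such a handler in Python. The variable names follow old-school CGI conventions; exactly which variables a given Wasm runtime sets is an assumption, not something from the talk.

```python
import os
import sys

def handle():
    # CGI style: request metadata arrives in environment variables,
    # the request body arrives on standard in, and the response
    # (headers, blank line, body) is written to standard out.
    method = os.environ.get("REQUEST_METHOD", "GET")
    path = os.environ.get("PATH_INFO", "/")
    body = sys.stdin.read()
    sys.stdout.write("Content-Type: text/plain\n\n")
    sys.stdout.write(f"{method} {path}: received {len(body)} bytes\n")
```

Because the whole contract is env vars plus stdin plus stdout, any language that can compile to WebAssembly and do basic I/O can participate, with no SDK at all.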
We want it to be the case that once a language community has done their job and built the WebAssembly compiler and the WebAssembly toolkits, then all of us in WebAssembly land should be able to start using those things to assemble very interesting applications. There is a specification in flight, and I'm very optimistic that we'll see it this year, called the Component Model. Very boring name for a really, really exciting technology. The Component Model essentially makes it possible for one WebAssembly module to say: these are the things I need to import, and these are the things I export. As soon as you can do that, and the host system can read that information, then you can start saying, hey, this thing needs a YAML parser; this WebAssembly module is a YAML parser; we can plug them together, and now we've got a thing that has a YAML parser. And this thing can provide an HTTP service, and this thing over here needs it, so we can plug this in here, and now we've got an application.

This is good because it starts to move us away from having to do everything in a single guest module and a host environment, and we can start building aggregate applications. But the really cool use case is that we can solve one of the most annoying problems in all the language ecosystems we've built, which is that we keep having to re-implement the same exact functionality in every single language, and then it's like, oh, this JSON parser and that JSON parser differ in very, very minor ways. What if WebAssembly, with this import and export system, is a way to start stringing together libraries? Then suddenly you have a scenario where a JavaScript app can pull in three different libraries and, unbeknownst to it or the developer using it, one of them is written in Python, one in Rust, and one in Swift, because they're all just WebAssembly binaries that expose this kind of interface. And that's where the Component Model is really going.
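As a rough sketch of that wiring idea, in plain Python rather than real Wasm: each "component" declares the interfaces it imports and exports, and a linker matches them up, failing loudly when an import has no provider. Interface names like yaml/parse are invented for the example; the real Component Model does this at the binary level with typed interfaces.

```python
class Component:
    """Toy stand-in for a Wasm component: named imports and exports."""
    def __init__(self, name, imports, exports):
        self.name = name
        self.imports = imports  # interface names this component needs
        self.exports = exports  # interface name -> implementation

def link(components):
    # Gather everything any component exports...
    provided = {}
    for c in components:
        provided.update(c.exports)
    # ...then satisfy every import, or fail.
    wired = {}
    for c in components:
        for iface in c.imports:
            if iface not in provided:
                raise RuntimeError(
                    f"{c.name} imports '{iface}' but nothing exports it")
            wired.setdefault(c.name, {})[iface] = provided[iface]
    return wired

yaml_parser = Component("yaml-parser", imports=[],
                        exports={"yaml/parse": lambda text: ("parsed", text)})
app = Component("app", imports=["yaml/parse"],
                exports={"http/handle": lambda req: "200 OK"})

wired = link([yaml_parser, app])
print(wired["app"]["yaml/parse"]("a: 1"))
```

Notice that the app never says which language the YAML parser is written in; it only names the interface it needs, which is exactly the property that lets components written in different languages plug together.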
So it'll solve the SDK problem that Spin is having, but it's also going to open up a brand new, cool, and exciting way to build these applications even more rapidly, and ideally get rid of a lot of the waste of re-implementing the same algorithms over and over again in every single language ecosystem. So I am very excited about this as a big possibility for the languages. That's a bit of a futuristic story, though, so I did want to come back and say: today, WebAssembly in this kind of serverless environment works very, very well. At Fermyon we like to say this has been our guiding user story: if we're successful, then you should be able to go from a blinking cursor to a deployed application in 66 seconds or less. We should streamline the process of doing WebAssembly so that it is that easy. No more 47 minutes to your first Lambda; we want it in 66 seconds. Originally it was two minutes, but the CTO was like, "I can do it in 66 seconds," and it stuck. That's the reality of it today, and our vision is that as the Component Model lands, we'll be able to expand that into a very rich story. And that story is a language-level story, which means that while I'm passionate about the cloud part, it's also going to manifest in the IoT space, in the browser space, in the plugin space, and anywhere else people think of implementing WebAssembly. So from here, if you want to get started and just play around with the open source Spin stuff, you can head over to developer.fermyon.com and give it a try. I write a lot of blog posts about this stuff, really around the broad ecosystem, because as you can tell, I enjoy this. I'm almost as passionate about this as I am about coffee. And if you want to drop into our Discord and chat, I put a link in there too. Or you can stop by our booth and just chat.
A number of engineers, one of whom is standing in the back, are all hanging out at the booth, and we're happy to talk WebAssembly at whatever level you want to talk about it, or to play the Finicky Whiskers cat game, which sort of illustrates how the system works. And I like including this part at the bottom. We decided it was important for us as a company to create an open source pledge that says: when we say we're releasing Spin as open source, what we mean is that we're releasing Spin as open source, and it's always going to be open source. It's important to me personally to point out that this is something we take very, very seriously. Especially as a startup that is also trying to figure out good ways to pay paychecks, it's important to be able to say: look, we're not going to pull any kind of bait and switch. So that's an important one for me to call out, especially at a place like this. With that, I am happy to answer questions. I know we're kind of pushing time, but I think we're okay for a few questions. Five minutes, five minutes, all right. And there's a microphone right there if you want to come up and grab it. Thanks for a great talk. So, I'm a backend developer, and I've always hated JavaScript, so I'm looking for something like the way Swift made it so you didn't need to learn Objective-C; now it finally looks like you don't need to learn that other stuff either. But what I'd like to understand better is the generic runtime you explained. The whole pitch, the multi-language story, sounds a lot like .NET. So how is this different from .NET, for instance? What makes web apps, IoT apps, and plugins different? What needs to evolve? What is missing in this ecosystem compared to JVM or .NET deployments on the backend? Is this thing going to work on the full stack, running from backend to frontend?
And can you talk a little about what the Fermyon platform does, because you didn't touch on that much, to simplify this deployment across the full stack? Yeah, yeah, that's excellent. You said I had 35 minutes to answer? No. Let's start with the .NET one, right? An interesting way to turn that question, because it's such a good question, is to ask: why is the .NET team looking into ways to compile .NET applications to WebAssembly? They already built an amazing language runtime. And there are two real answers. One is that I think they like the way it works, and they like the possibility of extending their own ecosystem for free by being able to pull in Rust developers and things like that. But the other is that the security posture is different in kind from what they do. We're all trying to figure out how to do cloud better and cheaper. For the .NET team, being able to strip out big layers of protection at the virtual machine layer and replace them with small layers like the WebAssembly runtime means that, once we're talking about scale, you can take massive pre-allocated pieces of computing power and shrink them down without giving up much in the way of functionality. Second part of that question, though: what's missing? Well, if we've got a security model like that, some stuff had better be missing, because there are things you can do today that you shouldn't be able to do. Right now, WebAssembly is building out a capability-based security model in which features like access to the file system are turned off by default. The name of this model is WASI, the WebAssembly System Interface. That model is being built out a little bit piecemeal, and it's a little POSIX-oriented. So far they've added file system access, environment variable access, clock time, and a random number generator: essentially the four things you really cannot do without. But the specification is evolving.
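As a loose illustration of what "off by default" means in a capability-based model, nothing privileged works unless the operator explicitly granted it. This is a toy host in plain Python, not the real WASI API, and the capability names are made up for the example:

```python
class CapabilityError(Exception):
    """Raised when guest code uses a capability the operator never granted."""

class Host:
    # A toy capability-based host: every privileged operation is denied
    # unless it was explicitly granted at startup. The default is "no".
    def __init__(self, granted):
        self.granted = set(granted)

    def require(self, capability: str) -> None:
        if capability not in self.granted:
            raise CapabilityError(f"capability not granted: {capability}")

    def read_file(self, path: str) -> str:
        self.require("fs:read")      # hypothetical capability name
        with open(path) as f:
            return f.read()

    def open_socket(self, addr: str):
        self.require("net:socket")   # hypothetical capability name
        raise NotImplementedError("toy example: no real networking")
```

The operator, not the developer, decides what goes in the granted set, which is the posture difference from a classic language VM where the guest inherits most of the process's powers.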
So one thing you probably noticed missing from that list is network access, right? The modern cloud application has a lot to do with network access. For us in Spin, we basically shimmed in protocol-level network access: HTTP, relational databases, and things like that. The next version of WASI, which is due out in a couple of months, will include network access. So we'll start to see an increasing set of features, but they're all gated by default. If you ask for a socket server, it's going to say no unless the person operating the platform said you can run a socket server. The last part of that question, if I'm understanding it right, is what's going to differentiate the kinds of applications we can build here. Something happened just in the last few weeks that I'm very excited about, and it really happened with Fermyon, Deno, and Vercel starting to break away a little from the way other people have been doing serverless. At KubeCon we released built-in key-value storage. The idea was that without ever setting up or configuring any part of a database, no username, no password, no connection string, no pool to manage, you could, within your code, just start doing gets and sets and lists and deletes in key-value storage. When you were running locally in Spin, it was just accessing things locally in Spin. When you deployed to Fermyon Cloud, it was using Fermyon Cloud's big distributed key-value storage system. But you as the developer didn't have to make any operational decisions about that at all, other than saying: hey, I'd like some key-value storage, please. A week later, Deno released something very similar for their JavaScript-only platform. And a week after that, Vercel released the same thing. So when three companies do it, you start asking: wait, what's the intuition behind this?
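The developer-facing surface of that idea is roughly the following. This is a sketch, not the real Spin SDK: the class and function names are invented, and the dict-backed store stands in for whatever the platform wires up behind the scenes, local or distributed.

```python
class KeyValueStore:
    # A stand-in for the platform-provided store. Locally this might be
    # an in-memory map or a file; in the cloud, a distributed store.
    # Either way the developer sees the same surface: no connection
    # strings, no pools, just gets, sets, deletes, and lists.
    def __init__(self):
        self._data = {}

    def set(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str):
        return self._data.get(key)  # None when the key is absent

    def delete(self, key: str) -> None:
        self._data.pop(key, None)

    def list(self) -> list:
        return sorted(self._data)

def handle(store: KeyValueStore) -> str:
    # Application code just uses the store; where the data actually
    # lives is an operator decision, not a developer decision.
    hits = int(store.get("hits") or b"0") + 1
    store.set("hits", str(hits).encode())
    return f"visit number {hits}"
```

The point of the pattern is that `handle` contains zero operational configuration, so the same code runs unchanged against the local store and the cloud one.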
Because nobody had been doing this for the last 15 or 20 years, and suddenly three companies did it in a month. Well, I think the story we're seeing is that serverless is making a transition to a kind of serverless V2, in which we're really learning how to accommodate two different personas. You've got the operator over here, who wants to be able to control the platform: deploy to AKS, deploy to Fermyon Cloud, deploy wherever, right? They want to make those decisions, and they don't want to involve the developers in that decision making, because that causes friction. The developer side is the opposite. The developer wants to get the job done, using tools we're comfortable with. We don't want to learn Kubernetes. We don't want to write 3,000 lines of YAML manifests. I wrote Helm; I very much apologize for that. I'm being a little sarcastic there, right? The idea I had then was that developers would love to write lots of YAML, because then they could describe the infrastructure they wanted. Well, it turns out: no. We don't want to know anything about the infrastructure. We just want to tell you how the code should run, and something else will figure out how to actually make it run. So in that case, my apologies, I suppose, are sincere. That's what we're starting to see evolve in this next generation of serverless, and it's really just now, in the last month, starting to play out. So I'm really excited about that, and I think what we're going to see is the increasing ease of integrating data services directly into the code layer, keeping ops in ops land, and resolving that friction of going back and forth between dev and ops to get something running and then operate it on day two. So I'm very, very excited about that. Did I miss any part of that question? Do I have time for... okay. Maybe one more, all right.
What kind of trade-offs are you making for your serverless pre-optimization? When I hear that, I think of some sort of warm caching, and I'm wondering what trade-offs you're making there. Okay, so the question is really about what the startup of a serverless function looks like here. So we compiled to WebAssembly; WebAssembly is a bytecode-based format. We already talked about how you can run it in an interpreted mode, a JIT mode, or an AOT mode. The fastest is going to be the ahead-of-time compile. When you're running on your local system, ahead-of-time compilation is probably over-optimization; there you really care about fast compile times and just being able to run it. But once you're deploying and running in production, the faster you can get it going, the better. So in Fermyon Cloud, we tend to AOT-compile things on first execution, and from then on you're always getting native speed or even faster. The "faster" part, I think, is what you were keying in on: if we're going to do it faster, how do we get faster than native? There's this really cool project out of the Bytecode Alliance, which is one of the standards organizations for WebAssembly, called Wizer, W-I-Z-E-R. The idea is that in a scripting runtime or a .NET-style runtime, the first step of initialization is to load a lot of external sources and get them ready for execution. Every single time I execute an identical set of scripts, that init step is the same, and only then do I execute userland code. So I take a performance hit for essentially repeating the same operation each time I start up a Python script or a .NET app. Wizer is a WebAssembly tool that takes a module, initializes it, and then freezes it out as a pre-initialized WebAssembly module, which basically eliminates a lot of that boilerplate. It is super cool. It felt like magic the first time I ran it.
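The shape of the trick, illustrated conceptually in Python (Wizer itself operates on WebAssembly modules, not Python, and the function names here are invented), is to pay the initialization cost once, snapshot the resulting state, and start every later run from the snapshot instead of re-initializing:

```python
import pickle

def expensive_init() -> dict:
    # Stands in for runtime startup: loading sources, importing
    # libraries, building tables. Identical on every cold start.
    return {"routes": {f"/page/{i}": f"handler_{i}" for i in range(1000)}}

def run(state: dict, path: str) -> str:
    # Userland code: the only part that differs between requests.
    return state["routes"].get(path, "404")

# Wizer-style pre-initialization: initialize once, freeze the state,
# and let every subsequent cold start begin from the frozen image.
snapshot = pickle.dumps(expensive_init())

def run_from_snapshot(path: str) -> str:
    state = pickle.loads(snapshot)  # restore pre-initialized state
    return run(state, path)
```

In the real tool the snapshot is itself a valid WebAssembly module, so every runtime that can run the original module can run the pre-initialized one with no extra machinery.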
And that's one way you can speed it up. As for the downsides: as soon as you AOT-compile, you're really locking into an architecture. That's another reason why, when we did it on Fermyon Cloud, we said: okay, you don't AOT-compile until you know the exact node it's going to execute on. At that point you know the operating system and the system architecture, and neither of those is going to change, so we can AOT-compile. If you prematurely AOT-compile, you're essentially taking the cross-platform, cross-architecture story and saying: yeah, I don't really care about it; this is now an ELF binary, or this is now a Windows EXE. So that's one of the trade-offs you have to make in performance: you have to figure out when the right moment is to start layering on those optimizations. Did that get at what you were after? Okay, thanks. All right, well, I know we're now officially out of time. Thank you very much; I very much enjoyed it. Take care, and have a great conference.