Hi, everyone. Welcome to Cloud Native Live. Today is Tuesday, April 9th, and I am Taylor Thomas, a CNCF ambassador and one of the hosts of Cloud Native Live. In Cloud Native Live, we dive into the code that's behind cloud native, and so every week we bring in a new set of presenters to showcase how to work with cloud native technologies. They'll build things, break things, talk about things, answer your questions. And this week we have several different people here, and I'll let them introduce themselves shortly. As a normal note, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. It basically boils down to: please be respectful of all of your fellow participants and presenters in this conversation. So with that, I'm going to go ahead and hand it off to Luke Wagner, Bailey Hayes, and Liam, a fellow CNCF ambassador, to run this. So Liam, why don't you go ahead and introduce yourself and pass it on to everyone else? Great. So my name is Liam Randall. I'm a longtime cloud native practitioner. I've worked on Kubernetes and Cloud Custodian, I'm one of the creators of CNCF wasmCloud, and I'm currently the CEO of Cosmonic. I've been a CNCF ambassador for a long time, and I serve as chair of WasmCon and Wasm Day. Bailey, I'll pass to you. My name is Bailey Hayes. I have been an end user of WebAssembly for a long time, starting out with some of the early work from Luke Wagner, as early as asm.js back in 2012. But I've recently moved to be a WASI co-chair within the W3C WebAssembly WASI subgroup. I am also the technical steering committee director of the Bytecode Alliance Foundation, which is a nonprofit foundation for building out secure foundations for WebAssembly: so really making scalable, secure implementations of the WebAssembly standard.
And then I also work with Liam as CTO at Cosmonic. Luke? Hello, everyone. I'm Luke Wagner. I work at Fastly in our technology research and incubation group, specifically on WebAssembly standards and evolution. I'm a co-chair of the WebAssembly W3C working group, and I'm one of the co-creators of WebAssembly. Before working at Fastly, I worked at Mozilla for 11 years on the WebAssembly implementation in Firefox. Sorry, I accidentally hit my mute button. That's great. Well, our goal today, if you can't tell already, is to break down all things WebAssembly and to really make sure that everybody has a new and improved understanding of the Cambrian explosion of WebAssembly that we've seen. Depending on some stats, Torsten Volk, an analyst, mentioned that eBPF and WebAssembly were the two most discussed topics in cloud native at the most recent KubeCon in Paris. So I'd love to get started. And for the uninitiated, maybe we could just start with a little bit of the basics: what is WebAssembly? It's been around for almost a decade in some form at this point. What is WebAssembly at its most fundamental level? I'll hit it high and maybe you can hit it low, Luke. At the high level, the simplest definition that I think just about everybody understands is that it's a compilation target that many languages support. You say --target, maybe wasm32-wasip2, and out comes a .wasm file. So that .wasm is sort of like your executable. But to run it, you need a WebAssembly runtime, and there are a whole bunch of those. We're probably running one right now, almost definitely, for this stream, since one is in most of the browsers we have today, but they run server side too. What's the real definition, Luke? No, no, that's great. I think the real definition, or what people give as the real definition, sometimes goes into too much minutiae.
And that's the high-order bit: this is something we can produce from a lot of languages, and then we can run it in a bunch of different places, in a way where where I run my code doesn't determine what languages I can use. I think the other important thing is that it's an official W3C standard, which means it's in an elite set, a small set of languages that run in web browsers, alongside JavaScript, HTML, and CSS. So it has some real staying power to it, and I think that's also what excites a bunch of people. And then also, it's formally defined. So it's defined in, you know, mathematics, and in that context it's been proven sound. And so a lot of folks working on verification and security analysis and a bunch of other tooling are really excited about that. Yeah. So WebAssembly, from a lay perspective: is it incorrect for me to think of WebAssembly as a really tiny virtual machine that runs across browsers, across Linux distributions, and even on embedded devices, if we sort of think of those as the three broad distributions of compute, with the Linux/Unix one obviously including all my mobile devices and things like that? So you mentioned that this isn't really a new idea, that we're running different programming languages in a sandbox. But it's different because this is a W3C standard, and the other three W3C standards here are HTML, CSS, and JavaScript. Now, are there other organizations that are involved in managing and creating these standards and proposing this? How are the tools created? How are they organized? Describe the landscape for us a little bit here. Sure. Within the W3C, there are actually lots of other standards efforts beyond just what we enumerated here, but specifically on the WebAssembly side of the house, the WebAssembly community group is where we focus on enhancements to the core WebAssembly specification.
And then within the WASI subgroup that I'm co-chair of, that's about being able to run WebAssembly really well both in the web and outside of the web, and we're finding different ways to make it possible to expand the APIs that WebAssembly is allowed to call. And so when I talk about what a WebAssembly module is, or a WebAssembly component, in a lot of ways I say it's a bunch of numbers in a trench coat, basically i32s and floats as far as the eye can see. And the reason for that is it's pure compute. To be able to do anything, and to have that sandbox property that Luke was just talking about, you need to have that provided by a host. And coming up with the APIs that hosts can provide in a standard way, that's one of the goals of the WASI subgroup. And so where is this work happening? Obviously, we just mentioned that the W3C is the place where we're working on the standards side of things. The implementation, a lot of that is happening within the Bytecode Alliance Foundation. We have two different runtimes inside the Bytecode Alliance: we've got Wasmtime and WAMR. And then we have a whole bunch of tools that are pretty great for being able to get onboarded working with different languages, depending on your ecosystem, and for finding ways to inspect a module or component. There's a lot there, and I really recommend checking out the Bytecode Alliance. If you're totally unfamiliar with it, it works and plays well with many other CNCF projects. So the Bytecode Alliance is a specialty organization that just focuses on creating some of the core tooling to enable WebAssembly across a huge range of platforms and devices, then, if I heard you correctly, Bailey. It builds the reference implementations and engines that are then adopted. So it sounds like the Bytecode Alliance is very compatible with the broader mission and values of the CNCF, and it makes sense then that we've seen this huge adoption of WebAssembly across projects in the CNCF.
You gave the keynote, Bailey, at CNCF Wasm Day, and I love your slide around "it's already here." For folks who haven't seen it, it was phenomenal. It was the CNCF landscape, except what Bailey had done is highlight all of the projects inside the CNCF that had adopted WebAssembly. So when we think about this Cambrian explosion of WebAssembly across all of cloud native, we have existing projects that have been around since the early days, like Kubernetes, Istio, CRI-O, Envoy, and NGINX, and we have new platforms like the one you work on, CNCF wasmCloud, that are pioneering WebAssembly. Why are these organizations adopting WebAssembly if the CNCF and Kubernetes were originally built around containers? Help me understand this story here. Yeah, you know, I get to talk to a lot of different folks in this space. I think the thing that I'm most excited about is that because WebAssembly is so portable, and it has the sandboxing property, it's effectively the last plug-in model you'll ever need. And so in many of these use cases that we're seeing across the CNCF landscape, it's really fitting into that plug-in model or embedded type use case, where Envoy, for example, uses WebAssembly to extend what it's able to do. And it's nice because I can write in any language that supports WebAssembly as a target and create an extensibility use case like, hey, let's parse the headers in a different way, maybe add a specialty authorization rule. Being able to do that in the language of your choice is really powerful, and not having to support SDKs for every language under the sun is also really valuable to a lot of these CNCF projects. But Luke, you're one of the creators of WebAssembly. I would love to hear some of the conversations you've had about why you're seeing it spread across the CNCF landscape.
Yeah, what's interesting is it shows up in two complementary but different ways, I find. One is plugging into the actual machinery of Kubernetes, saying, I'd like to customize this point, I'd like to customize that point. And I think over time there are going to be a bunch more places where people want to tap in and say, hey, instead of having to bake that into the core, I'm just going to write a Wasm module and plug that in. So that's as part of the machinery of Kubernetes. That's one direction. But the other one is as the workload itself, alongside containers. And I think the use case there is, we want to be able to scale these workloads up and down in response to the traffic they're seeing, ideally scale them all the way to zero when nothing's running. But when something shows up, I want to be able to spin it up, and WebAssembly has really great cold start times. So being able to scale all the way to zero, but then cold start really quickly, that's a huge value add. That allows me to save costs because I'm scaling to zero when I don't need it. Or I'm not having to provision for the max amount I might need under a spiky workload, because I can say, well, I'll just scale up in the case of that spike. So ultimately, cost savings. And then because these sandboxes are so small, we're able to pack a bunch more of them into the same set of hardware. So the density improvement, I think, is another big driver, so ultimately, again, reducing cost. And then, because it's portable across different instruction sets, maybe I want to be able to run it on Arm based on the different cost characteristics of whatever platform I'm running on.
So there are all these different flexibilities, which ultimately reduce cost or improve latency in various situations. And that complements some of the security benefits of the tight sandboxing. By saying, I want to reason about these workloads without having to read all the code or do very ad hoc kinds of filtering, I just want to say: I know what capabilities I gave to this module. It can only access this one particular database, and I know this by construction. So this kind of deny-by-default sandboxing is also, I think, super valuable, and increasingly valuable, for just understanding the security characteristics of my whole very large application. So what I heard is that many of the projects that are in the Cloud Native Computing Foundation, when you look at landscape.cncf.io, are in themselves de facto platforms. And as a platform, one of the common problems that they need to solve is that they need to enable customization by their users. Bailey, you gave a great example around customizing a filter for an HTTP header, or maybe it's an admission controller for Kubernetes, but you want to empower your users to contribute and write their own code. And what I'm hearing is that the driving use case is that these unique properties WebAssembly introduces, and what it solves very well, make it the ultimate choice for running other people's code. Luke, the examples that you gave were that it makes huge progress in solving the cold start problem, so that we can go from no instances running to one very quickly, or from one to a thousand very fast. It's polyglot: it supports multiple languages. And it has security properties such as being run in a sandbox, and I believe there are some new additional security properties around capability-driven security.
And I think one of the things that I would love to really push on is that we've really seen this hockey stick moment in WebAssembly. All new technologies go through this hype cycle, and I think we're past maybe the peak hype for WebAssembly, but now we actually seem to be seeing real adoption. We saw it in customer stories at KubeCon this year, and we are seeing it in real usage and adoption across the product landscape, as we're discussing now. What is it about the standards that is new and recent that has changed? What are those standards? How do they address, Bailey, the friction point that you were discussing earlier? You described WebAssembly as a bunch of numbers in a trench coat. If WebAssembly is just this little CPU and I can put numbers in and get numbers out, that doesn't seem very easy to use. How do I handle things like strings and pictures and data types and all the rich metadata that I would have in a standard API if I want to make this easy for my customers to adopt and use? What are those new standards that are out? I had the honor of hosting that vote on January 24th, so I'll forever remember that date for launching WASI 0.2. It's had many names. I like to say WASI 0.2; some people say WASI Preview 2. But effectively it's the next iteration on top of WASI Preview 1, and it pulls in some really amazing innovation, specifically designed by Luke here, for having a component model and a way to modularize our interfaces, aka those APIs that let you do lots of amazing things, including streams and, as you said, strings. Strings are something that we didn't have basically until now. And what we had before has shown up everywhere: it's been wildly successful, it is part of language toolchains, and it's been stable for a really long time, since basically 2019.
And what I believe we have now, after launching this new iteration, is a way to speed adoption, a way to meet unique use cases with interfaces specific to those use cases, and also a way to make things that we can build on top of each other, a lot of composition. But Luke, I would love to hear you describe the component model and all the work that we put into WASI 0.2 over several years, actually. Yeah, I think there are a couple of ways to come in at it. One of the most practical ones is just factoring out what's otherwise a whole bunch of duplicated effort across everyone who's embedding Wasm today. And y'all know this with wasmCloud, and we know this with our Fastly Compute platform: if you take Wasm as currently standardized and you want to run that Wasm as a platform, you're like, okay, sounds great, all the work's done, it's a standard, right? And it's like, no, actually, there's a ton of work left. You have to answer a whole bunch of ABI questions: all right, how do I make an API and make it visible to the programming languages my users want to use? At the moment, you have to hand roll your own SDKs in every language for your specific platform, and every platform is doing that, and a Wasm module that I can run in one place, those same bits don't run in the other one. The definition of Wasm is portable, but particular Wasm modules are not yet. So we want to factor out that work so we can pool efforts, work on common open source tooling, and with all that focused effort, get this tooling to be really high quality and then upstreamed into the language toolchains.
Like, if I want to be able to take my language, you know, JS, Python, Go, I'd like to just use the upstream tooling to compile some Wasm and say, okay, these are the interfaces I would like to use; maybe these interfaces are present in the platform I'm going to run it on, but the tooling to actually produce code that runs on that platform, I would like that to be totally upstream, maintained by that language community, so it's really high quality and has a really native experience. So at the moment, everyone's doing the best they can kind of working with raw Wasm, but we want to factor out that work, and you could say the component model is that factoring out. It says, all right, let's have an IDL called WIT. This gives really nice, high-level, very easy-to-use interfaces, so I can just kind of say what I mean, and then derive, with automatic binding generation, the bindings to that IDL in different languages, and factor out all that common work. And now designing a platform is like, okay, write some WIT, or take some WIT that's already standardized, pull that in, and augment it with whatever is unique to that platform. So now designing a platform is easier, and you get all these SDKs for free. One of our little taglines is: bring your language, bind to WIT, and now you can run on whatever platform is defined in WIT. And then, in addition to all that cool stuff, I may want to subdivide my application into smaller parts, but maybe that's a whole separate topic we can go into. I think we're going really fast. Let's maybe compare to where we are today in cloud native with containers and the ABI that we deploy to, and talk first about how we're kind of breaking that into components. So let me try to repeat some of what you said from maybe a more lay perspective.
So let's talk about today. If I'm just building, say, an ELF binary for Linux: I have a Go program, I want to compile it down, and I'm eventually going to put this in a container. The ABI I'm building for today would be the Linux system calls, right? There are 530 of them. So that's my ABI: this big monolithic set of POSIX plus a few Linux-specific things that we find out there today. And that is the big monolithic thing that Docker rides on. And what I'm hearing is that with the component model, we've taken that big ABI and turned it into a set of smaller building blocks. So what are these groups of building blocks called? Let's maybe take this one at a time. If I've got a set of related building blocks, how might I describe those? Give me a couple of examples of the building blocks that have been launched, and then we'll talk through it from there. We have a few primitive pieces. The first ones we'll talk about are types. Those are your things like strings and enums, and we have records, which are a lot like structs. So those are your types, and you can define an API with them. If you're familiar with any other IDL, that won't look very surprising to you at all. The next part is interfaces, and those interfaces are what expose the functions that operate on these types. But the way that you make a building block, the way that you bring all of these things together, is with a world. You can think of a world as a way of describing what comes in and what goes out: those are our imports and our exports. And you can take individual worlds and union them together, bring them together. You can also make it so that when I build a thing, it's basically my definition of a world, right? This is everything I know about, in and out. And so those are my building blocks.
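The pieces Bailey lists here, types, records, interfaces, and a world, can be sketched in WIT. This is a made-up example for illustration, not a standardized interface; all names are hypothetical:

```wit
// Hypothetical package, just to show the shape of the building blocks.
package example:greeter;

interface greetings {
  // A record is a lot like a struct.
  record user {
    name: string,
    id: u32,
  }

  // Functions in the interface operate on those types.
  greet: func(who: user) -> string;
}

// A world describes what comes in (imports) and what goes out (exports).
world greeter {
  import greetings;
  export run: func() -> string;
}
```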
And what we're doing within WASI is effectively trying to standardize and have a common set that everybody, or everybody with a specific use case or a specific runtime, should support. And so the first set was WASI CLI. It's very similar to effectively what was in WASI Preview 1. WASI officially stands for the WebAssembly System Interface, but I like to refer to it more as the WebAssembly Standards Interface, because that makes a little bit more sense to me. When we're talking about WASI CLI, it does feel low level; it does feel like a systems interface. If I build my Go application, as in the example you gave before, and maybe I'm targeting a file system or something like that, that's what I'm able to get out of the WASI CLI world. But if I target WASI HTTP, this is actually a higher-level API. It doesn't use sockets; it says, I'm making an incoming request or an outgoing request over HTTP. And that enables the runtime, the WebAssembly host, to be able to say: this is the right way to sandbox this, this is the way that I pass this in and out, this is how I potentially optimize this type of request. And so that's very, very powerful. So those are the two worlds that we standardized, but we expect many more to come, like a WASI embedded world for embedded use cases. We want to see more things like the WASI Cloud world. And Taylor, who popped in at the beginning and kicked off this call, is actually a champion on a few of those proposals within WASI Cloud. Those are basically whatever we would need to be able to run a typical CRUD service, a typical microservice in the cloud landscape: something that needs key-value and blob storage, being able to send event messages, being able to set runtime configuration. All of those are housed within this concept of a wasi-cloud-core world.
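As a rough sketch of what targeting the WASI HTTP world looks like in WIT. The package and world names here follow the published wasi:http package, but version numbers and exact shapes may differ from what a given toolchain ships:

```wit
// Sketch of a world for an HTTP handler component.
package example:my-service;

world my-service {
  // The host calls our exported handler for each incoming request...
  export wasi:http/incoming-handler@0.2.0;
  // ...and we can import the ability to make outbound requests,
  // without ever touching raw sockets.
  import wasi:http/outgoing-handler@0.2.0;
}
```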
Okay, so let's think of this as layers, as sets of abstractions. Typically in Linux, with a container, when I'm building an application, I would target POSIX or the Linux system calls and build an ELF. But what I'm hearing is that now, with this new standard called WASI P2, instead of targeting that monolithic application binary interface, I can target just a subset of Lego blocks, and the examples that you gave were HTTP and sockets. Now, Luke, earlier you connected this to the powerful idea of security, and I think the term for this is capability-driven security. Capabilities are something that I think we've all adopted without really explicitly thinking about it: the idea that my phone says, hey, this app wants to use your camera or microphone, or my browser asks for similar permissions, or, on a recent release of macOS, applications need to be granted access to particular directories. So now, with this capability-driven security, we have taken the whole ABI, and by default, I would assume that I get no security permissions, is that right? I would have to be specifically granted permission to use HTTP or sockets or some particular file, on an application-by-application basis. This sounds incredibly powerful from a security perspective. Now, if I think of these as little Lego blocks, how do these things start going together? You described earlier a language, an IDL, to describe these interfaces. What is that language, and why wouldn't we just use something like OpenAPI, something we already use? Yeah, that's a good question. So, what's this IDL that we're defining as part of the component model? I think that the Lego block here is the interface, because when you define an interface, you're saying: here's a collection of functionality.
Not every component is going to import this functionality, but if you do want it, here it is, and here's a name for it. And when you import, say, HTTP, what actually is HTTP? Well, it's these types and these functions that you can call. So if you import HTTP, here it is, and we give it a name. And now, when I'm defining my platform that actually runs real components, I get to say: what are the actual interfaces that I provide to components running on my platform, and what do I expect them to export? Because that's how I call them. So there are the two sides, the imports and the exports: what do I give to the components, and how do I call the components? And I can customize both sides of it, because the way I call a component, if it's speaking HTTP, is I pass it a request, because that's actually what I have natively, and that's the most optimal thing for me to give you. But if I'm doing, say, gRPC, I may call you with a different interface. If I'm doing raw sockets, I'm going to call you in a different way. If I'm doing database notifications on a change to a key-value store, I'm going to call you in yet a different way. So as a host, I get to take these Lego blocks, which are the interfaces, choose which ones actually make sense on my host, and then define a world. And that world is the whole thing, this one world, that's the contract. When you're building a component, you can target this world, and you'll run on my platform, and on any other platform that includes this world or a bigger world. So, getting to the point Bailey made earlier, worlds compose. I can say: all right, on my platform, I want to run that world, that world, and that world, and just union them up and say, okay, I can run any component targeting any of these worlds. And so it encourages us to make nice small worlds that perfectly fit just the problem we're targeting.
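A hedged sketch of what that union looks like in WIT, using its include mechanism. The platform name is made up, and the two included worlds simply stand in for whichever standardized worlds a host chooses to support:

```wit
// Hypothetical platform world composed from smaller standard worlds.
package example:my-platform;

world platform {
  // include pulls another world's imports and exports into this one,
  // so the host can union up several contracts.
  include wasi:cli/imports@0.2.0;
  include wasi:http/proxy@0.2.0;
}
```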
And then platforms can kind of pick and choose and pull in all the ones that actually make sense on that platform. Some folks, when they talk about this, call it deny by default. That actually wrinkles me a little bit, because the reality is that it's literally just function calls; there's nothing to grant there. If you target this world and the host runtime supports that world, then they're able to link together. And the way that linking happens is basically through that import. Let's say that my component wants to use a file system, and the host decides how to sandbox that. So I say, hey, I'm importing the file system API, help me run it. And what happens is that I get an import, which is basically the interface that Luke was just talking about, and that's how I'm able to instantiate my component and run. And it's not this dance, something super complex, of saying: this is how I grant this, this is how I deny this, all of that type of thing. It really comes at it from first principles with these key primitives that we just walked through. Well, we've really seen this whole idea make sense across enterprises. The move to microservices has been, for some organizations, a decade-long transition now, and organizations, I think, have widely seen benefits from the idea that you can take monolithic APIs and applications and break them down into smaller components. Tom Killalea is a pretty famous technologist, and he published this incredible article in the ACM, probably eight or nine years ago now, called The Hidden Dividends of Microservices, which I remember socializing to help people understand the idea of permissionless innovation. By breaking down these APIs into smaller groups, we can iterate on them independently instead of having to get everything right on the first shot. So it sounds like there are a lot of advantages to this approach.
Now, I love Legos as much as the next person, and I can already think about all of the things that I'd like to build. Am I locked into your set of Lego blocks, or am I allowed to build my own Legos? Good question. Well, the cool thing is that the same IDL we use to define the standard interfaces, which is WIT, is the same IDL you can write to define your own completely custom platform. So in some sense, one of our guiding principles is: try not to privilege WASI and give it extra special powers that anyone else doesn't have when they're defining their own platform. The reason to use a standard interface is simply because there's a bunch of code out there that uses it. So your incentive is to use things that are already there when they make sense and fit your needs. But if they don't, if you have to do your own custom thing, you can write that interface in WIT and get the same kind of high-quality bindings generation that we have for the standard interfaces. So if I think of the broad things that I might typically find in system calls and POSIX (threads, file IO, users and groups, things like that), it sounds like you're really enabling this long tail of standards to be built out here. Now, in the cloud native space, we obviously care a lot about building with cloud native building blocks, and there are a lot of analogous interfaces that become essentially universal components that show up repeatedly. HTTP is probably the canonical example, but things like key-value stores, messaging, and databases show up in application after application. And sometimes I think you would want to be tightly coupled, but most of the time I think you'd just want to be able to plug in the right Lego block, and it wouldn't really matter which Lego block it is. Bailey, you had mentioned something called WASI Cloud, an initiative to start standardizing building blocks around cloud native services. Tell me a little more about that. Certainly.
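A minimal sketch of what building your own Lego block might look like, using the same IDL as the standard interfaces. Everything here is hypothetical, invented purely for illustration:

```wit
// A custom, platform-specific interface defined with the same WIT IDL
// used for the standard ones. All names are made up.
package acme:billing;

interface invoices {
  record invoice {
    id: string,
    amount-cents: u64,
  }

  // Returned when an invoice cannot be fetched.
  enum lookup-error {
    not-found,
    unauthorized,
  }

  lookup: func(id: string) -> result<invoice, lookup-error>;
}

// Components targeting this world get bindings generated for them,
// exactly as they would for a standardized interface.
world billing-plugin {
  import invoices;
}
```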
I would say one of our guiding principles for this is that we want to solve at least the 80% use case. So with key-value systems, there are some pretty obvious APIs that just about all of them implement; we want to capture those. Now, each one of these systems may also introduce something that the others don't have. What I intend to see with WASI Cloud, and with the WASI key-value interface, is that broad coverage and applicability: we're able to take components that target the WASI key-value world and run them across cloud vendors, across varying services, in a way that we can just swap in something like, hey, today I'm using something from Azure, the next day I'm using Redis, the day after that I'm using Vault. All of that is available to you if you're using a standardized interface. And as you said, though, if somebody implements the standard interface, I would also really love to see people build on top of that and say: okay, yes, we implement that world, but we also have a super specialized world for maybe a higher-performance use case, for this type of workload. They should be able to do that and build on top of these types that we've come up with in a more standard way across the industry. So it sounds like, while I would have the option to create an abstraction for, say, a generic key-value store, if I wanted a tight coupling to a specific key-value store, say AWS ElastiCache or Redis or something along those lines, nothing prevents me from doing so when I build and compose my application or build my building blocks, is that right? Exactly, yeah. But it sounds like there are some real advantages to aligning on these standards. If I align on some standards, what would some of those advantages be? If I program to a contract, as opposed to programming to a specific implementation, what are some of the things that I then get for free? Wow, so many.
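An illustrative sketch in the spirit of the wasi-keyvalue proposal Bailey describes; the real interface names and signatures have evolved over time and may differ, so this is only a shape, not the actual standard:

```wit
// Sketch of a key-value contract a host could satisfy with Redis,
// Azure, or an in-memory map: the component doesn't know or care.
package example:keyvalue;

interface store {
  // Opaque handle to a bucket provided by the host.
  resource bucket {
    get: func(key: string) -> result<option<list<u8>>, string>;
    set: func(key: string, value: list<u8>) -> result<_, string>;
    delete: func(key: string) -> result<_, string>;
  }

  open: func(identifier: string) -> result<bucket, string>;
}
```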
I think the one that, you know, I used to work as a platform engineer, and one of the biggest challenges that we had was updating basically our providers, updating all of our dependencies, because I worked in a regulated environment and we basically had to roll and bump the whole world every month as part of our contract. And to do that in the conventional use case that you gave earlier: I have a Go binary, which is exactly my case. I have a bunch of dependencies that I'm pulling in, many different clients that let me connect to different sources. Let's stick with Redis in this case. And there was potentially a CVE, or there was some update that moved it to a new LTS. What it effectively came down to is that we had to just turn this crank every month and make sure we released everything. And it was extremely costly from a CI perspective, but especially from CD. By being able to build a component that only says, hey, I need this interface, I'm able to do things like runtime-update whatever's on the other side of that interface. And it's totally, you know, opaque to the user. They don't know, and they don't need to care, that some underlying version changed, so long as it's able to meet what I need out of this interface version, because these interfaces are versioned. We forgot to mention that: I say exactly what I need, with exactly the version that I need to support, for that set of features. The platform is able to abstract away all of these different types of problems that used to land in the developer's court as part of the churn they had to run through. And what I expect us to see is more secure, more stable software because of this. And it really raises the abstraction for what developers have to code to. Well, this sounds like a markedly different use case than what we were discussing at the top of the call.
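A loose analogy of what programming to a contract buys you, sketched in plain Go; the `KeyValue` interface and `memoryStore` below are invented for this sketch, standing in for the versioned WIT interface and whatever provider (Redis, Azure, and so on) the platform wires in behind it:

```go
package main

import "fmt"

// KeyValue is a hypothetical contract, loosely analogous to the
// WASI key-value interface Bailey describes: application code
// depends on it and never imports a vendor client directly.
type KeyValue interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// memoryStore is a stand-in implementation. In the scenario above
// the platform could swap in Redis, Azure, or anything else that
// satisfies the contract; Greet below would not change.
type memoryStore struct{ data map[string]string }

func newMemoryStore() *memoryStore {
	return &memoryStore{data: map[string]string{}}
}

func (m *memoryStore) Get(key string) (string, bool) {
	v, ok := m.data[key]
	return v, ok
}

func (m *memoryStore) Set(key, value string) { m.data[key] = value }

// Greet depends only on the contract, not on any concrete store,
// so updating the provider never touches this code.
func Greet(kv KeyValue, user string) string {
	kv.Set("last-user", user)
	name, _ := kv.Get("last-user")
	return fmt.Sprintf("hello, %s", name)
}

func main() {
	fmt.Println(Greet(newMemoryStore(), "bailey"))
}
```

The component model makes this same separation language-neutral and enforces it at the platform boundary instead of by convention inside one binary.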
You know, we started the call talking about how many different platform-type projects in cloud native were adopting WebAssembly as a plugin model. But now it sounds like we're talking about this huge opportunity to use WebAssembly to build applications that run on top of platforms. So I would say this is WebAssembly for platform engineering. But let's maybe start at the beginning of the story. Can I use WebAssembly in my case? I mean, is this an us-versus-them story? This is the CNCF, you know, a cloud native call. Where does this start? There are so many ways to run WebAssembly on your Kubernetes, and there are a lot of different integration points. There's a really great working group within the CNCF called the Wasm Working Group. That's a wonderful place to get involved if you're very interested in exactly how we can integrate some of these cloud native technologies with WebAssembly. Obviously, the talk that Liam mentioned points out that it's running everywhere. For at least the way it works with the project that I work on, wasmCloud: CNCF wasmCloud has a wasmCloud operator. And so we deploy basically the WebAssembly host runtime (for us, that's Wasmtime), and then we're able to have a components-native scheduler, effectively, though really it's just a reconciliation loop that integrates really well with Kubernetes and understands how to distribute these component workloads across a Kubernetes cluster. That's the way that we think is best, but there are many other efforts happening in this space. There are past efforts like Krustlet, which was about modifying, basically, the kubelet. Now there are different ways to, say, change the runtime class, effectively at the containerd level. There are plenty of different CNCF projects out there that are doing this.
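For the runtime-class approach Bailey mentions, here is a hedged sketch of what the Kubernetes side can look like; the handler and image names are assumptions that depend on which containerd Wasm shim a cluster actually installs:

```yaml
# Hypothetical manifests for illustration. The handler must match
# the containerd shim configured on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: wasmtime-shim   # assumption: name of the installed shim
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-wasm
spec:
  runtimeClassName: wasm
  containers:
  - name: app
    image: registry.example.com/hello:wasm   # hypothetical OCI image wrapping a .wasm
```

The operator approach wasmCloud takes hides this plumbing behind its own custom resources, but both slot into Kubernetes through its standard extension points.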
WasmEdge, for example, is another runtime that integrates really well with that approach. I find that what platform engineers are looking for is something that makes it very easy and slots into their current workflows. And I think an operator paradigm is exactly that: if you look up how to extend Kubernetes, that's right there for you. And as far as the operator is concerned when they're running this, as in the platform engineer, it feels like any other type of generalized compute workload. Okay, so this sounds like this is not an us-versus-them story. This sounds like a better-together motion: WebAssembly and Kubernetes can work really well together in their use cases. And it sounds to me that the WebAssembly WASI P2 standard really helps to close some of the gap that exists today between developers and their applications, where developers pick up golden templates and are left to deploy and keep their applications up to date. Luke, you were talking earlier about how, if we're programming to these contracts, we could think about keeping these applications up to date automatically. Now, you work for an edge provider. So there are obviously incredible use cases for WebAssembly even beyond Kubernetes. Could we talk a little bit about how WebAssembly is compatible with, but not dependent upon, Kubernetes, and maybe how WebAssembly is a circle that intersects Kubernetes but includes a lot of other domains as well? I'd love to hear more about what you're thinking with the work you're doing at Fastly, for example. Yeah, that's a great point. And I think circles that intersect is a good way to think about it, because there are a bunch of places people are gonna run WebAssembly, right? People are running it inside databases. People are running it inside of streaming services. We're running it at the edge. We're running it in nodes in little data centers that have petabytes of SSDs right next to them, so we can access this massive cache.
That's not a Kubernetes cluster, but we wanna run Wasm there, and we want that Wasm to be able to speak standardized contracts so that, when it makes sense, I can run the same Wasm module there or in a centralized cloud platform. So to ask, what does it take to get there? We need to, first of all, have a language of standardized contracts: that's WIT. And a context in which to standardize them: that's WASI, where we can now define those contracts using WIT. So I think these are the prerequisites to even start to have this portability: let's define these contracts. And of course we need these contracts to be language-neutral. So again, that's part of the requirements that we now have. So yeah, I think this is what will enable us to run the same Wasm modules in Kubernetes, outside it, and in a bunch of other contexts too. So if I'm a developer today, or a platform engineer for that matter, I might have certain standards around what tools and languages I can support. Can you tell me a little bit about these new standards? What does support for various languages look like? Does this just work with Rust? Where are we with language support, maybe across a few different domains? Yeah, so I think language support starts with: you bind to WIT and you say, okay, given these high-level types, what does each of those WIT types look like in my language? So when I see a WIT record, it turns into a Rust struct. When we add streams to WIT, those will turn into Rust streams, and all sorts of nice idiomatic bindings in your language. And so these are bindings that you get from WIT kind of for free: once you build the bindings generator, bindings are generated for any WIT interface you give it. And then some extra hand-written love can come into play, saying, all right, well, let's take some WASI interfaces and integrate them with our standard library.
So let's take whatever the canonical package or standard library is for doing HTTP, and let's implement that on top of WASI HTTP. And likewise for all sorts of other WASI interfaces. So it's one of these things where you start off with callable interfaces, but then people can do extra work to integrate them with all the existing ecosystem packages. And those people doing that work are not having to go really low level and poke at a bunch of i32s and think about linear memory. No, they just get to implement this in terms of high-level bindings. So it makes even this job way easier than it was before. So right below, we've linked a new blog that talks about bringing a WASI 0.2 target to upstream Rust, just like we previously had for WASI Preview 1. Yash published this this morning and gave me a tip-off about it, so I definitely wanted to point folks there. We're seeing the same type of paradigm show up in projects like Golang and TinyGo, which is a specialized compiler for Go, and its SDK. But basically, what I think we'll see is integration of many of these WASI worlds into upstream language toolchains, such that when I'm writing a typical Go application, I target WASI P2, and out comes a .wasm that has the right imports and exports of these standardized interfaces, which I'm now able to run portably basically everywhere: in a browser, and deep inside a database inside a user function. Let me push on. Let's start with maybe static languages. Let's go through a list here. So Rust has always been, I think, very well supported in WebAssembly. What about related languages like C and C++, static languages? How's the support across the board there? We'll just go through a list. We'll start there, then I'm gonna move to maybe languages with runtimes, and then we'll go to some of the more advanced languages like Go and Java and things like that. We'll just go right through the list.
If you imagine sort of a gradient of support, that's where basically 17 out of the top 20 most popular programming languages are. They all fit on there. They all have some level of support for WebAssembly today. There are the ones that are very stable, used in production, fully robust. That is typically your statically typed languages, because it's a lot easier to add this type of target for them. There's also the fact that Rust and WebAssembly had this really wonderful co-evolution starting in 2015. So Rust is one of the languages of choice, because a lot of the concerns around memory safety also apply to the folks that are really excited about sandboxes. But I would say Rust, C, C++, those are in that super robust tier of support. Within the Bytecode Alliance, we have several different SDKs and efforts that are happening across many of these other languages that you would think are harder. We're also working directly, in the cases of C#, Python, and Ruby, on upstreaming support. In the case of Python, we have CPython, the interpreter, compiled to WebAssembly, and that is part of the upstream project, upstream inside CPython. But we also have tooling around it to make it a little bit easier to work with, and that's called componentize-py, and that's inside the Bytecode Alliance. So what projects are there? Wow, I could name so many different languages. Going down that gradient scale, I guess the ones I would highlight: TypeScript and JavaScript, super well supported with jco, the JavaScript component tooling. And then also, for WASI Preview 1 toolchains, that's well supported with things like Javy, which is built with QuickJS. So there are a couple of different options that folks can choose from. jco is the one that I would suggest if you wanna get started with WASI 0.2.
And then moving down that stack, there's Go support that's actively happening, and our goal right now is to have this in the next major Go release, so that folks are able to target WASI P2 directly. Wait a minute, hold on. Let's push on that one in particular. That's probably the most exciting one for cloud native. That's true. I don't have data in front of me right now, but my guess would be that most of cloud native is written in Go, probably 80 or 90% of it. So tell me a little bit more about this developer experience. Let's say I've got some Go program and I'm using os and syscall and time, and I would normally compile that down to an ELF. Tell me, how many steps do I have, Bailey, to get this working on WASI P2? How bleeding edge would you like to be, I guess, would be my first question. Let's describe this near-future world that you're working on now. Because I think we all understand that WebAssembly is an emerging target and it's exciting. But for certain use cases, and we're now in the Go use case, it's still pioneer territory. We talked about some of the settler use cases, where things are very mature. Settlers are moving in, and they're building big communities around JavaScript and Rust and some of these other languages. But here in the Go community, we're talking a little bit more about something that's more pioneering. So the roads are maybe not paved yet; there are still a few things to work out. But tell me about the experience that you're targeting. For the experience, and actually kudos to the Go folks, I guess I should say Gophers here: they are the ones that coined the target that many of these languages are following suit with, which is wasip2. So building for the target, you'll say: I target wasip2, and that's just a target flag. And when I'm coding, the idea is that if I'm using things that are part of the Go standard library, many of those APIs can be represented or virtualized using WASI.
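As a sketch of that build experience at the time of this conversation; treat the exact flags and version requirements as assumptions that vary by toolchain release:

```shell
# Upstream Go gained a WASI Preview 1 target in Go 1.21:
GOOS=wasip1 GOARCH=wasm go build -o app.wasm .

# TinyGo has a wasip2 target that produces a component
# (flag names may differ across TinyGo versions):
tinygo build -target=wasip2 -o app.wasm .

# Run the result in a standalone runtime such as Wasmtime:
wasmtime run app.wasm
```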
And so when I'm writing my code, I'm still writing idiomatic Go, and I'm able to build things the way that I expect. And I would also be able to think about what comes after that first phase of experience, right? Because what everybody's gonna do is take their to-do app, the one they revise every time they try a new tech, or whatever their app of choice is, maybe it's a log book, and lift and shift it to this new technology. So they're gonna just add that target flag. They know they don't have a lot of dependencies or anything too weird. Out comes a .wasm, they run it in a runtime like Wasmtime, and they have that aha moment off of their first curl. The next phase, which I also expect us to have this year in the experience on top of Go, is to be able to say: hey, I expose this functionality, I have these exports. And if you wanna interact with me and compose with me later on, so that I can slot into these other systems, then these are the APIs that I expose, and these are also the APIs that I need. And that's where we can start getting into one of the things that you were hitting on earlier, about being able to define my own interfaces with WIT. And so I expect both of those experiences to be available. Okay, so that sounds pretty exciting. So, Luke, on this gradient that Bailey's talking about, one of the hidden dividends it sounds like we get is the ability to virtualize these interfaces. Tell me a little more about this vision for virtualization underneath the hood here, because it sounds like something that, in the platform engineering context, would be incredibly powerful. If you haven't built large platforms before, there are typically a lot of steps that happen in testing, in CI, and in continuous deployment. What's this virtualization aspect that's here? Now that my application isn't a monolithic app that's tied to POSIX, it's a bunch of sub-Lego-blocks, what can I do? Yeah.
It's a hugely, hugely enabling technique. And it was one of the unique guiding goals when we started WASI: we want WASI to be virtualizable. I should be able to implement a WASI interface not just with host powers, but with other Wasm components. And what that means is, as a platform running a component, I should always be able to say: do I implement this natively, or do I link in other components? And one of the really cool things that we're seeing directly is that, for a single interface, the way I want to implement it in production, where I'm interacting with billing and all sorts of internal observability APIs, is gonna be different than when I wanna locally test it. So what I can do when I develop a new bit of functionality is: I write a WIT interface, and then I write two components, the one I use to implement it in production, and the one I use for local testing, which just targets the CLI world and uses local filesystem stuff. And so this can completely change what the actual developer workflow is for developing new features, making it way easier to add and push out features faster. I now don't have to manually touch the production trusted computing base, nor the big, you know, giant monolithic CLI local-testing thing; I just get to make two components and they can slot in. So it enables whole new ways to think about and build a platform, again, out of components. So we're both running components on the platform and using them to build the platform, which is just a pretty big shift, and one I'm finding pretty cool as we actually work through examples. Yeah, this sounds like a really pivotal abstraction that is much needed in cloud computing. You know, there's a slide that's floating around that talks about how the first big epoch was the virtualization of CPUs, then the virtualization of data centers with public cloud, then the virtualization of operating systems with containers, and, doing air quotes here,
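Sketching Luke's example in WIT, with invented names: a single interface that both a production component and a local-testing component can export, so the platform chooses which one to link in:

```wit
// Hypothetical package for illustration only.
package example:billing@0.1.0;

interface meter {
  record usage {
    tenant: string,
    units: u64,
  }

  log-usage: func(u: usage);
}

// Two interchangeable implementations can target this world:
// a production component that exports `meter` on top of internal
// billing and observability APIs, and a local-testing component
// that exports the same `meter` but only writes to a local file.
world meter-provider {
  export meter;
}
```

The consuming application imports `meter` either way, so swapping the provider never touches application code or the production trusted computing base.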
I know it's not real virtualization here. Then, sharing orchestration targets across clouds with Kubernetes. And the next big abstraction would be the application itself: the libraries we have, the components that we use to build our apps. And the WebAssembly component model seems to be being adopted everywhere for that particular use case. This feels incredibly powerful. I would love to just get a quick status: there's one big language left that we haven't talked about, Java. So I'd love to hear a little bit about not just Java but JVM languages, maybe a quick update on that. Then I'd love to go to questions, and maybe we can wrap on some of the upcoming exciting events around WebAssembly. And I'd love to give the two of you a moment to think about some of the future things that'll happen in WebAssembly over the next year. So maybe let's go to the JVM. So I would say that there is already amazing support for Kotlin, which is a language in that ecosystem. So if you're interested in that more robust support, please check out Kotlin. And if you're interested in the more pioneering side, I think that there are a lot of techniques that we can use. Java is certainly a little bit more complex. As we've seen with the evolution of what GraalVM has been doing, moving things toward ahead-of-time compilation, a lot of that type of work is also very relevant to producing a very efficient WebAssembly output and target. And I expect that a lot of that work can be shared to support it. There have been different efforts from many different organizations. There was a fork of TeaVM that added support, where if I was running with Java, I could use that. But I think in a lot of ways, Java is one of those big languages that I would love to unlock, and I'd partner with anybody in the ecosystem that works on this; we could totally build something. I know it's possible.
It's about, you know, doing the work. I expect VMs like GraalVM, with ahead-of-time compilation, to give us the best leg up in getting that implemented. Got it. Oh, that's wonderful. Let's maybe turn to some of the great questions we've gotten. So Sean Wilson asked whether WebAssembly should be used instead of Lua for DSLs, like in Nmap and Wireshark. Luke, do you maybe want to take that one? Yeah. You know, you don't want to take away any existing language that already works, but I think what you can say is: we could generalize that. We could say, yes, keep running that Lua, but also let me write those plugins in a bunch of other languages. And I think this leans into the strength of Wasm as a plugin model. Yeah, you know, the last plugin model you'll ever need. Saying, you know, what does it mean to actually do that? With all this WIT tooling we've been building, it's kind of easier than ever. You say, all right, what's the world that my plugins fit into? I can define exactly what flows in and what flows out, and it's gonna be different for each system. Are these packets flowing in? Are they requests? Are they something totally different? So write a world, and then you get all these SDKs for free. And then maybe you even take the existing language, the one you started with, and maybe just, under the hood, compile it to a component, and now you've actually reduced your, you know, attack surface and trusted computing base and gotten a huge security boost from your architecture at the same time. So I think a good answer is: all of these. And Sean added even systems like eBPF. I would maybe add, if we can link it, that there is a team at Cisco that's been doing public work on that, Sean. There's a talk, calling OPA from eBPF through Wasm in the kernel, that you may wanna check out.
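A hedged WIT sketch of such a plugin world, with invented names, showing how a tool could pin down exactly what flows in and what flows out:

```wit
// Hypothetical names; a real tool would define its own package.
package example:inspector@0.1.0;

world packet-plugin {
  // What flows in: the host hands the plugin raw packet bytes
  // and asks whether it cares about them.
  export inspect: func(packet: list<u8>) -> bool;

  // What flows out: the only capability the host grants,
  // so the plugin can do nothing else.
  import log: func(message: string);
}
```

Any language with component tooling could then implement this world, which is what makes the "last plugin model" framing plausible.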
So there are a lot of people that see all the same incredible opportunities that we see in the landscape: to take this emerging and fast-moving technical innovation around WebAssembly and apply it to all of these existing domains to make magic happen. It really reminds me of the early days of Kubernetes and containers, when you think about transforming every single industry and every single space. I think also eBPF and Wasm in particular have a great opportunity to tag-team. You know, eBPF running, really injected into certain core parts of the kernel, running really just tiny bits to say: do I even care about this thing? And then when I do care about it, instead of kicking it up all the way to a container, maybe I can kick it up to Wasm for further, more in-depth processing. So I expect we're gonna see a lot more combo use cases. Being able to specifically partner something that's running in Linux kernel space with user-space-type programs, I mean, that's huge, huge. Yeah, I think it's a design pattern we've seen across Bro and Zeek, which I used to work on for a long time. It's a design pattern that we've seen across Suricata, lots of the network monitoring tools, and obviously Cilium has a ton of great use there. And eBPF is obviously limited in that it has to be non-blocking, because it's inline and in the kernel. So the sort of asynchronous processing I could see there is gonna be a very powerful use case. And now Joe CH asked a little bit about garbage collection in WebAssembly. And I think this is directly related to both the Go and the JVM story. Maybe we could get a quick update on how things are with garbage collection. Yeah, well, he specifically was asking about being able to reclaim memory, and there are two different ways to interpret that. One is we're talking about GC.
When WasmGC got to stage 4, it started actively being shipped in browsers and being used by some pretty major properties, giving speedups in those contexts. So that's doing really well, and it will be a good target for the languages that are able to target WasmGC. There are also some related questions about, even in the linear memory space, can we give back pages that we've touched? I think the optimal answer there is that in a short-lived, ephemeral-instance setting, the best GC is the whole memory going away all at once. Very, very inexpensive. People used to call it Unix-style garbage collection: just blast away the process. So that's one way. But there has also been some research into instructions that let us madvise that we don't need certain regions of memory, letting the OS reclaim those physical pages. It's difficult to implement portably across a whole bunch of different platforms. So more research is needed to find an ideal core Wasm proposal to help us reclaim physical pages of memory in a core Wasm setting. But I do love your suggestion, because of WebAssembly's really low cold-start time: you can just restart it. It is a great way to reclaim the memory. That certainly brings a smile to my face. And I know there's a lot of work and innovation in this area. Well, I think I'd love to just highlight some upcoming events, and then we'll give the last word to our two wonderful guests today, Bailey and Luke. Upcoming in the next few months, we have the Linux Foundation's WasmCon. Make sure to check that out. That'll be in Seattle in June. I'm program chair, and we have an incredible list of talks that we're reviewing right now, and we'll get the schedule published in the next couple of weeks. It's sure to be an exciting event.
We had the inaugural kickoff last year, which included talks from Bosch, Siemens, Maersk, Adobe, and lots of other incredible end users of WebAssembly that are out using the technology, excited about it, and sharing their use cases. But maybe we could close with just a quick note each, from Bailey and then from Luke. If there's one thing that you're excited about in the next year in WebAssembly, Bailey, what is it, and what would you like people to know is coming? Wow, one thing is so hard, because I think there are so many different amazing fronts of evolution. But I guess if I had to pick one, I'm most excited about being able to create vendorless and portable cloud native components. And I believe that, because of the composable nature of components, we're gonna be able to have these modular, tiny components that are effectively the best solution for any given problem. And those will be commodities that anybody will be able to run anywhere. And so, yeah, may the best component win. Wow, Luke, what about you? I'm very excited about high-performance, composable concurrency and the work towards WASI 0.3. Okay, that's incredible. Well, thank you, everyone, for listening in today. We really appreciate your time, as always. And Taylor, thank you so much for running today's show. Thank you to all the CNCF staff for helping to make this happen. And thank you to the entire cloud native community, all of the WebAssembly developers, and the people who work on projects and languages. This is an incredible time and a huge opportunity for everyone, everywhere, to start building better together.