We are no strangers to the next speaker. We're going to hear from Brendan Burns, Corporate Vice President of Azure OSS and Cloud Native at Microsoft. Brendan is a co-founder of the Kubernetes open source project. He is also the author and co-author of several books on Kubernetes and distributed systems. He's an amazing person to learn from. Please check out his videos online. Prior to Microsoft, he worked on Google Web Search, infrastructure, and the Google Cloud Platform. Lately, he's been working on the wasi-http spec. Today, Brendan is going to talk to us about WebAssembly in a containerized world. Please help me welcome Brendan to the stage. All right. Hey, how's it going? I have a clicker, probably, right? They haven't started my timer, which means I get more time, but I only have eight slides, so I promise to be a little bit quick about this. There it goes. Thanks for having me. I feel a little bit like a fish out of water. I've got some teams working on WebAssembly, and last December I decided to do a bit of a deep dive, to get a little more sense for where the community was at and a little more understanding of the technology. Mostly I got involved because I wanted to write code that made web requests. I got involved in figuring out how we could make web requests out of Wasmtime and a few other WebAssembly runtimes, and did some work with other folks on the wasi-http spec, so that was pretty cool. Because I've had a lot of experience over the last decade with containers and containerized technology, I wanted to talk a little bit about how I see this and why I think this is an important moment. A lot of people have compared WebAssembly to containers. I wanted to highlight why I think that comparison is actually a pretty good one, and why it's interesting and important. Hopefully, all of you are pretty familiar with containers.
When I look at containers, this is what I see. First is my stuff, that's what I'm interested in, and then a bunch of OS stuff that I don't care about, which is what the kernel the container runs on is providing. But in fact, when we dig a little bit deeper into that container, what I see inside of the "my stuff" part, AKA the container image, is a whole bunch of other stuff that I don't care about. Or mostly don't care about. It's there for a reason, but I'm not really going to touch it. This could be things like an HTTP client library. It could be the code necessary to talk to the file system. There's all kinds of stuff in there. It's important for my program to operate correctly, but it's not really mine; still, it lives up in user space, which means it needs to be inside that container image. Just to give you an example in terms of image sizes: I compiled all of my Go code into the Caddy web server. It's very easy and quite nice to be able to plug things into Caddy and then compile them into an executable that I can run in a basically empty image. The resulting image size is still 17 megabytes. Pretty small by container image standards, but still relatively hefty. And in fact, the only piece of code that I care about inside of there is the particular HTTP handler that I've written. If I take that HTTP handler and instead compile it into WebAssembly, and teach Caddy how to run WebAssembly, suddenly the code that I actually care about goes down to about two megabytes. Now, why is that important? What can you do with the fact that it's a two-megabyte binary? Well, it suddenly means you open up the option of doing things like push on save. I'm not going to be able to build and push a 17-megabyte container image every single time I save a file.
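To make "the only code I care about" concrete, here is a hypothetical sketch of the kind of tiny Go HTTP handler the talk describes. The names `Greeting` and `Hello` are my own illustration, not the actual handler from the Caddy experiment; the point is that this handful of lines is roughly the unit that would be compiled to a ~2 MB WebAssembly module, while the other 17 MB of a container image is along for the ride.

```go
package main

import (
	"fmt"
	"net/http"
)

// Greeting builds the response body. This is the logic I actually wrote
// and care about; everything else in a typical container image (libc,
// certificates, shells, runtime libraries) is just there to support it.
func Greeting(path string) string {
	return fmt.Sprintf("hello from my handler: %s\n", path)
}

// Hello adapts Greeting to net/http. A handler this small is the sort of
// thing that, compiled to WebAssembly, becomes a megabytes-scale artifact
// you can realistically push on every file save.
func Hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, Greeting(r.URL.Path))
}
```

A host server (Caddy in the talk's experiment) would mount `Hello` at a route and invoke it per request; the handler itself stays oblivious to how it is deployed.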
But it's conceivable that every time I save that Go file, I can recompile a new Wasm module, kick it up to the server, and actually get to a place where I'm doing dynamic development in a compiled language out on my cluster. And I can even do things like load multiple versions of the same code into the server at the same time. Again, because I have that WebAssembly sandbox, I can have three or four different commits active inside of my web server at the same time, and I can choose between them with a header. In fact, this has all been built out as an experiment that I was playing around with lately, and you can do all of this stuff. So the fact that we can shrink the image down to a more developer-friendly size, focused on the actual code that I'm writing, is a huge win for a lot of development use cases. The other thing that I wanted to talk about is that, in addition to the large piece of the container image being stuff I don't care about, if we look down into the OS stuff that I don't care about, what we see is a bunch of APIs that were literally designed in the 1970s. We're in 2023 now, so those APIs were designed 50 years ago, and they were not designed for the world we live in. They were not designed for a cloud world. They were not designed for a distributed systems world. They were designed for single processors and single machines. So what we have is the ability to ask: what are the APIs that I actually care about? And I do care about HTTP. But as we've seen with things like functions or other kinds of platform-as-a-service environments, I might care about something like setting a key in a key-value store. I might care about receiving an event when somebody posts a message to a Kafka queue, right?
Those are the kind of kernel, system-level APIs that I care about as a cloud distributed-systems developer, rather than opening a file and getting a bunch of bytes. And honestly, those old primitives don't make sense in this world. So the other side of the WebAssembly sandbox that is particularly exciting to me is this opportunity to rethink: what is the code that most developers actually want to connect with? What are the kernel calls, the system ABI, that we want for cloud-based code? And so I'm excited about the work we've done on the wasi-http spec. I'm excited about the work we're doing to define this sort of cloud interface with a project called Slight, which is happening inside one of the experimental areas inside of Azure. And I'm really excited that we can bring the ability to run WebAssembly inside of an orchestrator, because of course all of that had to do with writing individual programs. When you want to deploy a distributed system at scale, you need to deploy it across multiple machines, across multiple failure domains, and you want things like load balancers and other kinds of technology to take users' web requests down into the code that you happen to be writing. So you still need orchestration. And so we brought the WASI implementation into the Azure Kubernetes Service, so that you can run orchestrated WebAssembly containers today. Additionally, in order to make it really useful, you still need to write the code. And unfortunately, as you've heard through some of the keynotes, getting all those tools together in the right place, with the right versions, so that it all works correctly, with examples and all that kind of stuff, is still a little bit of a work in progress in this community.
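The "cloud system call" idea, capabilities like a key-value store imported by the guest instead of POSIX file APIs, can be sketched like this. The interface and names below are illustrative assumptions of mine, not the actual Slight or wasi-cloud API; the point is that guest code is written against an abstract capability that the host can bind to anything.

```go
package main

import "strconv"

// KeyValue is a sketch of a cloud-level "system call" surface: instead of
// open()/read() on files, the guest imports a key-value capability.
// This interface is hypothetical, not the real slight/wasi-cloud one.
type KeyValue interface {
	Get(key string) (string, bool)
	Set(key, value string)
}

// memKV is a trivial in-process implementation. A host runtime could
// substitute Redis, Cosmos DB, etc., without the guest code changing.
type memKV struct{ m map[string]string }

func NewMemKV() *memKV { return &memKV{m: map[string]string{}} }

func (kv *memKV) Get(key string) (string, bool) { v, ok := kv.m[key]; return v, ok }
func (kv *memKV) Set(key, value string)         { kv.m[key] = value }

// CountVisit is "guest" code written purely against the capability:
// it increments and returns a per-page visit counter.
func CountVisit(kv KeyValue, page string) int {
	v, _ := kv.Get(page)
	n, _ := strconv.Atoi(v) // a missing key counts as zero
	n++
	kv.Set(page, strconv.Itoa(n))
	return n
}
```

An event subscription (the Kafka example from the talk) would be another such capability: the guest registers a handler and the host decides what transport actually delivers the messages.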
And so there's also a whole bunch of dev containers for Wasm in pretty much your favorite programming language, including Pascal, available to you if you want a really easy quick start to get your feet wet writing some code in WebAssembly for the wasi-http spec. So thank you so much. Enjoy, and hopefully we'll see you again soon.