Good morning, and thank you for coming to this KubeCon EU 2021 keynote, Cloud Native and WebAssembly: Better Together. In today's keynote, we're going to tell the story of the rise of WebAssembly. We'll give you a high-level overview of what WebAssembly is all about, why you should care, and how WebAssembly and cloud native are better together. I'm Liam Randall, and I'm the co-founder of an open-source framework for microservices called wasmCloud. My co-speaker Ralph is the principal program manager for Azure Core. Together, we collaborate on a number of cloud-native projects leveraging WebAssembly. Today, Ralph and I are going to take a few minutes to explain why cloud native and WebAssembly are simply better together.

For the uninitiated, WebAssembly, abbreviated Wasm, is on the path to becoming as important to the enterprise, and as ubiquitous, as the container. Whether you realize it or not, you're likely already adopting WebAssembly in your environment today. Its security, portability, and embeddability have developers throughout the cloud-native ecosystem adopting both its power and its potential. We'll take you there, but let us first set the stage.

Over the last 30 years, each wave of innovation has built upon and further scaled the previous generation's technology. We can observe that each new epoch of technology has been dominated by two dimensions. The first dimension is its format: how we store our applications, what we deliver, what we run, and what we move around. The second critical dimension is the technology we use to orchestrate those formats: to create, delete, scale, and connect them together. With these two dimensions in mind, let me explain the picture: the parts in green are the responsibility of the application builder, and the parts in blue are the responsibility of the orchestration layer.
On the left, the standalone PC started with an image and was manually orchestrated in the data center. This gave way to the virtual machine, and the virtual machine was orchestrated by the public cloud. And we've segmented our formats even further with the rise of containers and portable, pluggable, multi-cloud orchestration with Kubernetes.

Now let's observe a few general trends and patterns. First, each epoch solved for the emerging challenges of the waves that came before it. Second, in each progressive wave, we further decoupled applications from their specific underlying environments; the format itself embeds fewer assumptions than the format that came before it. Think back to the data center, for example: specific applications were tuned for the specific hardware on which they resided, with specific drivers. Third, a further theme emerges: the underlying layers become increasingly portable through pluggability and extendability. The virtual machine, for example, frees us from a specific size and type of computer, and Kubernetes frees us from a specific size and vendor of cloud. Fourth, each wave continues to free us further from the assumptions built into the previous generation; the venerable container lets developers ship and reproduce environments quickly. Fifth, things today are hardly perfect, and there are abundant assumptions baked into the way we design, develop, and deploy software, even with containers. For example, we're all now living through the painful upheaval in CPU architectures, with x86 and ARM in the data center.

So as we consider what may be next, let us project these trends forward into the next wave. The fourth great epoch of technology will continue the trend of further decoupling from underlying layers, of portability and extendability through pluggability, and of freedom from some of today's assumptions. So what's next? The jobs to be done of the previous epoch shape the next epoch's design.
The need for horizontally scalable compute led to the rise of the public cloud. The need for shippable environments led to the creation of the container. From The Design of Everyday Things, we learn that we can use the clues in our environment to identify what we need next: a well-designed product, something that has real fit, has affordances, intuitive cues that make it obvious how to use it. And the solution to today's broad challenges will be less of a "wow" experience and feel more like a boring "of course."

So here we are, 15 years into the launch of the public cloud, eight years since the dawn of Docker, and seven years since the launch of Kubernetes. Now, the Linux Foundation Edge has helpfully provided us with this wonderful continuum for what the coming world looks like, and at first glance, it feels pretty straightforward. The next epoch of technology, therefore, may account for the world from big to small, from centralized to distributed. This broad summary can hardly show everything; themes of infrastructure are bundled. On the far right, we have not a cloud but a multi-cloud. In the middle, we have not just an edge but a multi-edge. And at the user, we have not just browsers but multi-browsers. And it is clear that the world of tomorrow accelerates an existing challenge: the incredible diversity in CPU architectures.

Now let us take this world view and envision the great epochs of technology laid over it. If we consider the cloud-native landscape as we understand it today, we have so many great fits: Kubernetes on the edge, and increasingly on the edges; containers riding along with them into parts of the consumer edge; service meshes like Envoy; policy engines like OPA; and dozens of other related projects. So where in this view do we see those three tenets emerging? Where is the progressive decoupling? Where is the pluggability?
Where is the freedom from the broad assumptions that we embrace in today's models? As we've accumulated these design criteria, where might this happen: in the cloud, at the edge? It is, in fact, not just one place. WebAssembly builds upon the entire ecosystem, and not just on the ecosystem: WebAssembly is inside the ecosystem, and in some places, it is the ecosystem. It is both compatible with, and freeing from, the assumptions of the previous generation. It transcends our landscape, and WebAssembly is poised to fit everywhere.

Now, if you haven't heard of WebAssembly, let me start with what may seem to be an all-too-familiar promise. WebAssembly began as a portable, polyglot compilation target for the web: an idea, like Java, Silverlight, or Flash, that promises write-once, run-everywhere execution. But Wasm differs. Both open source and free, it's a community-driven W3C standard created in close collaboration with the browser engine vendors and shipping in all major browsers since 2017. WebAssembly is a compilation target, not a programming language. This is a technology that enables developers and organizations to choose their languages and their libraries, and to deliver them with a consistent set of tenets. And like many great technologies designed for the web before it, such as JavaScript, WebAssembly too has found a home outside the browser: on servers, in applications, on devices, and even embedded in other platforms themselves.

And while the future of WebAssembly is simply dazzling, today it already brings much to the table. It's efficient and fast: it runs at near-native speed. It's safe and secure: not just sandboxed, but operating in a deny-by-default mode. It's open and debuggable. It's polyglot: choose your own language. And it's portable, from servers to browsers to embedded devices.
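To make "a compilation target, not a programming language" concrete, here is a minimal sketch in Rust; the function name and build commands are illustrative, and the same idea applies from C, Go, AssemblyScript, and many other languages that target Wasm.

```rust
// An ordinary Rust function, exported with C-style linkage so a Wasm
// module exposes it by name. With the wasm32 toolchain installed, it can
// be built as a module via `cargo build --target wasm32-wasi` (or
// `wasm32-unknown-unknown` for the browser). Nothing in the code itself
// is Wasm-specific.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // The same source also runs natively, which is the point:
    // one codebase, many targets.
    println!("{}", add(2, 3)); // prints 5
}
```

The developer keeps their language, tooling, and libraries; only the compilation target changes.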
So today, we already find WebAssembly runs in, runs on, and runs around cloud native: as applications executing on our big servers, as pluggable engines embedded within our applications, as platforms in their own right on the edge. It's inside our browsers, and yes, it's even inside the IoT. WebAssembly is already showing up in our applications; it's running in and around the cloud-native ecosystem. Certainly, most of us use WebAssembly on a regular basis, whether we realize it or not. Its speed and efficiency are part of the magic behind both Google Earth and Microsoft Flight Simulator, and the next crop of open-source projects are already building with WebAssembly. wasmCloud, for example, a distributed application runtime for the enterprise, and Kubewarden, a flexible and powerful admission controller for Kubernetes, both leverage Wasm. And these are just two of the hundreds of cloud-native applications building on WebAssembly today.

WebAssembly is also being incorporated inside existing CNCF projects. Its key value propositions around speed, efficient size, and security make it an attractive choice as an embedded engine where we might execute code from third parties. Where once you may have turned to Lua or JavaScript, we are now starting to find Wasm; for example, OPA and Envoy both rely on Wasm at their pluggable cores. And as a platform, WebAssembly is not only one of the core technologies leveraged by companies like Shopify and Fastly, but it's also showing up with the Kubernetes Rust kubelet, or Krustlet, as a native payload.

And so it is, as an application, as an embedded engine, as a platform, in the browser, and on the edge, that a new epoch of technology has emerged that decouples us further from the limitations of our previous world view. WebAssembly's security, portability, and decoupling of concerns both transcend and are part of our cloud-native landscape. And cloud native and WebAssembly are better together.
Thanks, Liam. As you just heard Liam outline, WebAssembly is, perhaps surprisingly, already here. You may not have noticed it in Envoy, Istio, or Gloo, in Kubewarden or Krustlet, but it could already be running in your Kubernetes clusters. You may not know that Flight Simulator or Shopify uses it, or that it's the way Fastly does their compute at the edge. It's already cloud native, and like any good technology, it's boring because it works. It's the coming future that might add a little spice to the boredom. So have a seat while we talk for a moment about what's coming, do a demo, and tell you how you can get involved.

WebAssembly is already sandboxed by default, the first step to running untrusted code. But we want to take more steps in that direction. In container land, we continue to apply all the best pod identity practices: we use RBAC, we prevent privileged containers, among other things. These best practices are essentially continuing to find the holes and plug them. WebAssembly is taking the next steps too, and it's happening right now. So if you're interested, you can help out upstream or start experimenting, either with some of the Bytecode Alliance projects or with other runtimes and application models that are using WebAssembly.

For the past couple of years, the members of the Bytecode Alliance have been collaborating on what they have been calling the nanoprocess model, something that brings portability and composability to the security and speed of local processes, but with much lower overhead, so that they can create a radical increase in agility. The objective is secure shareability, the genius of open source and containers, without sacrificing performance or a native developer experience. If containers are the big gears in cloud native, WebAssembly components like this are the smaller gears in between the larger ones, so that the entire mechanism can do more work. Let's click through all of this quickly.
First, sandboxed by default means the module has no access to anything: default no, not default yes. If a runtime wants to offer, say, a get-random function, it offers that function to the module; the module does not otherwise have access. This default stance is a great foundation. Second, the isolated memory model means that modules get their own memory; they have no access to anyone else's memory, and this is great. Third, interface types bring two main features. First, they ensure that complex types can be passed into and received out of modules, and that the languages will do this work for you, not a complex manual chain of tools. Second, they mean that this passing can be highly optimized and does not involve the complex marshalling or serialization costs that occur in other inter-process communication environments. It's very fast, but still enables the typing and cross-language functionality that a cloud-native world needs.

Now, some things are done: we can mostly pass complex types in and out, and that's really great. But some things are still in flight, like handles to files and buffers. These will likely appear very soon, and I absolutely can't wait, because there are things you want to do with files and networks. This is part of the WebAssembly System Interface work, commonly called WASI. WASI does several things, including enabling languages to target a single binary specification, giving developers their current native dev iteration experience, and ensuring that they just create their code the way they do now. Securing memory internally and ensuring that modules can export and import things is great, but code also does things against shared resources like files and networks. And like processes and operating systems, WASI ensures that calls have a fine-grained set of permissions, or capabilities, that they must have in order to use those resources.
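A small sketch of what that capability model feels like to a developer, assuming a Rust program compiled to the wasm32-wasi target (the file name here is a made-up example): the source code is ordinary, and the host decides at run time whether the filesystem capability exists at all.

```rust
use std::fs;
use std::io;

// An ordinary file read. Compiled to wasm32-wasi, this call can only
// succeed if the host runtime pre-opened a directory for the module,
// e.g. `wasmtime run --dir=. app.wasm`. With no pre-open granted, the
// module simply has no filesystem capability and the call returns an
// error: deny by default, enforced by the host, not by the code.
fn read_input(path: &str) -> io::Result<String> {
    fs::read_to_string(path)
}

fn main() {
    match read_input("data.txt") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(e) => eprintln!("no capability or no file: {}", e),
    }
}
```

Note that the program needs no permission-handling logic of its own; the same binary is safe or capable depending entirely on what the host grants.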
It becomes possible to deny code any outbound network permission, for example, and to do it with the application rather than by modifying the entire platform. Let's go to a demo.

First, let's establish the environment, as this is a deliberately simple example to ensure what is happening is clear. You'll note that there are two Krustlet agent nodes here: one Wasmtime node using the current WASI features, and one wasmCloud node, so that you can see that with Krustlet, you can run any WebAssembly runtime for which there is a provider. Let's have a quick look at the YAML for our WebAssembly workload. If we've done things well, it should be immediately understandable. Note two important things. First, the image is a WebAssembly module stored in Azure Container Registry. A project called wasm-to-oci makes this possible, and it works for pretty much any OCI registry. But for Kubernetes users, this is irrelevant: it's just the image you want to run. Second, the node selector is asking for a wasmCloud node, not Linux or Windows or anything else. Operations just got simpler: no multi-arch images, no OS-based images; it's always the same WebAssembly binary, even if you change the Wasm runtime.

Let's start a simple file server just to watch it work. We kubectl apply, ensure the pod is running, and then curl the service to create a new file on the server. And because we're running locally, we can list the directory and see the file. Now let's print it out, first locally and then by curling the file, and finally delete it. This is a simple application, but the experience is the same whether it's simple or very complex. Now, that application was a simple wasmCloud file server hosted on a Krustlet node running in Kubernetes. Not only were no OCI containers involved, we didn't even touch the advanced messaging and capabilities-based security of wasmCloud.
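For readers following along without the video, the manifest in the demo looked roughly like the following sketch. The names, registry path, and label values here are illustrative stand-ins, not the exact ones from the recording; Krustlet-style setups typically advertise a custom architecture label and taint that the pod selects and tolerates.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wasm-fileserver            # illustrative name
spec:
  containers:
    - name: fileserver
      # A WebAssembly module pushed to an OCI registry with wasm-to-oci;
      # the registry path is a placeholder.
      image: myregistry.azurecr.io/fileserver:v1.0.0
  nodeSelector:
    # Schedule onto the node advertising a wasmCloud runtime,
    # not a Linux or Windows node. Label value is illustrative.
    kubernetes.io/arch: wasm32-wasmcloud
  tolerations:
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasmcloud"
      effect: "NoExecute"
```

From there the workflow is the familiar one: `kubectl apply -f` the manifest, wait for the pod to report Running, and `curl` the service.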
Nonetheless, it should be clear that whether you're working inside Kubernetes or building applications that communicate with it, WebAssembly is likely to be a part of your cloud-native future. So what can you do? First, get involved. The Bytecode Alliance is a great place to join the WebAssembly community in one step, but there are user groups and specific audiences like the Linux Foundation Edge as well. Second, pick a runtime and compile to WebAssembly. Tons of languages compile to Wasm and to the emerging WASI component approach; give one of them a try, and when new ones add WASI support, try those as well and provide feedback or submit a PR. Third, run Wasm in your environment. Give it a try, or embed Wasm in your application or platform by using the language bindings for your favorite runtime. As we continue uncoupling our business logic from our tech stacks and move closer and closer to zero-trust code, WebAssembly stands out as an important open-source addition to Kubernetes. And as the WebAssembly component features begin to emerge, its ability to run on multiple architectures and operating systems and in constrained environments means that you can start thinking about how you're going to use it right now. Only you really know how you'll fill in the WebAssembly landscape poster. Thank you very much for watching.