So I've recently gotten into boxing. I don't actually box; I mostly just watch from the comfort of my couch. In boxing, there are weight classes: heavyweight, middleweight, lightweight, and so on. It's exactly how it sounds: a boxer must meet a certain weight to box within a class. And these weight classes remind me of the waves of cloud computing.

First up, you've got your virtual machines. This is your heavyweight class. Being in the heavyweight class in boxing means you are powerful, but if you get knocked down, it's going to take some time for you to get back up. That's basic physics, right? Virtual machines are the OG heavyweight runtime for the cloud. They often take minutes to start, but they contain an entire operating system, from kernel to applications, so you can do a lot with them.

A container is lighter and smaller. In the world of cloud computing, this would be your middleweight class. Middleweight strikes a balance between speed and power. Containers have given us the perfect environment for running a single long-running server. They take seconds, not minutes, to start, and they consume fewer resources than virtual machines.

Now add to that WebAssembly, the third wave of cloud computing. This is your lightweight class. Speed and agility are the name of the game here. Compile your app once directly to the WebAssembly binary format, and use that same binary across multiple architectures and operating systems with no changes. And a WebAssembly app can be cold-started in about half a millisecond. Half a millisecond! That makes its startup speed vastly faster than even containers.

So how do we run WebAssembly on Kubernetes and in cloud-native environments? I'm going to talk to you about SpinKube today, a project that helps you do just that.

This is a SpinApp custom resource in Kubernetes. On line 6 is a reference to an OCI artifact containing a WebAssembly binary. Yes, that's a thing you can do. (A sketch of such a manifest follows below.) Behind this SpinApp custom resource lives the Spin operator. Once I apply my SpinApp to my cluster, we'll see it running here: the Spin operator picks it up and creates the corresponding deployment, pod, and service, and it configures the pod to use a Wasm runtime instead of a container runtime to execute our app. So on the outside you still have your pods and services, same as usual, but on the inside they're actually WebAssembly apps, not containers.

This is that same app. We're going to increase the replica count to 50 and apply it to our cluster. I'm running on a two-node AKS cluster here, and if I check my pods, they're being distributed. Look closely at my two nodes: one is an x86 node, and the other is an Ampere Arm64-based node. And if I want to scale even further, I can just throw that same binary on whatever burstable instance is cheap for me at that time. These are the kinds of things that have been important to the team over at Zeiss Group, and I'd like to invite Kai now to talk to us about their experience in this space. Thank you, Kai.

Thank you, Michelle. Coming from Zeiss, I can relate to physics. As the arc in our logo suggests, we deal with spheres, lenses, optics: from microscopes up to those bulky, cube-like chunks of metal that are our high-NA lithography optics, which you basically need to produce your beloved Apple silicon or NVIDIA GPUs. These are the only optical devices on the whole planet that can do that.
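(For reference, here is the minimal SpinApp manifest sketch mentioned above. It's modeled on the spin-operator's published v1alpha1 examples, not the exact manifest from the demo; the API group can differ between releases, and the image reference here is hypothetical. The image field is the OCI artifact reference Michelle pointed out.)

    apiVersion: core.spinoperator.dev/v1alpha1    # API group may differ by spin-operator release
    kind: SpinApp
    metadata:
      name: hello-spin
    spec:
      # OCI artifact containing the compiled WebAssembly binary
      image: "ghcr.io/example/hello-spin:v1"      # hypothetical image reference
      replicas: 2
      executor: containerd-shim-spin              # execute with the Wasm shim, not a container runtime

Applying this with kubectl apply is all it takes; the operator derives the deployment, pod, and service from it.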
My R&D experts always try to explain this kind of optics to me, but there's no help there; I can only grasp the basics of it. But what I do understand is how to operate such business models on information technology. Many of our business models require that we process information electronically, be it for the pure scale of the information we have to process or for the product specifications attached to it.

In the beginning, money is not the issue when we start such projects. The project risks are high, the commitments are made, and we basically Kubernetes the hell out of the case, even wrap some CNCF goodness on it, like Dapr, KEDA, whatever. As soon as the dust and the business settle, our finance folks catch up with us, which then requires us to move to more of a design-for-cost approach.

Secondly, in the beginning we often have the luxury of designing those services within bounded contexts, applying domain-driven design, really building nice systems. Further down the road, logistical aspects kick in: container runtime sizes, limits on the number of pods, whatever. And soon we are forced to accept that the technical granularity of what we run diverges from the semantic granularity. This is one of the reasons we started looking into WebAssembly in the cloud: to bring those two, the semantic and the technical granularity, closer together.

For that, we generalized one of our many flows, which reflects how we usually do that processing. We get a number of messages, or orders, dropped at our doorstep, and the expectation is that these messages are processed in a given time, answering when and how we will produce or deliver that thing. And this is exactly what we measure: from the arrival of the orders until they land in some buckets.

Now let's see that in action. What we see here are three SpinApps, which handle the load, and their respective deployments. You see quite regular Kubernetes primitives. Let's generate some load. You see the basic shape of the replicas. The load picks up, and the first set of replicas already takes in the first chunk of orders. As soon as the environment scales up, the number of orders that can be processed in the same time slice increases dramatically. First, a distributor function picks up the orders, makes some basic decisions, and then hands them over queues to a second wave of services, which put them into buckets based on some further decisions. What we can see here is how fast the services scale up and pick up the traffic. In the end, the whole run takes around 34 seconds, which, if you follow the posts I've done with other environments, is pretty darn good.

So in the end, it's just physics. By using a SpinApp with a WebAssembly artifact instead of a regular deployment, we can reduce the size of such a Node.js Express app from 400 megabytes to almost two megabytes. And even if we added more logic, that size would not increase dramatically. That allows the app to scale up fast and scale down just as fast, which in turn allows the same resources, with a slight overlap, to be reused by the second or third or fourth wave, whatever wave of processing you have. So we can use the same resources multiple times within such a process, and all that while still applying the tools and environments we already know.
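(To make the scaling behavior concrete: one plain-Kubernetes way to get the ramp-up Kai describes is an ordinary HorizontalPodAutoscaler pointed at the deployment the Spin operator generates. All names below are hypothetical, and a queue-driven flow like this one would more likely scale on queue depth, for example via KEDA, rather than on CPU.)

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: order-distributor
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: order-distributor   # hypothetical: the deployment generated by the Spin operator
      minReplicas: 1
      maxReplicas: 50
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 50

Because each replica is a couple of megabytes rather than hundreds, going from 1 to 50 replicas and back is cheap and fast, which is the effect visible in the demo.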
Again, to conclude: the smaller packaging size lets us pack more services into the same resources, or, on the other hand, use cheaper resources to process the same set of services. We can scale up and down faster and get a higher degree of reusability out of the same resources, all while keeping the tools we know. Here it is again in numbers: just by switching from x86 to Arm, we could reduce our costs by 60 percent; we could use 60 percent cheaper resources for that. And with that, I want to hand over to my brother in hair color, Ralph. Thank you.

Thank you, Kai. What you've just seen and heard from Kai is the result of a collection of open source projects, in both Go and Rust, and of open source foundations, obviously the CNCF and also the Bytecode Alliance, collaborating across the entire world. I want to highlight just some of the important projects, ones you're probably familiar with, that actually make this kind of innovation possible.

First and foremost is Kubernetes. It's easy to think, when we mention WebAssembly, that we're somehow not talking about Kubernetes. We are. It's the maturity and the stability of Kubernetes as a platform that allow us to continue innovating in and around it. The containerd project in the CNCF, for example, has many different shims that let you integrate different kinds of workloads into Kubernetes. Microsoft is very proud of having created the Rust-based containerd project called runwasi, and contributors from Fermyon, Docker, Second State, and many others use this project to bring even more flexibility and scaling agility to Kubernetes without affecting the containers you're already using in your clusters right now. And Zeiss is no different from what many, many others are already doing.

Now, because it's built on runwasi, the WebAssembly containerd shim for Spin allows Kubernetes to scale up and down very quickly, as you just saw, but it also lets you move your WebAssembly workloads from one operating system to another and from one CPU to another, on the fly and without multi-arch builds. And because these shims enable WebAssembly workloads to run in the same pod as containers, you can continue using the container workloads you use right now.

Finally, I want to call out, as I've put up here, the Operator Framework, and Kwasm from Liquid Reply. That operator is used to install the Spin shim easily; it's what Kai and Zeiss used. It doesn't require a new cluster; you don't have to bootstrap something new. And there's a community behind all of this work, behind every single thing we've done.

Most people, though, would really prefer not to search for individual tools. I know a bunch of you are curious to dig into the bits, and you can, because they're open projects. But most people would like to use a single stack of tools; they don't want to find them all, line them up the right way, and configure each one. So, working with the feedback of Zeiss and others, Fermyon, Microsoft, SUSE, and Liquid Reply are proud to announce the creation of SpinKube, which you saw on the slide before: an open-source stack that you can use to streamline the experience of developing, deploying, and operating WebAssembly workloads on Kubernetes alongside your container ones. SpinKube includes a runtime class manager, which handles the installation, upgrade, and uninstallation of the containerd shims, like the runwasi-based shims we're using here.
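(For the curious, here is roughly how an installed shim is surfaced to Kubernetes: a RuntimeClass maps a name that workloads can reference to the containerd handler for the shim. The names here follow the SpinKube quickstart and may differ in your installation.)

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: wasmtime-spin-v2
    handler: spin   # the containerd handler registered for the runwasi-based Spin shim

Once the shim binary is on a node, which is the part Kwasm or the runtime class manager automates, this resource is what lets pods select the Wasm runtime instead of the default container runtime.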
And to run them, we actually have a series of shims; in SpinKube, we're using the runwasi-based shim for the Spin project. And to optimize those workloads, to do what Zeiss was doing, dialing in the density and the scaling agility, we use the Spin operator. Together, this full stack of Kubernetes tools allows everyone here and around the world to get the very best out of their Kubernetes workloads wherever they are.

We'd love you to give SpinKube a try, and the best way to do that is to snap the QR code and tell us what you think. Jump into the project repos, give us feedback, and join up and help us make it even better than it already is. Now, I want to ask Michelle to come back up here for a second. Thank you.

Thank you. And today, we'd like to announce that we have submitted the application to contribute SpinKube to the CNCF. Many of us in the cloud-native community have been so excited about WebAssembly over the last several years. You've got lots of people and lots of projects working together to make this a more accessible technology in our ecosystem, and we hope that you join us in this endeavor of making the power of WebAssembly even more accessible and usable in our space. If you want to learn more, we'll be at the Microsoft booth as well as the Fermyon booth, and we have a talk later today at 2:30 in S3 SO6. Thank you so much.
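(And if you do try it: the piece that wires the Spin operator to the shim is the SpinAppExecutor resource. This sketch follows the SpinKube quickstart as of the announcement; the exact API group depends on the release you install.)

    apiVersion: core.spinoperator.dev/v1alpha1
    kind: SpinAppExecutor
    metadata:
      name: containerd-shim-spin
    spec:
      createDeployment: true                  # let the operator generate the deployment
      deploymentConfig:
        runtimeClassName: wasmtime-spin-v2    # the RuntimeClass sketched earlier

With the executor, the RuntimeClass, and a SpinApp in place, the scale-to-50 demo from the beginning should be reproducible on any cluster where the shim is installed.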