All right, so let's get started. The last in-person meeting I attended in North America was the WebAssembly Summit on the Google campus, and this is my first post-pandemic, in-person conference in the US. I went to Asia in between and attended a bunch of conferences, but this is the first one back here, so I'm glad to be back and to finally see everyone face to face.

So let's dive right into what we're talking about. The first question is why: why do we need to manage WebAssembly with Kubernetes tools? This is a picture I like to use; I didn't draw it, the IBM OpenWhisk team did. It shows the evolution of cloud, or server-side, technologies. It starts from bare metal. I still remember the days when data centers had your own dedicated servers: you did an iris scan or a fingerprint scan to get in, and you could point to exactly where your software was running. That level of the stack uses hardware to isolate software. Then virtual machines came along and that became the cloud, then came containers, and now we are in the new age of deploying functions. As Ralph and Liam said earlier today, this represents a new level of abstraction: the separation of infrastructure code from business logic code, which is something people on the server side, on the back end, have always strived for. Having the infrastructure completely abstracted away so that developers only write functions is a major step forward, and that is the context of my talk.

We even have this mask that says it: serverless — we want to make servers great again. I have it here, and if you want one, I have quite a few; everyone wants one once they see it. The reason we want to do serverless is to make server-side, or back-end, programming easier. However, there is still a dark cloud on the horizon, so to speak. Today, most serverless or function-as-a-service functions are implemented using microVMs or Docker-like containers, so you are essentially running the function inside an application container. Just think about the work you have to go through to run a function. The function is 10 to 20 lines of JavaScript code, but you have to start Docker or the microVM, boot an operating system inside it, start Node.js, run those 10 lines of code, and then shut it all down. That is not a very effective way of using system resources.

Here are two graphs; again, they are not mine, they come from people who did the measurements. When Datadog presented a graph saying that half of Lambda functions finish in less than 800 milliseconds, my reaction was: really? Half of Lambda functions take more than 800 milliseconds? That seems very long; it's nowhere near zero seconds. If you are into the JAMstack application paradigm, it matters even more: each web page might hit 10 or 20 different microservices on the back end, and if each one takes close to a second, that is a very long time. So even with what we call mega clouds and supersized data centers, we are still facing problems like cold start, and that spells trouble for making the serverless paradigm available everywhere, on the network edge and on edge devices. Even in today's mega data centers, we have problems.
So what we want is a lighter application container: something significantly smaller and faster than Docker-like application containers, so we can run these lightweight functions without going through the whole process of starting Docker, an operating system, and Node.js. Maybe the answer is WebAssembly. That's why we are here; that's why we have a Cloud Native Wasm Day. Could WebAssembly become a container? There has been talk on the internet that WebAssembly is going to replace Docker or become the next Docker. It probably won't, because WebAssembly today, and maybe for a long time, won't provide the same kind of developer experience Docker gives you, which is a full operating system where you can do whatever you want. What we think is more likely to happen is WebAssembly running side by side with Docker. You would have, for lack of a better word, a service mesh or something like it, managed by Kubernetes or some other framework, that can start Docker for some workloads and WebAssembly for others, depending on the kind of workload and where you want to deploy it. So this is the main focus of the talk: use WebAssembly itself as a container, not to replace Docker, but to go where Docker cannot go, or is too heavy to go.

Now, a shameless plug for the project I'm involved in. It's called WasmEdge, a WebAssembly runtime optimized for cloud-native applications, and I want to talk a little about what makes WasmEdge a good fit for microservices, or as a serverless container. One of its key features is LLVM-based AOT compilation. I won't say it's the fastest, because different workloads and different VMs have different characteristics, but in a lot of benchmarks we come out on top, so it's one of the fastest WebAssembly VMs on the market with AOT compilation. We also do custom extensions, because when you want to run something as a microservice, you need networking capabilities inside the VM: network socket support is a must-have, and asynchronous polling is a must-have, even though WebAssembly doesn't have multithreading yet. Neither does JavaScript, and as we've learned from Node.js, asynchronous support is appropriate for a lot of use cases. So we have asynchronous polling, plus things like AI inference (for example, calls into TensorFlow), database connectors, and things of that nature.

Another thing we put a lot of focus on is first-class JavaScript support. I think a consensus is emerging from multiple talks today, including the Shopify talk: compile a JavaScript runtime into WebAssembly and then run JavaScript inside that runtime. People keep asking why. Isn't JavaScript already running in a runtime designed for it, Node.js and V8? Yes, you can run it in V8, but V8 was never designed as a container. You don't run V8 naked; you run V8 inside Docker. So the role of WebAssembly is this: WebAssembly plus a JavaScript runtime replaces Docker plus an operating system plus Node.js plus V8.
And by having a lightweight JavaScript implementation — the work we have done so far is based on QuickJS, because QuickJS is an easy-to-understand JavaScript implementation — we can provide ways for WebAssembly to supplement JavaScript. That means we can have JavaScript APIs that are implemented in Rust. In a JavaScript application you may want to do something like image processing, but instead of doing it in interpreted JavaScript, you could have a Rust function on the back end provide that functionality, pretty much the same way you use C++ functions to extend V8. So WasmEdge provides a lot of that JavaScript support; we'll give you a link later in the talk.

Then there are the management features, which are the focus of the remainder of the talk. We made WasmEdge an OCI-compliant runtime, meaning it can be managed by systemd and cgroups: at the process level, you can allocate resources to it. There was also an excellent question earlier today about whether we can have more finely-grained resource allocation for WebAssembly runtimes. Because WebAssembly runtimes are lighter, you may not want to allocate resources only at the process level; you may want to allocate them at the thread level. This is a page we borrowed from the blockchain people, what they call gas metering. Essentially, you give each WebAssembly opcode or instruction a gas value, and when you start the VM you allocate, say, a million gas. The VM then accounts for how many instructions it has executed, and once it has exhausted all the gas, it stops. You can even charge people based on gas: they can pre-buy gas as credit and then use it. We have this gas support built into our C API, Go API, and Rust API. I believe Wasmtime also has it; if you look into their Rust API, they have something very similar, though I forget the name — they don't call it gas, but it's the same concept. (A rough sketch of the idea follows below.) So those are some of the unique features that, I would say, make WasmEdge an appropriate VM for cloud-native applications.
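To make the gas idea concrete, here is a tiny self-contained sketch in Rust. This is not the WasmEdge (or Wasmtime) API; the type and method names are invented purely for illustration. The point is just that every instruction is charged against a budget before it executes, and the VM traps once the budget is exhausted.

```rust
/// A toy gas meter. Real runtimes expose this through their own configuration
/// APIs (WasmEdge's C/Go/Rust APIs; Wasmtime has a similar mechanism); the
/// names here are made up for illustration.
struct GasMeter {
    remaining: u64,
}

#[derive(Debug)]
enum Trap {
    OutOfGas,
}

impl GasMeter {
    fn new(limit: u64) -> Self {
        GasMeter { remaining: limit }
    }

    /// Charge the cost of one instruction; trap when the budget runs out.
    fn charge(&mut self, cost: u64) -> Result<(), Trap> {
        if self.remaining < cost {
            return Err(Trap::OutOfGas);
        }
        self.remaining -= cost;
        Ok(())
    }
}

fn main() {
    // Start the VM with, say, one million gas.
    let mut meter = GasMeter::new(1_000_000);
    let mut executed: u64 = 0;

    // Stand-in for the interpreter loop: pretend every opcode costs 1 gas.
    // A real runtime would look the cost up in a per-opcode cost table.
    while meter.charge(1).is_ok() {
        executed += 1;
    }

    println!("VM stopped after executing {} instructions", executed);
}
```

Because the remaining budget is just a counter, a platform can also read it back after execution and bill callers for exactly what they consumed, which is the pre-paid credit model mentioned above.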
Well, I skipped the use cases, so let's go right to how we make a WebAssembly workload run side by side with Docker workloads in a Kubernetes cluster. We have three approaches, and I'll mostly focus on number one, because that is the approach we took. Of the other two, one you have heard about from Ant Group: they have a WebAssembly runtime that, the last time they told me, manages 500,000 machines in their data center, and they use a containerd shim. The other is the excellent work Microsoft has done with Krustlet. The way we did it is to make our entire runtime OCI compliant. So what is an OCI-compliant runtime? I think all of you are familiar with this diagram: you have Docker and Kubernetes at the top, then the container runtime interface, CRI, and underneath that you have different runtimes.

It all comes down to a couple of low-level container runtimes, typically runc or crun, and those binaries are what actually start the container. This approach is not unique to us; a lot of people have done it. There's runc, the Go-based container runtime (the name means "run container"), and crun, the C-based container runtime. There's rune, an SGX-based container runtime for trusted execution environments. There's the Kata runtime, there's gVisor's runsc, and a bunch of others. The community has kept innovating on these runtimes so that some of them run different kinds of workloads — some application containers, some not — and run them in different places.

The work we have done is to fork crun. We are trying to merge it upstream, and that is still in progress, but the work on our side is mostly done. We call the tool crunw, and there's a GitHub link on the slide. It is based on crun, but it has logic in it to detect whether it is being asked to run a Docker container or a WebAssembly image; if it is a WebAssembly image, it uses the OCI interface to load the WebAssembly image and run it (there's a conceptual sketch of this dispatch a little further below). So crunw means "based on crun, but runs WebAssembly". Like I said, it takes advantage of the extensible architecture of CRI-O and Kubernetes, which are both set up to let you add your own custom runtimes. So instead of using crun, we replace the crun binary with our build. Because it is a drop-in crun replacement, it runs Docker containers just like crun does, and it also runs WebAssembly, which lets us run Docker and WebAssembly side by side in the same cluster environment. On top of that, you can store WebAssembly files directly in Docker Hub, and all the Docker semantics still work: you can pull the image, create it, and run it. It also supports resource allocation through cgroupfs and systemd.

There's a video, which I don't think we should play here, that goes through the whole process, because as we know, anything related to Docker and Kubernetes takes a long time to demo: download this, download that, resolve this conflict and that one. But I'll walk through the main steps. In the first part, you can see we create a Docker image. The image has no operating system in it; it just contains a WebAssembly file. That is the OCI-compliant, so to speak, WebAssembly application, packaged in the container format so it can be pushed to and pulled from Docker Hub. So we publish it to Docker Hub. Then we use the CRI command-line tool, crictl. Before we get to that step, we have already replaced the crun binary in our CRI-O installation, at /bin/crun, with the crunw binary we built, which makes the runtime aware of the different types of container images.
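For illustration only, here is a rough sketch, in Rust, of the kind of dispatch crunw performs. The real crun/crunw code is written in C, and the exact annotation key used to mark a Wasm image is an assumption on my part; the point is simply that the runtime inspects the OCI image metadata and routes the workload either to the normal Linux-container path or to the embedded WebAssembly runtime.

```rust
use std::collections::HashMap;

/// The two code paths a crun-style runtime can take for a container.
enum Workload {
    LinuxContainer,     // hand off to the ordinary crun logic
    WasmModule(String), // load this .wasm file into the Wasm runtime instead
}

/// Decide which path to take from the OCI metadata. Both the annotation key
/// and the ".wasm entrypoint" heuristic are illustrative assumptions, not
/// necessarily what crunw itself checks.
fn classify(annotations: &HashMap<String, String>, entrypoint: &str) -> Workload {
    let marked_as_wasm = annotations
        .get("module.wasm.image/variant")
        .map(|v| v == "compat")
        .unwrap_or(false);

    if marked_as_wasm || entrypoint.ends_with(".wasm") {
        Workload::WasmModule(entrypoint.to_string())
    } else {
        Workload::LinuxContainer
    }
}

fn main() {
    let mut annotations = HashMap::new();
    annotations.insert(
        "module.wasm.image/variant".to_string(),
        "compat".to_string(),
    );

    match classify(&annotations, "/app.wasm") {
        Workload::WasmModule(path) => println!("run {} in the WebAssembly runtime", path),
        Workload::LinuxContainer => println!("fall through to the normal crun path"),
    }
}
```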
With that binary in place, we can do things the way we normally would with Docker images: we can pull them, create a pod, and run them, and it just starts the container and runs the WebAssembly application. That particular WebAssembly application happens to be a WASI application that reads something from the command line and prints some output, and you can see all the results in the logs, so it's an easy way to test. With that modification to crun, you can try it with Docker images and with WebAssembly images, and both work.

Now we go to the next step: we want to use this modified CRI-O to build a Kubernetes cluster. To do that, we need a WebAssembly application that is long-running, meaning a WebAssembly application that acts as a microservice by itself. There is no Docker, and no other host application, wrapped around it. When that WebAssembly application starts, it should listen on the network and respond to the network, and that requires us to support a WASI socket specification. The WASI socket standardization process, as I understand it, has been fairly slow, so different teams have come up with different ways to do it, and so did we. Ours grew out of necessity: we need this to run in a Kubernetes environment, so we must have a microservice that can listen for HTTP traffic.

I'm going to demonstrate two ways to do it. The first is just to use our Rust API. Because this is not yet a standard, we can't use the standard Rust API to create network connections or sockets; as you can see, we have to import the wasmedge_wasi_socket crate for the TcpListener, TcpStream, and things of that nature. But once you have that, the main application becomes fairly simple. It looks just like how you would write a web server in Rust: you open the port, listen on it, and the function that does the work is called handle_client. The code is here; those screenshots are the entire application, it's just this much code. handle_client basically takes the incoming HTTP request, decodes it, pulls out the different elements, and then generates a response that simply echoes back the body. So if the POST request has a body, the server echoes it back to the client that sent it. It's a very simple Rust-based HTTP server that runs inside WasmEdge, and it gives us a long-running WasmEdge instance that can be managed by Kubernetes and started by our modified version of crun, crunw.
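Since the slides only show screenshots of that code, here is a minimal sketch of what such an echo server looks like. It assumes the wasmedge_wasi_socket crate mirrors std::net's TcpListener and TcpStream, with an extra non-blocking flag on bind and accept; check the crate documentation for the exact signatures. The HTTP handling is deliberately naive.

```rust
use std::io::{Read, Write};
use wasmedge_wasi_socket::{TcpListener, TcpStream};

fn handle_client(mut stream: TcpStream) -> std::io::Result<()> {
    // Read one request; a real server would loop until the headers are complete.
    let mut buf = [0u8; 4096];
    let n = stream.read(&mut buf)?;
    let request = String::from_utf8_lossy(&buf[..n]).to_string();

    // Naive parse: treat everything after the blank line as the body and echo it.
    let body = request.split("\r\n\r\n").nth(1).unwrap_or("");
    let response = format!(
        "HTTP/1.1 200 OK\r\nContent-Length: {}\r\n\r\n{}",
        body.len(),
        body
    );
    stream.write_all(response.as_bytes())?;
    Ok(()) // dropping the stream closes the connection
}

fn main() -> std::io::Result<()> {
    // The extra boolean selects blocking vs. non-blocking mode; this is an
    // assumption about the crate's API (std::net's bind/accept take no such flag).
    let listener = TcpListener::bind("0.0.0.0:1234", false)?;
    loop {
        let (stream, _addr) = listener.accept(false)?;
        handle_client(stream)?;
    }
}
```

Compiled to wasm32-wasi and packaged as described above, this single .wasm file is the entire long-running container that crunw starts and Kubernetes manages.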
The second way to do it is probably what most people would actually do. As I think we've heard again and again today, people want to use JavaScript, especially JavaScript developers and web developers. The way we did it is that in the JavaScript runtime that runs inside WasmEdge, we use Rust to implement JavaScript APIs, like I just said. We define a JavaScript API that is available to JavaScript developers, but when the JavaScript program calls it, it actually calls the underlying Rust implementation, all inside WasmEdge. Taking that Rust-based SDK, we turned it into a JavaScript API for an HTTP server, so all of that Rust code becomes a couple of lines of JavaScript that just start an HTTP server. For the sake of simplicity, I'm showing it as a blocking example: an infinite loop that keeps getting requests, and for each request it spits back the body. As I also said, other work we have done around WASI is support for asynchronous polling, so this can be written asynchronously as well, using JavaScript async or Rust futures; both of those are supported too. But here we're showing a very simple JavaScript example, less than 10 lines of code, and it starts the container. This experience strongly reminds me of the old days of Java: you have a JVM, you use the JVM to run an application server, and you do exactly this. I think this is what enterprise software demands, and WebAssembly is evolving toward that stage as well.

On Kubernetes, once we have CRI-O set up with crunw inside it, and we have the long-running WebAssembly-based microservice, setting up Kubernetes becomes really simple. In a standard Kubernetes setup, you pass an argument to set the container runtime to remote and point it at CRI-O. Or in Minikube, which is probably easier, you just specify CRI-O as the container runtime, and it finds the modified CRI-O runtime on the node, starts it, and uses it to go to Docker Hub, or whatever image registry, to pull the WebAssembly image and run it.

Okay, so there are a couple of real-world use cases. Because I spent a lot of time in Asia, most of these use cases actually come from internet providers in China. They are leveraging things like CDN networks to do distributed computing. For instance, there are distributed CDN networks where they put set-top boxes into people's homes and pre-distribute tonight's movies onto those boxes, because for each neighborhood the movies people are going to watch are probably just three or four; they predict which ones and put them on the box. Those boxes then act as a distributed CDN and serve a lot of clients, but each box essentially only has a cell-phone-class chip in it, an ARMv7 chip. Before WebAssembly, they used to run Docker on it, and the device could run four Docker containers, serving four big enterprise customers, something of that nature.
Now, with WebAssembly, they can run a lot more on the same box. So there is CDN network testing going on, and there are other cloud providers building sidecar runtimes managed by Kubernetes, doing service mesh with the work we have done. So that was the first approach, our approach. The second approach is to run Wasm apps as containerd shims, which one of the previous talks covered — that's Alibaba and Ant Financial — so I'll skip over that. And the third approach, which I also found very interesting, is the Krustlet approach. We obviously have the experts here, the people who developed it, so I won't expose my own ignorance: if you have questions about Krustlet, you can ask the experts.

So, beyond Kubernetes. We now have a way to run a Wasm workload in a Docker-like fashion in a Kubernetes-like cluster. Beyond regular Kubernetes, what are the other use cases? There are quite a few Kubernetes variations for edge computing; the top three are all in the CNCF today, the ones they call Kubernetes for the edge: KubeEdge, SuperEdge, and OpenYurt. And then there's KubeSphere, a private-cloud Kubernetes distribution; they are integrating with Microsoft's Dapr, and they also have their own functions-as-a-service. So there's a lot of innovation in this area that goes slightly beyond regular Kubernetes but still uses the same OCI infrastructure. That's one of the areas where we think our approach could be really interesting, because underneath, they all use crun or runc, and now we can change that to crunw.

And then, of course, there is service mesh. In service mesh there are two strong use cases. One is to use the WebAssembly runtime, or the WebAssembly application, as the sidecar application itself, so WebAssembly provides the microservice. We have examples we've done with Dapr — there's a case study published on InfoQ you can look at — and there are multiple ways to integrate it into the sidecar: you can have a host application that uses the sidecar SDK, or you can have the WasmEdge application running by itself. There is also active collaboration we want to pursue with sidecar runtimes that are natively managed by Kubernetes, like Linkerd; those are things we are very actively looking at integrating with. Then, on the service mesh side, there's the other half of the story, which is the traffic proxy. In a service mesh you have the sidecars, but you also have proxies directing traffic to the sidecars. And there is a growing trend, I think started by Envoy, to make WebAssembly the scripting language for the proxy, replacing Lua. Envoy has a standard for this called proxy-wasm, and it's being adopted by a bunch of other projects.
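To give a flavor of that, here is a small HTTP filter sketch using the proxy-wasm Rust SDK, which compiles to a Wasm module that a proxy such as Envoy can load. Treat it as illustrative: the trait method signatures and the entry-point macro have changed between SDK versions, and the header added here is just an example.

```rust
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::{Action, LogLevel};

struct HeaderTagger;

impl Context for HeaderTagger {}

impl HttpContext for HeaderTagger {
    // Signature as in recent SDK versions; older versions omit `end_of_stream`.
    fn on_http_request_headers(&mut self, _num_headers: usize, _end_of_stream: bool) -> Action {
        // Tag every request that flows through the proxy, then let it continue.
        self.add_http_request_header("x-wasm-filter", "wasmedge-demo");
        Action::Continue
    }
}

proxy_wasm::main! {{
    proxy_wasm::set_log_level(LogLevel::Info);
    proxy_wasm::set_http_context(|_, _| -> Box<dyn HttpContext> { Box::new(HeaderTagger) });
}}
```

Because proxy-wasm is an ABI rather than an Envoy-specific feature, the same module can in principle be loaded by any proxy that implements it, which is part of what makes it attractive as a Lua replacement.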
So this is also a very interesting area for us; it's not directly related to Kubernetes, but it's tangentially related. Then, of course, as earlier speakers have also said, there's a growing need to run WebAssembly directly on devices. That's where WebAssembly becomes really interesting, because there are different implementations of WebAssembly with different characteristics, and some of them are really small — as I said earlier, people talk about running WebAssembly on a camera. But larger systems are increasingly popular too, for instance autonomous cars or smart factories, and those run not regular Linux but real-time operating systems, and they all need containers. I think Wind River, the VxWorks people, published something called VxWorks containers a couple of months ago, and they were talking about using container technology in fighter jets. I have seen Toyota using Docker containers in their next-generation cars. Just think about it: Docker in cars, Docker controlling something as vital as autonomous driving. In my opinion, that is not a good fit. A much better fit is a much more abstract and lighter runtime based on WebAssembly. On that front, we work with another Linux Foundation project called seL4, which is a formally verified real-time microkernel. I think it's being adopted by quite a few autonomous-driving companies and electric-car companies. seL4 is based in Australia, and a lot of its users are in Asia, as well as the US military, which puts it on drones and things like that. So that is another potential use case. It's not directly related to Kubernetes, but when you run in a real-time operating system, you still need an orchestration solution, and whether Kubernetes, a modified version of Kubernetes, or something else becomes part of that solution is something we are very interested in exploring.

I think I've pretty much covered everything. There is also data-driven orchestration, meaning we stop thinking about compute instances and start thinking about a flow of data: when data comes in, we want to react to it. Like Connor just said, you create a reactor framework, and WebAssembly becomes a reactor, a reactive function, in that framework. Maybe that is the orchestration solution for a lot of these applications. So, I think I'm right on time. If there are any questions...

Hi, everybody, so we're back here. Are there any questions before we move on to the lightning talk? Yeah, go ahead.

Yes, the upstream is crun, so we want to merge back into crun. However, as you might imagine, that takes some convincing: we need to explain to the crun developers why they should support WebAssembly. We are trying to talk to them, and that's also why we come to meetings like this, to build consensus. Make sure that's on.
So we're going to go ahead and have Oscar come up, but while we do that — Oscar, go ahead — let's take one or two more questions. On the virtual stream you may get cut off, I'm not really sure. So, there was another question over here. Right, so the question is: how would a developer take advantage of the custom extensions we bake into WasmEdge? For every extension we have, we first build a Rust SDK, and then we try to provide a JavaScript SDK as well. For developers, this does in a way break compatibility across WebAssembly runtimes: once they use our Rust SDK to build an application and compile it to WebAssembly, it will not work on another runtime, because that other runtime doesn't have the host functions. But that's also why we are here; that's why we want to standardize it. Okay, well, thank you very much. Go ahead and unplug, and we'll.