Okay, so let's get started. My name is Michael Yuan, and I'm really glad to be here to present at ContainerCon. I think for WebAssembly to be presented at ContainerCon is an indication that we finally made it. For the longest time, when people talked about WebAssembly, they thought of it as a front-end technology, or thought about its use on the blockchain; people just did not associate it with, say, a cloud-native runtime or containers. It has been a long journey that led us here.

I'll give a very brief introduction to WebAssembly, because the thing about WebAssembly is that it's not web and not assembly, at least not in our context. When it started, it was a front-end technology. The idea was that people wanted to play games in the browser, so they needed a high-performance execution environment in the browser other than JavaScript, because JavaScript is too slow. So WebAssembly was invented to run C/C++ code in the browser. There has been a lot of innovation; this has been coming along over the past ten years. In 2019, around the pandemic, WebAssembly became the fourth language of the web. Only four languages are formally recognized as standards for the web: HTML, CSS, JavaScript, and WebAssembly.

Then, how is it going? The requirements for running WebAssembly in the browser are the same requirements that make WebAssembly potentially popular on the server: lightweight isolation. It provides a way to run native code, C/C++ code in particular, in a secure environment, in a way that won't crash its host. That's the number one requirement when you start to run code in the browser. So over the years, people started using WebAssembly more and more on the server side.

That strongly reminds me of how server-side Java got popular. When Java first came along, it was to run games in the browser; there were applets and things like that. Over the years it became more and more popular on the server side, as servlets. Now, if you talk about Java, people think of it as a server-side technology; no one really associates Java with the client side anymore. The Java story happened almost 20 years ago. Then, 10 years ago, a similar trend happened with JavaScript: it moved from the client side to the server side in the form of Node.js. At the time, we all thought that was crazy, because JavaScript is a single-threaded environment; how is it possible to run a server in a single-threaded environment? But Node.js made it work. So there has been a long-running trend of technology maturing on the front end and then being adopted on the server side, and we think WebAssembly is along that line.

At the end of 2022, the CNCF ran its annual survey, and there was only one key finding from the whole survey: containers are the new normal, and WebAssembly is the future. It brings up WebAssembly and containers in the same sentence.
That is, to talk about the next generation of code runtimes in the cloud. This is the survey; if you're interested, you can have a look. It surveyed people about their WebAssembly perception and use. So to summarize the background: WebAssembly became popular on the front end, and now there's an obvious trend of it moving to the server side, just as Java did before it, and JavaScript and Node.js did before it.

When we started building a server-side WebAssembly runtime, in 2019, Solomon Hykes posted this famous tweet saying that if Wasm had existed in 2008, they would not have needed to create Docker. For those of you who don't know, he's the founding CTO of Docker, Inc. That highlights the similarity between WebAssembly and containers. When container technologies first came out, they served the same purpose: to isolate a piece of native code. The JVM also provides some isolation, but the JVM was not designed specifically for isolation; the container was, and there's a lineage from the Linux VM to the Linux container. Containers arrived when applications, written in Go in particular, had runtime safety issues because they compile to native code, so you needed something to contain them. I think that drove the whole wave of cloud native, the adoption of the Go programming language, and all that. So his point is that WebAssembly and containers, in this regard, serve similar purposes.

That tweet was very famous, but in the same thread he also answered a critical question, which I think a lot of people overlooked. As you can imagine, when Docker's founder posts a tweet like that, people are going to ask him: do you think we don't need Docker anymore? His answer was also very clear: at this stage, WebAssembly will not replace Docker; we will live in a world where WebAssembly coexists with Docker-like containers. This is what most of the rest of the talk will be about. When you have cloud-native workloads, you can run them inside regular Linux containers, and you can also run them inside WebAssembly. How do you make that choice, and how do you manage those two different technology stacks together in a single environment, say a Kubernetes cluster?

Fast forward from 2019 to KubeCon 2022, last year in Detroit. We had a Wasm Day there, and Docker made an announcement with WasmEdge: a preview in Docker Desktop. Essentially, Docker has integrated WebAssembly into every single copy of Docker Desktop. So if you use Docker, you already have WebAssembly. In fact, even if you don't use Docker, you already have WebAssembly, because it's bundled with all the browsers as well. Everybody has WebAssembly, and Docker has elevated it to the same level as the Linux containers managed by Docker tools.
And it was very nice that Solomon had a follow-up to it, although he no longer works at Docker; he's at a startup now. He said Docker plus Wasm is one of the announcements that make perfect sense: that's what it was supposed to be all those years. We're going to do some demonstrations later in this talk, but if you are interested in, say, running a microservice in WebAssembly with Docker Desktop, you can do it today. WebAssembly as a container is, in fact, bundled and shipped with Docker Desktop today.

So, a few words about the WasmEdge runtime. WebAssembly being a standard, there are multiple implementations. The original WebAssembly implementations were all in the browser, serving the web use case where, like I said, you want to run a C/C++ program in the browser. Chrome has one, Safari has one, there's one in V8. Server-side WebAssembly is a fairly recent phenomenon; I think the leading implementations are Wasmtime and WasmEdge. WasmEdge is our project, one of the CNCF sandbox projects, going into incubation.

The idea is this. In the browser, lots of services are provided by the browser itself. Say you want networking: the runtime doesn't need to have networking; it just asks the JavaScript environment to do the networking, get the data, and pass it into the WebAssembly runtime. So browser WebAssembly is heavily reliant on JavaScript to act as a host; it's JavaScript that bootstraps the application running inside WebAssembly. But to run on the server, you really need something that is more like a computer. From WebAssembly, you should be able to access the POSIX interface of the underlying computer, so that you can reach the network, the file system, and all that. That's the separation between browser-side WebAssembly and server-side WebAssembly.

Along the same line of thought, you can think about WebAssembly in other scenarios, like running smart contracts on the blockchain. It's the same type of idea: instead of the file system and networking, you expose, say, addresses and transactions and that type of construct into the WebAssembly runtime so the application can access them. The WasmEdge runtime is designed to support these on-the-server, in-the-cloud use cases, and there are a lot of those elements, depending on the use case; we can talk about that in a minute.

I would also say a lot of people use WebAssembly to run server-side workloads. What type of workload? AI inference, because it provides a much lighter stack compared with, say, a Linux container plus Linux plus Python plus PyTorch. Instead of a whole stack of close to a gigabyte of stuff, you can have the whole thing running in 20 or 30 megabytes, something of that size. So it's easily 10 or 15 times more efficient. I think I've touched upon some of those points already.
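To make that POSIX-style access concrete, here is a minimal sketch: an ordinary Rust program using plain std APIs. The file path is made up, and on a real host the runtime has to be told to pre-open the directory for the sandbox (with WasmEdge, something like `wasmedge --dir /data:/data app.wasm`):

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    // Plain std file I/O. Compiled with --target wasm32-wasi, these calls
    // go through WASI, which maps them onto the host's POSIX interface.
    let contents = fs::read_to_string("/data/config.txt")?;
    println!("read {} bytes from inside the sandbox", contents.len());
    Ok(())
}
```

The same module cannot touch anything the host did not explicitly grant, which is the lightweight-isolation story in practice.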
So, why? Why do we need a WebAssembly runtime that's optimized for cloud services? First, compared with the Linux containers it's supposed to compete against (though it doesn't really compete; it coexists with Linux containers), a WebAssembly container, an application contained in WebAssembly, is about 1% of the size. Instead of talking about 100 megabytes or 1 gigabyte, you're talking about less than 1 megabyte. You're talking about a web server and a database client where the entire isolated application is measured in kilobytes.

And the startup time is dramatically improved. Take public cloud serverless functions like AWS Lambda. Lambda is not implemented with containers, but it's close enough: it's Firecracker, a micro-VM. If you look at Lambda startup times, in a lot of cases they're measured in seconds, like six seconds for a cold start. A similar application running in WebAssembly would be around one millisecond, so it's 6,000 times faster, something like that.

It also has near-native runtime performance, because an application written in C/C++ or Rust compiles to WebAssembly, and there are compiler techniques that further compile that sandboxed code down into true native code and run it.

Another interesting thing: it's secure by default, because it's not Linux. There's no way you can accidentally ship a web server with it and forget to turn it off. It has a very small attack surface; it just does what you tell it to do. It doesn't carry the operating-system baggage that comes with a regular container.

It's completely portable across platforms, which I think is also a very important point. For a long time, cross-platform compatibility wasn't a huge issue, because CPUs kept getting faster and there were only x86 CPUs, maybe AMD CPUs, maybe ARM CPUs; that was it. But in the past ten years or so, CPUs have stopped improving; Moore's law isn't there anymore. So you see an explosion of specialized hardware: GPUs, TPUs, FPGA devices, RISC-V, all those things. Providing cross-platform compatibility at the software or application level, abstracting away the underlying operating system and hardware, becomes important again. The Java story becomes important again. In a way, the emergence of WebAssembly is very much like Java: instead of the Java language and the JVM, we now have the Rust language, or Kotlin, a new generation of what we call lightweight languages, with a lightweight runtime in WebAssembly. It's a very similar path.

And the last point is important: it plays well with Kubernetes, service meshes, distributed runtimes, et cetera. This is something we're going to see. In the past couple of years, at every KubeCon we've had a Wasm Day, and a couple of hundred people show up.
So over the years, we have built a lot of tooling to support WebAssembly as a container format in popular operating systems and runtimes, and a lot of them already have built-in WebAssembly support. I specifically mention Docker because it is perhaps the most iconic, most representative container management tool. Like I said, last year Docker added support for managing WebAssembly alongside Linux containers. But it wasn't the first: containerd and Podman and those other frameworks have supported WebAssembly for longer; some of them have supported it for 18 months now, providing the capability to manage WebAssembly workloads and Linux container workloads in the same environment.

There are actually three graphs here, and this is a little bit technical, but I think it's fitting for this audience. If you look at the container management landscape, or the Kubernetes landscape, it's abstracted into tiers. At the top you have the orchestration and management solutions. In the middle you have CRI runtimes, and at the bottom you have OCI runtimes. Each provides a higher level of abstraction, and WebAssembly can be managed in each of those tiers.

It can be managed at the OCI runtime level. The representative OCI runtimes are runc, crun, and youki; those are the low-level runtimes that actually run the container. How do they support WebAssembly? When they pull an artifact from, say, Docker Hub or another registry, they see the operating-system metadata associated with that image. If the OS says wasi/wasm instead of, say, linux/arm or whatever, the runtime knows to use a Wasm runtime instead of starting a Linux container. So at the OCI runtime level, when it sees the image, it knows which runtime to start for that particular image. That's starting from the bottom of the stack.

The graph in the middle starts from containerd, what they call the CRI runtime, a higher abstraction layer than the OCI runtime. In containerd, you can build shims. When containerd sees a container image, it identifies which operating-system tag is associated with it. If it's, say, linux/arm, it knows to start a Linux container. But if the image is tagged wasm/wasi, it knows to take another path, through the shim, and use Wasmtime or WasmEdge to run it. So those are two approaches.

Then there's a more integrated approach, a new project that was just announced at KubeCon last month in Amsterdam: the Kuasar project. It has a deeper integration into containerd, so it can identify different types of image formats, not just containers and WebAssembly but also VMs, micro-VMs, and Kata Containers, things like that. So those are the basic approaches.
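To illustrate that operating-system metadata, here is a hedged sketch of how such an image can be built. The image name and file paths are made up; the `FROM scratch` pattern and the `wasi/wasm` platform tag follow Docker's published Wasm examples:

```dockerfile
# Sketch: packaging a compiled Wasm module as an OCI image.
# There is no Linux userland at all; the image is just the .wasm file.
FROM scratch
COPY ./target/wasm32-wasi/release/server.wasm /server.wasm
ENTRYPOINT [ "/server.wasm" ]
```

Built with something like `docker buildx build --platform wasi/wasm -t myorg/server .`, the registry records `wasi/wasm` as the image's OS/architecture pair, and that is exactly the field the OCI or CRI layer inspects to decide whether to hand the workload to a Wasm runtime.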
Under the hood, there are several active open-source projects that people are contributing to. The first is called runwasi. That's the second approach we talked about, the containerd shim approach. This is also the project used in Docker plus Wasm. It's a sub-project of containerd, one of the official projects under containerd; we contribute to it, but it's led by the Microsoft Azure team. What it does is detect Wasm at the CRI level: when a Wasm image is pulled into a Kubernetes cluster, it detects that it is a Wasm image, and at that level it branches off to run it in a Wasm container.

Kuasar is a similar idea, but with a deeper integration into containerd. And crun is the Red Hat stack: if you use OpenShift and Podman, the Red Hat stack of Kubernetes, you would use crun. It detects Wasm at the OCI level, so containerd does not know what type of image it is and does not make the decision; it just takes the image and passes it to crun. crun then checks whether the image is a regular Linux container or a Wasm container, and if it's Wasm, it runs it as Wasm. That lets this run fairly seamlessly in things like OpenShift, Podman, and the CRI-O stack.

So I have a demo; let me introduce it first. People ask: you've talked for 20 minutes about Wasm as an alternative to Linux containers. What exactly does that mean? Does it mean it can run the applications most people run in Linux containers? The answer is more or less yes. What I'm going to show you is a database-backed microservice written in and running in WebAssembly.

The old cloud-native way is like this: you have three Linux containers in a three-tiered application, probably running in the same Kubernetes pod. You have a web server and proxy on the front end to take the traffic. Then you have your business logic and an HTTP server that interfaces with the proxy, written in the language you're familiar with, like Java or Python. And then you have a database driver to connect to the database. Three Linux containers running in the same pod: a complete microservice, from the API all the way to the database backend.

How would WebAssembly do it differently? The web proxy can stay the same, and the load balancer can stay the same; you could replace them with WebAssembly, but for illustrative purposes we'll keep them as Linux containers. The database, again: there are databases that can run as WebAssembly artifacts, but for this purpose let's say it's a heavyweight database, MySQL, running in its own Linux container. In the middle, all the business logic, you can compile into WebAssembly, so it gets the benefits we just talked about: it's going to be 100 times smaller and close to a thousand times faster to start.
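As a hedged sketch, the Compose file for that three-container setup looks roughly like this. The image names, credentials, and version tags are assumptions; the `platform` and `runtime` values follow the Docker plus Wasm preview:

```yaml
services:
  client:                                # Linux container: web server / proxy
    image: nginx:alpine
    ports:
      - "8090:80"

  server:                                # Wasm container: the business logic
    image: demo/microservice-rust-mysql
    platform: wasi/wasm                  # OS metadata marking this as Wasm
    runtime: io.containerd.wasmedge.v1   # containerd (runwasi) shim to use
    environment:
      DATABASE_URL: mysql://root:example@db:3306/mysql

  db:                                    # Linux container: heavyweight database
    image: mariadb:10.9
    environment:
      MARIADB_ROOT_PASSWORD: example
```

A single `docker compose up` starts all three, with containerd dispatching the `server` service to WasmEdge and the other two to regular Linux containers.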
And because it starts up so fast, you can scale to zero, meaning that if there are no incoming requests, there is no instance running in the middle. When there are incoming requests, like 125 concurrent incoming requests, I start one WebAssembly runtime in the middle. If we keep getting more requests, I start more. So you can scale from zero to many in a relatively elegant way.

So here's a two-minute demo. Let's start the entire demo application. This uses Docker plus Wasm, and you have to tell Docker Desktop to use containerd, because our integration is with containerd. Then you really only need one command: docker compose up. In the Docker Compose configuration file, we define those three containers, and docker compose up builds and starts the whole thing. There's no compiler you need to install and no WebAssembly runtime you need to install, because it's all bundled in Docker Desktop.

Now it's up and running; three containers are running. You can go to localhost, open the web page, and try it out. What this application does is serve a web page; you fill in some data, hit save, and it's saved to the database. That's the graph I just showed.

In the Docker Compose file, there are those... let me pause it. You can see several containers managed by Docker. This group, microservice-rust-mysql, is a whole group of containers; in Kubernetes terms, they'd be in a pod. Three containers: the db one is a MySQL Linux container, the client one is the HTTP proxy, and the server one is all the business logic. It processes data as JSON, connects to the database, starts the database connection pool, and does all that. As you can see, it's clearly labeled. The db and client ones are not labeled, because they are just regular Linux containers, so they appear as normal containers in Docker Desktop. But for the server one, Docker knows through containerd that it's a Wasm container, so it labels it that way.

Now you can see their sizes. The whole application server is three megabytes, and I left all the debug symbols in. Look at the other two: MariaDB is 100 megabytes, and the nginx server... those two are just services that support the microservice, and each of them is an order of magnitude bigger than the actual application itself. Just imagine if the application itself were in Java, how big that would be: you'd have the application server, you'd have Linux, you'd have to have all of that.

And people resonate with that. Those are two fairly popular tweets we posted early this year, and each got over 10,000 views. This is not a very big Twitter account; it has around 1,000 followers or fewer, so the view count is 10 times the follower count. What we did was compile a complete Redis client application and run it inside a WasmEdge container, using Docker plus Wasm. And the total application size, as shown by Docker, is 700 kilobytes.
When was the last time you heard about a complete application, isolated with its own container and everything, measured in kilobytes? And it starts in milliseconds, whereas a Linux container running a Redis client could easily be tens of megabytes. Here's another one: a PostgreSQL client app running inside a Wasm container at only 800 kilobytes. Again, this is why we think people will want WebAssembly to contain at least some of their workloads: it accomplishes similar tasks to a Linux container, but the efficiency savings that come out of it provide tremendous operational savings.

In the next two slides: there are lots of tutorials and lots of content out there on how to deploy different Wasm containers in a Kubernetes cluster. The Kubernetes ecosystem is so complex; like I just mentioned, there are different layers you can deploy into, and on top of that there are different versions of Kubernetes. So we built a library of those tutorials, with CI/CD running every day to make sure they keep working. And one of the partners in our community, a company called Liquid Reply in Germany, built a Kubernetes operator for Wasm that further automates Wasm deployment in Kubernetes, making those configuration files easy to write.

However, those are nice, but they're not really the focus. The focus is still that Wasm has its place. Nobody really thinks Wasm can replace Linux containers; at least I don't think it was ever pitched that way. The idea is that Wasm coexists with Linux containers. And that means there are trade-offs. When I go to companies to evangelize Wasm, so to speak, to say this service, that service: some of these companies have a tremendous number of microservices. I know a company we work with that has a really popular application with 50,000 microservices on the back end, 50,000 for just one mobile app, and each of those microservices needs multiple machines. So there's potentially a lot of cost savings to generate by switching to Wasm.

However, there's no free lunch. It's not a zero-cost switch. To get all those benefits of Wasm, you have to make sacrifices: you need to at least recompile your application, and sometimes rewrite it, because some of the Linux APIs are not available in Wasm, for security reasons and the like. It's not a general operating-system environment. The old story of Docker was to ship your computer: whatever works on your computer, you dockerize it, and it works on everybody's computer. Wasm is not really that; making something work in the Wasm environment takes more effort than making it work in a Linux container.
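For compiled languages, the "recompile" step is often just a target switch. A minimal sketch, assuming a Rust service and the standard wasm32-wasi toolchain (the binary name `app` is made up):

```sh
# One-time: add the WASI compile target to the Rust toolchain.
rustup target add wasm32-wasi

# Recompile the existing service to a Wasm module.
cargo build --release --target wasm32-wasi

# Run it directly under WasmEdge, no Linux container involved.
wasmedge target/wasm32-wasi/release/app.wasm
```

Code that leans on Linux APIs outside WASI (raw sockets, threads, certain filesystem tricks) is where the "sometimes rewrite" cost shows up.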
However, I think that constraint is also a big plus, because, especially in the use cases we talked about like AI inference, it removes a lot of bad habits from developers. If you look at how AI inference is set up today, I've seen people run Python in cars. They need to do inference in a really resource-constrained environment, but they don't know how; they trained their model with Python, so they use Python for inference too. And the Python image is a tremendous size, one or two gigabytes. By allowing anything that runs on your machine to run in the production environment, you actually limit what you can do in production, because a lot of things that run on your machine are too heavy. WebAssembly forces people to rethink how to really trim down the weight of those solutions and make them run faster and better in more places.

So there are many use cases for WebAssembly; here are some. Microservices, and even within microservices there are a lot of different types. What we see a lot are data-related applications, like AI and machine learning: using WebAssembly to run AI models in a wide variety of settings, on the edge, in the cloud, in different places.

Streaming data functions are another one. There's a common pattern, Kafka plus Flink: in the Java world, there's a messaging queue and a processor for the messaging queue. Now there's a big trend of combining those two into one lightweight solution, and that requires UDF support, user-defined functions, essentially. You are building a messaging pipeline, but you want to run your customer's code inside your pipeline. That is the exact same problem every multi-tenancy cloud provider, browser vendor, or blockchain provider has to face: running untrusted code in an integrated environment. WebAssembly really shines here.

To take that idea to the extreme, there are even UDFs in the database. This is one of the projects we collaborated on with libSQL; libSQL is the open-source SQLite-on-the-server project. They want to take SQLite to the server. We provide a UDF runtime in libSQL. The whole package is only two megabytes, but it allows you to store, say, blob data, like this picture, in the database as a blob field, and then use SQL to ask the database to tell you what's in that image. Under the hood, it uses the WebAssembly runtime to run an AI model. This is Dr. Grace Hopper, and the model recognizes that the predominant feature in this picture is the military uniform. You can think of lots of AI workloads you could run this way, so people don't have to learn the AI stuff, the PyTorch and so on; they can just use SQL. That's taking the message to the extreme.
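As a rough sketch of what such a Wasm UDF can look like from the module's side (the export name, signature, and ABI here are hypothetical; every host, whether a streaming pipeline or libSQL, defines its own interface):

```rust
// Hypothetical Wasm UDF: the host writes a blob (say, an image) into the
// module's linear memory and calls this export, expecting a label id back.
// Compiled as a library with --target wasm32-wasi.

#[no_mangle]
pub extern "C" fn label_image(ptr: *const u8, len: usize) -> i32 {
    // Reconstruct the byte slice the host placed in linear memory.
    let blob = unsafe { std::slice::from_raw_parts(ptr, len) };

    // A real UDF would run an inference model over `blob` here, for
    // example through a wasi-nn style host API, and return the top label.
    if blob.is_empty() {
        return -1; // error: empty input
    }
    0 // placeholder label id
}
```

Because the function runs inside the sandbox, the database can execute customer-supplied versions of it without trusting the code.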
And then there are serverless functions for SaaS, which is what I'm going to spend the remaining five minutes on. Today, especially with things like ChatGPT, I think it has become more and more obvious that an API alone is no longer sufficient for SaaS. For the longest time, every SaaS had an API: if you wanted to interact with it or customize the experience, you did the OAuth dance and used the API. But more and more SaaS products want users to upload their code into the SaaS, because that provides deeper integration and fewer of the round-trip, authentication, and security problems.

One of the earliest to do that was GitHub Actions. If you want to build a bot on GitHub, what do you do? You typically use GitHub Actions, not the API, because Actions lets you write something, upload it, and it works; you don't have to figure out OAuth and all that. It's the same idea with AWS Lambda: it's so popular because it lets you run a piece of code inside the AWS infrastructure. It's not heavy business logic; the vast majority of what AWS Lambda does is take a message from a messaging queue and save it to S3 or push it somewhere else. Because the code runs inside the system, you get around all the authentication and permission issues.

WebAssembly really excels at this type of workload, because these workloads are typically very simple, yet they run frequently. Spinning up a Linux container or a virtual machine for each of them is very wasteful.

So why do I bring up ChatGPT? Because people keep asking: if WebAssembly running a function is so great, can you give us an example? We've actually seen a lot of use cases in the past couple of weeks for Wasm runtimes interfacing with LLMs. There are several benefits, but I'll skip those and give you an example. In our own open-source project, we use WasmEdge to build bots that do PR reviews. As all of us in open source know, PR review from the community is probably the most time-consuming thing, because there's no predictability in the quality of the code, since you don't know the contributors, and you need to spend a lot of senior developers' time evaluating and giving feedback on those PRs. And yet I've seen studies saying the average time to merge a PR is four or five days, and community contributors sometimes get really upset if you don't respond in a timely manner. In our community, we manage it with a ChatGPT bot that takes the PR, breaks it into pieces, into commits and files, sends them to ChatGPT, and gets comments back.

People say, okay, this is kind of hard to read, so I have an example I show people. Here is a code contribution someone made. It's actually a very simple function, called check_prime: you check whether an input number is a prime number or not. You can see what he did.
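The submission looked roughly like this; the talk shows it on a slide, so this Rust version is a reconstruction of the textbook algorithm being described:

```rust
// Trial division from 2 up to sqrt(n): the classic primality check.
fn check_prime(n: u64) -> bool {
    if n < 2 {
        return false;
    }
    let limit = (n as f64).sqrt() as u64;
    for i in 2..=limit {
        if n % i == 0 {
            return false; // found a divisor: not prime
        }
    }
    true // no divisor up to sqrt(n): prime
}

fn main() {
    assert!(check_prime(97));
    assert!(!check_prime(91)); // 91 = 7 * 13
    println!("check_prime behaves as described");
}
```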
He said: I will create a loop starting from two and going all the way to the square root of n, where n is the number I want to check for primality. Then I check whether n is divisible. If it is, I return that it's not a prime. If I run the entire loop and nothing divides it, I say it is a prime number. Is that correct? Yes, it is correct. That's how we were taught to do prime-number checking; it's the textbook definition.

What did the bot say? When I saw this, it blew me away. The bot said: you don't have to check the even numbers again. The loop doesn't have to go one by one; you can increase by two. But then you can ask the bot: by this logic, I don't have to check the multiples of three either. And the bot can actually write you the code: you keep two arrays, one with all the primes discovered so far in the loop, and the main loop is the other. So the thing I want to say is that it provided tremendous value for us.

And the way we did it is with the Wasm runtime, because this is a very simple piece of code. All it does is take a GitHub pull request, break it up, apply some prompting logic to shape how you want the language model to respond, and then wait for the language model to respond. The vast majority of the time is spent waiting; if you use GPT-4, you know how long that takes. If I have a large PR with maybe 20 different files, the whole process can take 20 minutes, so the function has to run that long. If this were a VM or a Linux container, you'd be taking up hundreds of megabytes and holding an entire thread while you wait for the other end to finish. With the Wasm container, you can use coroutines and get much higher compute density, because most of these tasks are very light on CPU; they're just waiting for the other end to respond. So you can do a lot more with a small machine using Wasm than with a Linux container.

This is something we use ourselves; we eat our own dog food, so to speak. At least in my opinion, it's a really good use case for getting started: if you want to get started with WebAssembly, what's the use case? I think it's this one, rather than, say, go create a serverless function and deploy it in a Kubernetes cluster and all that. Obviously we want you to get to that point as well, but to get started, a WebAssembly function that works on your own GitHub repository and provides value to your community is perhaps the most interesting one.

So that's it; I'm one minute over. Any questions? I think we have a couple of minutes for questions. Yes, please.

[Audience question about performance]

So on performance, there are two layers. One is the startup time.
Startup time: obviously WebAssembly is much, much faster, because it doesn't need to start a container. That's part of why we need a project like Kuasar: when WebAssembly is added to the existing container infrastructure, a lot of times it still does the same checks, even though you don't have to do them for WebAssembly. It could see that it's WebAssembly and skip all the networking setup and isolation and all that, which essentially requires deeper integration with containerd. That's part of the rationale for a new runtime like Kuasar. So startup time, in theory, could be a lot faster even in a Kubernetes cluster, and from the command line it is a lot faster.

Then there's runtime performance. I think a container gives you a penalty of, say, 15 to 20 percent compared with running without anything, just as native code. WebAssembly, a lot of the time... we published a paper with a benchmark showing WebAssembly running faster than native. Think about that: if you were the reviewer of that paper, what would you do? You'd reject it, because it's clearly wrong. A Rust program compiled to native, the same Rust program compiled to WebAssembly and run inside a Wasm runtime, and the benchmark says WebAssembly runs faster; how is that possible? So the reviewer rejected our paper outright, and the paper was eventually published at IEEE. We went back and checked, and it was the case.

The reason has a lot to do with developer habits. WebAssembly runtime optimization happens at runtime: when the runtime transforms the WebAssembly bytecode into native code, it knows exactly what hardware it's running on. It knows it's, say, the fifth generation of AMD, things like that, and it can turn on all the necessary optimization flags. Regular developers typically don't do that when they build for a container, because the container itself is not portable to begin with; you have an AMD container, an ARM container, an x86 container to start with. If you're building an x86 container, you typically leave the lowest level of optimization turned on, because you don't really know which CPU is underneath, and you don't want the CPU to crash because you invoked an instruction that isn't available on that particular CPU, fourth-generation versus fifth-generation AMD, say. So in that regard, it can be faster; empirically, that's what we see. But in general, at the same optimization level, we're still looking at a 5 to 10 percent performance loss for WebAssembly compared with native.

[Audience: compared with what?] With native, yes. With native.
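A hedged sketch of the contrast just described; the cargo and RUSTFLAGS flags are standard Rust/LLVM ones, and the AOT step uses the WasmEdge CLI (`wasmedgec` in older releases):

```sh
# Native container route: built once, shipped everywhere, so developers
# usually keep the conservative default CPU baseline.
cargo build --release

# Host-specific native build (rarely done for shipped container images,
# because the resulting binary is no longer portable):
RUSTFLAGS="-C target-cpu=native" cargo build --release

# Wasm route: ship portable bytecode, then AOT-compile on the deployment
# machine, where the runtime knows the exact CPU and its feature flags.
cargo build --release --target wasm32-wasi
wasmedge compile target/wasm32-wasi/release/app.wasm app_aot.wasm
wasmedge app_aot.wasm
```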
[Audience question about multi-threaded workloads]

It should be fast; I think it's in general faster than Linux containers, because it has a more direct execution route. But in fact, WebAssembly is a lot like JavaScript: it's a single-threaded environment, so it depends on coroutines for multitasking. If you have something that is truly multi-threaded, that needs threads, it becomes difficult to run. So when people say WebAssembly is great at computing tensors, take that with a grain of salt. We do AI inference and things like that: with SIMD, there's underlying hardware support and support inside WebAssembly, and you can run it very well. But for the very general case, where you just have a multi-threaded application you want to run in WebAssembly, it's hard. Also, a lot of libraries haven't been ported yet. If you want to run Python, you can, but you can only run pure Python, which means a large part of the Python library ecosystem is not available, and people will complain.

[Audience question about Wasmtime]

I would say we are sister projects. For something to be a standard, you need at least two implementations; you can't be the only one to implement it. So for both of us, it's important that the other exists. WasmEdge is more commercial, that's how I would put it; Wasmtime is more focused on the standards. They are also more focused on the Rust developer experience, because their entire stack is written in Rust, while our stack is written in C++ with a Rust SDK. There's a very subtle but important difference there that most Rust developers can feel, so I think a lot of them like Wasmtime more. However, being a C++ runtime, we run in more places; there are, say, RISC-V CPUs, a bunch of those things.

So I would say the biggest difference is that they take a standards-first approach: everything they do, they want to make into a standard. We take more of an application-first approach: we build features and see if people use them, or if a big user in our community wants something, we build that feature, and if people later say, okay, let's standardize it, we do that. It's more ad hoc, or more casual, in terms of adding features and instructions. And the community does need several WebAssembly runtimes for WebAssembly to be called a standard, so we work closely with them. For instance, on the component model, they are leading the effort, and we aim to be the second runtime to implement the component model, because they are going to be the first.

So, yeah. Thank you.