Hello, everyone. Welcome to CNCF Wasm Day. My topic today is how WebAssembly plus application runtime brings a new era of FaaS. My name is Jason Song, and I currently work for Ant Group, mainly focusing on the cloud native and microservice areas. We have deployed a large service mesh cluster in our production environment, up and running for two years. The sidecar is named MOSN; it is written in Golang and provides features similar to Envoy. There are now more than 200K pods in the service mesh cluster, and the service mesh has solved our challenges in the service connectivity area well. However, as a company with many business scenarios, we still face challenges in other areas such as cache, message queue, etc. As the diagram shows, they are tightly coupled with the application code. For example, if we want to migrate the cache from Redis to Memcached, we need to upgrade the SDK on the application side and require the application to change its code to use the new API. Another problem is the cost of multi-language support: since we need to implement the logic for protocols, codecs, load balancing, and disaster recovery in each language's SDK, the cost is very high.

In early 2020, Bilgin Ibryam published an article called "Multi-Runtime Microservices Architecture". The main idea is to abstract the various distributed capabilities into a multi-runtime, so that the application no longer needs to rely on specific SDKs but instead interacts with the runtime over de facto standards such as HTTP and gRPC, which solves the challenges just mentioned. And that brings us to Layotto. As shown in the diagram, Layotto is built on top of MOSN. It provides a unified, standard runtime API with various distributed capabilities. Developers no longer need to care about differences in the implementation of the underlying infrastructure; they only need to focus on what capabilities they need and call the standard APIs.
Speaking of the runtime API, we know that it is not easy to define a set of APIs with clear semantics and wide application scenarios. So we are now working with the Dapr community and Alibaba to build the API, and we hope it can become a standard in the future. Here are some excerpts from the API definition. We can see there are APIs to invoke services, get and set state, publish messages, et cetera.

In addition to normal applications, we are also exploring the FaaS area. WebAssembly, aka Wasm, was introduced to web browsers to address performance issues with JavaScript. However, its near-native performance, sandboxing, and portability also make it attractive for FaaS scenarios. With WASI, Wasm provides a high-level model for accessing system resources. However, in real cases, many other resources are required to make a normal application work: invoking services, getting and setting state, producing or consuming messages, et cetera. The lack of these features makes it hard to host service applications in Wasm. Since Layotto abstracts a standard API, we thought: why not deploy Wasm modules with Layotto, so that Wasm modules can consume external services via calls to the Layotto sidecar? And since MOSN also has the capability to host Wasm modules, we then thought: why not go further and combine them together? Then we have this diagram: the Wasm host, MOSN, and Layotto are all logical components in one process. A Wasm host such as WasmEdge is embedded in MOSN to host Wasm modules, and Layotto provides distributed capabilities such as invoking services and reading cache. These capabilities are exported to Wasm as APIs, similar to WASI. We wanted to develop a demo to prove this concept, so we did some research and found there has already been some exploration of how to integrate Kubernetes with Wasm.
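To make the API excerpts above concrete, here is a minimal Go sketch of the runtime API surface. The interface and type names are illustrative assumptions for this talk, not Layotto's actual generated gRPC stubs; the in-memory mock only shows how application code would talk to such a standard API.

```go
// A hand-written sketch of the runtime API surface: invoke services,
// get/set state, publish messages. Names are assumptions, not the
// real Layotto proto definitions.
package main

import "fmt"

// InvokeRequest models service invocation: the sidecar uses the
// target ID and method to locate the callee.
type InvokeRequest struct {
	ID     string // which service/function to call
	Method string // which method on the target
	Data   []byte // request payload
}

type InvokeResponse struct {
	Data []byte
}

// Runtime mirrors the shape of the API excerpts mentioned in the talk.
type Runtime interface {
	InvokeService(req InvokeRequest) (InvokeResponse, error)
	GetState(storeName, key string) ([]byte, error)
	SaveState(storeName, key string, value []byte) error
	PublishEvent(topic string, data []byte) error
}

// mockRuntime is an in-memory stand-in for the sidecar, so the sketch
// runs on its own without any infrastructure.
type mockRuntime struct {
	stores map[string]map[string][]byte
}

func (m *mockRuntime) InvokeService(req InvokeRequest) (InvokeResponse, error) {
	// A real sidecar would route to the target service; here we echo.
	return InvokeResponse{Data: append([]byte("echo:"), req.Data...)}, nil
}

func (m *mockRuntime) GetState(storeName, key string) ([]byte, error) {
	return m.stores[storeName][key], nil
}

func (m *mockRuntime) SaveState(storeName, key string, value []byte) error {
	if m.stores[storeName] == nil {
		m.stores[storeName] = map[string][]byte{}
	}
	m.stores[storeName][key] = value
	return nil
}

func (m *mockRuntime) PublishEvent(topic string, data []byte) error { return nil }

func main() {
	var rt Runtime = &mockRuntime{stores: map[string]map[string][]byte{}}
	rt.SaveState("redis", "book1", []byte("100"))
	v, _ := rt.GetState("redis", "book1")
	fmt.Println(string(v)) // prints "100"
}
```

The point of the design is that application code depends only on this narrow interface, so the backend behind a store name can change (say, Redis to Memcached) without touching the caller.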
The basic idea is to develop a containerd shim plugin, so that when it receives a request to create a container, it hands over the actual creation logic to start a Wasm runtime that hosts the Wasm module. However, since we wanted to verify the nanoprocess architecture, we did some customization. As the diagram shows, we first start a Layotto runtime on the node to serve as the Wasm host and provide distributed capabilities. Then we customized the containerd shim plugin so that when it receives a request to create a container, it lets Layotto run the Wasm module instead of creating a container. With this solution, we can reuse most of the Kubernetes capabilities and still achieve the effect of running multiple Wasm modules as nanoprocesses in Layotto, while also taking advantage of Layotto's distributed capabilities. We also reuse Docker to package Wasm modules and publish them to Docker Hub for distribution.

Here's a demo showing what we've accomplished so far. There are two Wasm functions. The first function receives an HTTP request and extracts the book name from the request. It then calls the second function to query the inventory of the book, and finally returns the inventory information to us. We defined the invoke service API with a function ID, method, and params, so that Layotto can use this information to locate the function and do the invocation. And here is the second function. When it receives the request, it uses the book name to query Redis for the actual inventory and then returns the number to the caller. We defined the get state API with a store name and key, so that Layotto can find the correct backend store and query the value. Now let's build the programs into Wasm modules. Then we can use docker build to build them into Docker images. We also need to register a runtime class to enable our custom container runtime. After that, we can deploy the Wasm functions with normal pod definitions.
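The registration and deployment steps described above might look like the following Kubernetes manifests; the handler name and image reference are hypothetical placeholders, not our actual demo artifacts.

```yaml
# Hypothetical RuntimeClass pointing at the customized containerd shim.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: layotto-wasm
handler: layotto-wasm        # maps to the shim configured in containerd
---
# A Wasm function deployed with a normal pod definition.
apiVersion: v1
kind: Pod
metadata:
  name: function1
spec:
  runtimeClassName: layotto-wasm   # routes creation to the custom shim
  containers:
  - name: function1
    image: example/function1:v1    # Docker image wrapping the .wasm module
```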
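The logic of the two demo functions can be sketched in plain Go as follows. Here `invokeService` and `getState` stand in for the host functions the Wasm runtime exports to the module; their names and signatures are assumptions for illustration, not the actual host ABI, and they are backed by an in-memory map so the sketch runs on its own.

```go
// A plain-Go sketch of the two demo Wasm functions. The host-call
// stubs below are hypothetical; in the real demo they cross the Wasm
// boundary into the Layotto sidecar.
package main

import (
	"fmt"
	"strings"
)

// In-memory stand-in for the Redis store used in the demo.
var store = map[string]string{"book1": "100"}

// getState mimics the get state API: store name plus key, so the
// runtime can find the correct backend store and query the value.
func getState(storeName, key string) string {
	return store[key]
}

// invokeService mimics the invoke service API: function ID, method,
// and param, so the runtime can locate the function and invoke it.
// Here we just dispatch to function2 directly.
func invokeService(id, method, param string) string {
	if id == "function2" && method == "query" {
		return function2(param)
	}
	return ""
}

// function1: extract the book name from the request, then call
// function2 to query the inventory of that book.
func function1(request string) string {
	// e.g. request "name=book1" -> book name "book1"
	bookName := strings.TrimPrefix(request, "name=")
	inventory := invokeService("function2", "query", bookName)
	return fmt.Sprintf("There are %s books named %s", inventory, bookName)
}

// function2: use the book name as the key to query the "redis" store.
func function2(bookName string) string {
	return getState("redis", bookName)
}

func main() {
	fmt.Println(function1("name=book1"))
}
```

The split mirrors the demo's flow: function1 only knows function2's ID and method, and function2 only knows a store name and key; neither function embeds any Redis or networking code.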
To make the query work, let's pre-configure the inventory to 100. Let's check the function status. Now let's query the inventory for book1. We can see it works. Let's change the inventory to 99. And yeah, it returns the latest value, cool. Finally, let's clean up the resources. I hope this demo explains our idea clearly. With this demo we proved that the idea can work, and we believe that with the fast development of WebAssembly and application runtimes, the possibility of bringing a new era of FaaS will eventually come true. Okay, that's all for my talk. Thanks for watching, and you may check out our GitHub repository for more information. Have a great day, bye-bye.