Hello everyone, welcome to my talk about Envoy and WAMR. I am from Intel, where I work as a cloud orchestration software engineer, and today I would like to introduce Envoy with WAMR from three aspects. First, I will give a brief explanation of Envoy's wasm support. Then I will introduce the WebAssembly Micro Runtime (WAMR) developed by Intel. Finally, I will cover the benefits we can get from this work, that is, why we enabled WAMR in Envoy.

Envoy integrates wasm technology because wasm can bring extensibility and flexibility to Envoy without breaking up Envoy's design. We can understand Envoy's wasm support through these four modules. First, Envoy has an internal C++ wasm filter to encapsulate the wasm code and offer the Envoy API implementation. There is also a project named proxy-wasm host, which encapsulates the wasm runtime, abstracts the sandbox API, and redirects the Envoy API into the sandbox; as we all know, WebAssembly is a stack-based virtual machine. Another project is the proxy-wasm SDK. It exports the sandbox API and declares the references to the Envoy API; it also wraps the Envoy API and the sandbox API, so we can use this SDK to develop our own wasm filter and implement the interfaces defined by the Envoy wasm filter. Then we can integrate this filter into an Envoy service mesh.

Currently, Envoy supports the following five wasm runtimes in its latest release, and WAMR is here; WAMR is what we brought into Envoy. The default build of the Envoy binary uses the V8 runtime, which is based on the V8 engine used in browsers. WAMR is another runtime with a different feature set, and the "null" runtime compiles wasm modules natively and links them into Envoy.

So what is WAMR? This slide shows an overview of the WebAssembly Micro Runtime. As you can see here, WAMR itself supports many CPU architectures, listed here, and many platforms, also listed here.
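As a concrete illustration of the pieces described above, this is roughly how a wasm filter and its runtime are selected in an Envoy configuration. This is a sketch, not a slide from the talk: the WAMR runtime name and the filter file path are assumptions for illustration.

```yaml
http_filters:
- name: envoy.filters.http.wasm
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
    config:
      vm_config:
        # "envoy.wasm.runtime.v8" is the default build; the WAMR runtime
        # name below is assumed for illustration.
        runtime: "envoy.wasm.runtime.wamr"
        code:
          local:
            filename: "/etc/envoy/my_filter.wasm"  # hypothetical path
- name: envoy.filters.http.router
```

Swapping the `runtime` string is the only change needed to move a filter between the supported runtimes, which is what makes the comparison tests later in this talk possible.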
It also has a thriving community. WAMR targets being the smallest and fastest standalone wasm runtime, and its modular design covers use cases from low-end devices to the cloud. It supports interpreter, ahead-of-time (AOT), and just-in-time (JIT) execution. WAMR has a self-implemented interpreter as well as LLVM-based compilation, and one advantage it has is a small footprint and low memory consumption. Its VM core looks like this. It has a self-implemented AOT module loader, and it supports Intel SGX without needing an SGX library OS as some other runtimes do. It also eases application development with its application framework, and it is very easy to port to your platform. You can get the code details from this link.

Here we give the WAMR performance numbers, tested on the x86 architecture and platform. The slide shows the performance compared with the native GCC compiler: the AOT performance ranges from 0.4x to 1.5x of the GCC-compiled code across the workloads. This is the benchmark, and these are the performance numbers; GCC here generates generic SSE code.

Next, the SGX- and Kubernetes-related adoptions and engagements. First, Intel projects such as the Private Data Objects open-source project can get benefits from using WAMR. Ant Financial from Alibaba adopts WAMR with SGX for China's leading blockchain platform, also with Kubernetes under an open-source process. Another adopter uses WAMR with SGX in a multi-party computation product, and Baidu's MesaTEE also integrates WAMR among its features. Other engagements include Microsoft, Alibaba, and other teams.

So far I have given you a brief introduction to WAMR. So why did we bring WAMR into Envoy? Because the existing wasm runtimes may increase the binary size, and their performance numbers were not ideal in our tests.
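To make the 0.4x-to-1.5x claim concrete, here is a small sketch of how such per-workload ratios are computed against a GCC-native baseline. The workload names and timings below are made up for illustration; they are not the measurements from the slides.

```python
# Hypothetical benchmark timings in seconds (lower is better); illustrative
# numbers only, not the actual measurements presented in this talk.
native_gcc = {"workload_a": 1.00, "workload_b": 2.00, "workload_c": 0.50}
wamr_aot = {"workload_a": 1.25, "workload_b": 1.60, "workload_c": 0.90}

def relative_performance(baseline: dict, candidate: dict) -> dict:
    """Return candidate performance as a fraction of the baseline.

    Performance is the inverse of run time, so a ratio of 1.0 means parity,
    below 1.0 means slower than native, above 1.0 means faster.
    """
    return {w: baseline[w] / candidate[w] for w in baseline}

ratios = relative_performance(native_gcc, wamr_aot)
# e.g. workload_a -> 0.8 (slower than native), workload_b -> 1.25 (faster)
```

A spread like this per workload is what the "0.4 to 1.5 of GCC" summary in the talk condenses into a single range.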
So we brought WAMR into Envoy and ran some performance tests to show the advantages. The first is memory consumption: we used these Docker commands to get Envoy's memory usage. As this table shows, using WAMR in interpreter mode or in JIT mode, the memory consumption is greatly reduced. The binary size based on the V8 runtime is also larger than with WAMR: building the binary with WAMR, the Envoy binary's virtual memory size decreases by about 50% and its resident set by about 10%.

For performance, we used a load-testing tool to generate HTTP requests against an Envoy gateway and collected the numbers. In most benchmarks we get about a 5% to 10% improvement. In addition, using WAMR we may enable SGX for the Envoy HTTP filter, which will increase the safety of the HTTP filter chain.

Here are some references about our work in Envoy: pull requests to Envoy and to the proxy-wasm host have been merged in the proxy-wasm project, and this is the WAMR runtime. If you have any questions, you can contact me at this email. Thanks.
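As a sketch of the memory-comparison step: tools like docker stats report usage as strings such as "256MiB", so a small helper (hypothetical, not the exact commands used in the talk) can normalize those values and compute the reduction between two builds.

```python
def parse_mem(s: str) -> float:
    """Parse a docker-stats-style memory string such as '256MiB' into MiB."""
    units = {"KiB": 1.0 / 1024, "MiB": 1.0, "GiB": 1024.0}
    for suffix, factor in units.items():
        if s.endswith(suffix):
            return float(s[: -len(suffix)]) * factor
    raise ValueError(f"unknown unit in {s!r}")

def reduction_percent(before: str, after: str) -> float:
    """Percentage decrease from 'before' to 'after', e.g. V8 build vs WAMR build."""
    return 100.0 * (parse_mem(before) - parse_mem(after)) / parse_mem(before)

# Hypothetical figures for illustration only:
print(round(reduction_percent("200MiB", "100MiB")))  # 50
```

The same normalization applies when comparing interpreter mode against JIT mode, since both sets of measurements come out of the same Docker tooling.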