It's close to the end of the day, and I hope you had a coffee. My talk should be interesting, because the title is "JavaScript in WebAssembly," and the obvious question is: why would you want JavaScript in WebAssembly? So this is really about why and how. Let's talk about why first.

When WebAssembly first came out, it was designed to run side by side with JavaScript. That's how it got started. In a browser, you have a JavaScript engine, a JIT engine like V8, and then you have a WebAssembly runtime, and they communicate through a bridge: wasm-bindgen for Rust, and there are C-based bridges as well. The idea, when WebAssembly was invented years ago (it's hard to believe it has been that long), was to run native applications in the browser. We know we can run JavaScript in the browser, but there is a whole class of applications that, for various reasons, are written in C or C++, and we want to compile them and run them inside the browser too. Obviously, we can't just run compiled native code in the browser, so WebAssembly was invented as a security sandbox, a sandboxed format, for that purpose. The crowning achievement actually happened early this year: Adobe announced that Photoshop, a native application, has been compiled into WebAssembly, so you can run Photoshop in the browser.

So that's how it started: WebAssembly was originally designed as a supplement to JavaScript, so you could run native applications in the browser. But how is it going? I think the evolution of WebAssembly has strong parallels with technology evolutions of the past. Something starts in the browser and then moves to the server side, because there are maybe ten times more front-end developers than back-end developers. Front-end developers learn it, it becomes standardized and supported by the toolchains, and then it becomes a back-end technology. If you're old enough, like me, you'll remember that's exactly how Java got started: first the applet, then the servlet. It's also how Node.js started. JavaScript used to be exclusively a browser language, and then it became a server-side technology. In fact, when people started using Node.js to build server applications, we were all like: this is impossible, this is the wrong way to do it, because JavaScript is a single-threaded execution environment. How can you possibly build a server-side application on a single-threaded execution environment?

WebAssembly is going through the same evolution. The direction we see it going is that you have a VM, a runtime, and it can run JavaScript applications and other applications inside that VM. So this is not WebAssembly as a supplement to JavaScript, but JavaScript running inside WebAssembly. Of course, there is a lot of engineering work to be done and a lot of developer APIs to be developed, which will be the focus of my talk: all the plugins that plug into the JavaScript runtime running inside WebAssembly. We wrote an article about how to run JavaScript inside WebAssembly, but with all this, we still haven't answered the why. People look at it and say: why would you want to do that? I get this question all the time.
It's because when we use WebAssembly on the server side, we want to make it a container. There was a famous saying back in 2019, from Docker's founder: if WebAssembly had existed in 2008, Docker would not have needed to be invented. The world we are envisioning makes WebAssembly much more like the JVM of the past, and the Docker container of the more immediate past: a runtime that can stand on its own, instead of just a plug-in mechanism for other platforms.

WebAssembly has been very successful as a plug-in mechanism for other platforms. In the browser, for instance, you don't use WebAssembly on its own. It always interacts with the JavaScript runtime in some capacity, because it doesn't have its own network support, and it doesn't have access to the file system or other parts of the operating system. So you use WebAssembly to do compute, and when the result comes back, you use JavaScript to render it. That's the browser. Another big WebAssembly use case is the blockchain, and it's the same story: you have the consensus mechanism and the whole platform already built, and you use WebAssembly to do one thing, which is run smart contracts. The smart contract, in my opinion, is what serverless functions should have looked like to begin with. It's a piece of code that you write, or anyone can write; it can have bugs in it or bad intentions; and you submit it to a network. You don't care whose computer, whose node, is running it, but somehow you get the correct result back from the network, and you pay for each execution. That, in my mind at least, is what serverless computing should look like. But in that context, WebAssembly still runs as a plug-in mechanism: you have a much larger platform that is already built, and you use WebAssembly to do an important but small piece of work, the computation, stateless computation in particular.

Our vision for WebAssembly is different. I started the project WasmEdge, which is a WebAssembly runtime in the CNCF, and we have always wanted to make WebAssembly more like the JVM, or more like the Docker container, so that you can run an entire stack of software in it, including the networking: to use WebAssembly to create, say, microservices, where you can call out to other services and respond to requests; to use WebAssembly to do AI inference, replacing the Python layer of AI programming with Rust compiled into WebAssembly. To do that, we need a WebAssembly runtime that is more self-contained. And the benefit is also very strong, because of what WebAssembly gives you as a container format. One of the things I learned at this conference is that there is a huge focus on cloud security and software supply chain security issues, and WebAssembly is, at least from my point of view, one of the leading candidates to help with those issues. If you have a Linux container, you essentially have a very wide attack surface. Because you are running Linux, you could have SSH turned on without knowing it; there are lots of issues you have to deal with. But something like WebAssembly you can really lock down. It exposes a very tiny attack surface, and it has a very simple software supply chain, because you compile everything into a single piece of bytecode and run it.
So there are many, many benefits to running WebAssembly as its own container format, as its own runtime for microservices. And we can make WebAssembly fully OCI compliant. That's the diagram on the slide, where the green boxes are the things we have already implemented and support. I can build a Docker image that contains only one WebAssembly file and publish it on Docker Hub. If I ask Kubernetes to run that image as-is, it will complain about a wrong format, because it doesn't see Linux in it. However, by making a change at the OCI runtime level, by changing crun (our changes are now being merged into upstream crun), I can make the runtime aware of whether this is a WebAssembly workload or a Linux workload. If it sees a single WebAssembly file in the Docker image, it knows to invoke WasmEdge to run that file, with all the services and applications in it. And because Kubernetes is such a nicely layered system, that one change means it works with containerd, with all the CRI runtimes, and with the Kubernetes applications and everything above them.

So our idea has really been to use WebAssembly as a security sandbox and container to run microservices side by side with other OCI workloads, with Linux containers and VMs and things of that nature. To do that, we have to support JavaScript, because of who microservice developers are. We can't tell people they have to use Rust, although I would love to tell people that. In reality, maybe 90% of developers are using JavaScript or languages of that nature. So to make this widely applicable and widely adopted as a microservice runtime, we had to figure out a way to run JavaScript inside WebAssembly. The whole value proposition is that this is a container: it contains everything that runs in it. I can't have extra components, a V8 or something like that, hanging around outside the WebAssembly runtime, hand people a big binary, and say "just run this." I have to put everything together as a container that runs WebAssembly bytecode. That, I think, is the primary driver for running JavaScript inside WebAssembly: I want to write microservices, and the microservice is completely contained within the WebAssembly runtime, managed by Kubernetes just like the Docker containers and VMs that exist in the same cluster. This is why we spent so much time on it.

There are lots of use cases, and I'll go over them very briefly. There are many cases where you want a microservice managed by WebAssembly instead of by a Linux container. Today, lots of microservices are managed by Linux containers; those are long-running services that need the full capability of Linux. But you can have a lot of services that are transactional in nature, or that have to live on the edge cloud, on a CDN network, or on an edge device. The example I always give: say you have a doorbell camera. The camera takes a stream of pictures, and one thing you don't want it to do is send every picture into the cloud for recognition, because there are lots of privacy issues, and most of the time it's people you know, your family members and so on.
Only in the rare case will it find a stranger, and in that case it should recognize that and issue an alert. So there's a lot of benefit to doing this on the device, although the device itself may not have the capability. So do it in your home, on a server you install in your home, like a set-top box or a NAS. Or, barring that, go to an edge data center, something within your city, instead of going all the way across the internet to be processed somewhere you don't even know. Cases like that, I think, are the sweet spot for WebAssembly-based microservices, because those services can be programmed and executed in a very efficient manner. And ideally, people want to use JavaScript to do it; at this stage, at least, there are not that many Rust or C++ developers available for it. So that's the use case.

For the rest of the talk, I'll cover a little bit of how we did it. The basis of the implementation is actually simple, and it's built on other people's work. There is a wonderful project in the community called QuickJS. It's a very compact JavaScript interpreter, written in C, and it so happens that it can be compiled into WebAssembly. So now we have a JavaScript interpreter that understands the whole JavaScript language and runs inside WebAssembly, and I can feed that interpreter a JavaScript file, so the JavaScript file is also executed inside WebAssembly. That's really the basic idea, and it does the basic stuff: it understands JavaScript syntax and it can run JavaScript programs.

However, when I tell people we now support JavaScript, they immediately take an existing JavaScript application and try to run it, and 99% of the time it fails. JavaScript has such a big ecosystem that you can't just say you support pure, standalone JavaScript programs. Most JavaScript applications out there depend on other modules, so you have to support modules in your runtime as well. So one piece of work we did was to support ES6 modules in our JavaScript runtime. If you're interested, you can look into the code for this example; it's just a small demo that shows how to define a module and how to call it (there's a sketch of the idea below).

Of course, the JavaScript ecosystem has more than ES6 modules, so we went ahead and supported CommonJS as well, through rollup.js. The word they use for this is "bundling": rollup figures out all the dependencies of your JavaScript application, pulls in all those modules, and combines them into one big file, and then you just execute that big JavaScript file. That solves the CommonJS module support problem. So CommonJS, npm, and ES6 modules can all be supported with the runtime we built, for WasmEdge in particular.

But we should not stop there, because one of the great advantages of WebAssembly is its performance, and if we introduced a JavaScript interpreter and just stopped, performance would really suffer. The benchmark, the leading JavaScript engine, is V8. V8 has, I think, 15 years of optimization and thousands of PhDs working on it. It's impossible to exceed the level of JIT optimization and performance that has been achieved there.
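To give a flavor of the ES6 module demo mentioned a moment ago, here is a minimal sketch of defining and calling a module (my illustration, not the exact demo code):

    // greet.js: define an ES6 module with one exported function
    export function greet(name) {
      return `hello, ${name}`;
    }

    // main.js: import the module and call it
    import { greet } from './greet.js';
    console.log(greet('WasmEdge'));

The point is simply that the import statement resolves inside the WebAssembly-hosted interpreter itself, with no Node.js involved.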
So with a simple JavaScript interpreter, you have this problem. People say it's nice to run JavaScript inside WebAssembly and get the security, footprint, and startup-time benefits, which are orders of magnitude better than, say, running Node.js inside a Linux container. However, you don't have V8, so at runtime you are, say, three times slower, and that would be unacceptable. So one of the things we really wanted to do is take advantage of WebAssembly being multi-lingual, its ability to support multiple compiled languages. The solution we came up with is to build a Rust API that allows Rust developers to write JavaScript APIs. The idea is that if I want to perform something like AI inference, something that takes a long time to complete, I write that function in Rust, but there is an API that lets me expose it as if it were a JavaScript function. Other developers can simply call that JavaScript function, and the system knows how to route the call to the Rust implementation, compiled to WebAssembly, and execute it in a much more efficient way. By doing that, we can improve the runtime performance of a lot of functions by a large margin.

Essentially, this is how Python does machine learning. People always say Python does machine learning, but it doesn't, really: most of the heavyweight machine learning work is passed to the underlying C++ libraries. If you had a pure Python interpreter environment, you actually could not run TensorFlow or anything like that; you have to have the C APIs plugged in. It's the same idea here. You have a compiled, high-performance language that lets system developers build those APIs and present them as JavaScript APIs, so that JavaScript developers can use them.

Based on this idea, there are a couple of things we did. For instance, on WasmEdge, using JavaScript, you can now invoke other web services, or create a server that responds to inbound requests, using non-blocking network I/O. That means that although it's a single-threaded environment, like other JavaScript environments, it's non-blocking: there is a scheduler that can schedule things around, so you can have multiple connections open at the same time, and while waiting for each of them to finish, it can accept more connections instead of blocking.

Here are some code examples for making an HTTP request and getting an HTTP response. The way we implemented this is through Rust. We built a socket API for WebAssembly by supplementing the WASI standard (the WebAssembly System Interface) with the ability to handle sockets, used Rust to access that layer, and then exposed the Rust API as a JavaScript API that lets people write programs like this. A program like this should be very familiar to JavaScript developers: there's no Rust here, and there's not even any WebAssembly here. As far as they're concerned, they're writing a JavaScript application that does some kind of server or HTTP operation. And they can package it as a web service and enjoy all the benefits the WebAssembly container format gives them: the security, the footprint, the startup time, and all that. So here's one example.
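As a rough sketch of what such a program looks like, something along these lines (the module name 'wasi_http' and the function signatures are illustrative, not the exact wasmedge-quickjs API; check the project docs for the real thing):

    // Plain JavaScript as far as the developer is concerned;
    // underneath, the calls route to Rust compiled to WebAssembly.
    import * as http from 'wasi_http'; // hypothetical module name

    async function main() {
      // Non-blocking request: the runtime can service other
      // connections while this one is waiting.
      const resp = await http.get('http://example.com/api/data');
      console.log(resp.status, resp.body);
    }
    main();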
Another related example is an API that Node.js users have been asking about forever: the fetch API. With fetch, you can do a lot of interesting things. For instance, you can do server-side rendering: you can have a React application rendered on the server, because the server can fetch all the data where the page is being rendered. But to do that on the server, you need the server-side stack to be able to speak other network protocols, access databases, and access other web services. So we implemented that as well.

Then there's another use case I thought was interesting: we built a WebAssembly extension that interacts with TensorFlow. It's pretty much what Python does. You have Python, the interpreter, and the TensorFlow library underneath it; we replace Python with WebAssembly, for inference only. That allows people to write a Rust application that talks to the underlying TensorFlow library, so it can run TensorFlow models to do things like image recognition. And on top of that, we package that Rust library behind JavaScript APIs. The example is fairly straightforward: read the input image, define the input tensor (what is the input format of the model, and how do I structure this image for it), then run it through the model and get the results. I'll show a sketch of this kind of API below. We have a demo on our website: give it a picture of a hot dog, and it tells you it's a hot dog. A very typical MobileNet demo. And this ties back to the facial recognition example I mentioned earlier; you can see it can be done this way. A JavaScript developer can write an application that does facial recognition from a camera, and underneath, it takes full advantage of TensorFlow; if there is a GPU, it can use the GPU. All the data preparation is done by Rust, but the API stays JavaScript, so the application developer can use a very simple API to build their microservice for edge devices and things like that.

Here's a more complex example: we can actually do React 18 streaming SSR. React 18 provides a way for the entire UI to be rendered on the server side, by executing the JavaScript that was meant for the browser on the server instead. It's called isomorphic web programming: the same piece of code runs on the client side and on the server side. We have completed this demo to show that it works fully, end to end, so our JavaScript engine can handle a complex use case like that. If you're interested, you can look into the demo.

But there's more, because along the way we encountered a lot of issues. What are they? It's one thing to tell people we support JavaScript; people immediately grab an existing JavaScript library, like I said, and try to run it. If you look at the JavaScript libraries out there, especially on the server side, maybe 90% of them use some kind of Node.js API.
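As promised, here is the kind of JavaScript that inference API enables. This is a sketch only: the module name 'wasi_tensorflow' and the method names are invented for illustration, not taken from the actual API.

    // Hypothetical JavaScript-facing wrapper over the Rust/TensorFlow layer.
    import * as tf from 'wasi_tensorflow'; // illustrative module name

    const img = tf.loadJpeg('hotdog.jpg');              // read the input image
    const session = new tf.Session('mobilenet.tflite'); // load the model
    session.setInput('input', img.resize(224, 224));    // shape the image to the model's input tensor
    session.run();                                      // heavy lifting happens in native TensorFlow
    console.log(session.getOutput('predictions'));      // class probabilities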
Back to the issues: those libraries use Node.js APIs to do networking and things like that, and that causes problems on our end, because we only know JavaScript the language; we don't have the Node.js APIs. So there's a big community effort going on within the WasmEdge community, and I'd love to have you contribute to it. In our GitHub repository it's issue 1535, and it has a long list of JavaScript APIs that we need to support in our runtime, each with a different priority, the highest-priority ones being those we need to support immediately. Some of them we can polyfill with other JavaScript; we can use JavaScript to implement JavaScript (there's a small sketch of that below). But for quite a few of them we have to drop down to the operating-system level, meaning we have to use Rust or C to implement them and supplement the runtime.

I call it a community effort because we are working with the Linux Foundation's internship program. We have four summer interns, all graduate students, working on different aspects of this. Hopefully by the end of the summer, by the next conference, we'll be able to say we have full support for the Node.js APIs. By then we should have a much more complete JavaScript story, because most JavaScript libraries would be able to run on a WasmEdge-managed platform. This is also where we would love to see more community engagement: come to our GitHub and tell us what you want to see, whether this approach is right or wrong, and which of your favorite JavaScript libraries you want to run inside WebAssembly. We can figure out which APIs they use, whether Node.js or something else, and prioritize accordingly. We'd love to see participation from you.

So that went through the implementation ideas and the need. I have one last topic: QuickJS versus V8. I think this comparison is memorable, and I've touched on many of these issues throughout the talk. People say that JavaScript has a crown jewel called V8, and you cannot exceed the performance of V8. I acknowledge that; it's pretty much impossible to exceed. I've looked at the source code, and it's beyond comprehension, at least beyond me. So the first objection is that QuickJS is much slower than V8, especially with the JIT turned on. However, like I said, we believe the key bottlenecks can be improved dramatically: stream processing, AI inference, the things that really take a lot of time. If something takes only five milliseconds, being three times slower is not a big deal; it's 15 milliseconds. But if something takes close to 100 milliseconds, being ten times slower puts you around a full second, and that would be bad.
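Going back to the polyfill idea for a moment, here is a minimal sketch of using JavaScript to implement a missing Node.js API (my own illustration, along the lines of Node's querystring.parse, not code from the project):

    // A pure-JavaScript stand-in for a small Node.js API.
    // No Rust or operating-system support needed, so it can ship as a polyfill.
    export function parse(qs) {
      const result = {};
      for (const pair of qs.split('&')) {
        if (!pair) continue;
        const [key, value = ''] = pair.split('=');
        result[decodeURIComponent(key)] = decodeURIComponent(value);
      }
      return result;
    }

    // parse('a=1&b=hello%20world') gives { a: '1', b: 'hello world' }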
So we try to identify those performance bottlenecks, implement them in Rust, and improve JavaScript performance in our runtime that way. The second thing that speaks to the benefit of the QuickJS approach is that it's much smaller than V8, especially if you consider that V8 is not a container, so V8 has to run inside Docker, inside a Linux container, and that gives you something like a gigabyte of footprint, because you have to put the Linux libraries and things of that nature in there. WasmEdge with QuickJS is around ten megabytes or less. That is a very big difference in footprint.

And we'd like to argue that QuickJS in a WebAssembly container is safer: it has a smaller attack surface, it has a simpler supply chain, and the WebAssembly runtime is designed to be a security sandbox, at least in the browser. In fact, V8 with the JIT turned on has become problematic on the server side. I think a lot of people know that. One fundamental problem is that V8 is designed for the browser, so for issues that only happen on the server, the V8 team will say it's not a priority for them to fix, which I can completely understand: the most important thing for them is to make it run better in the Chrome browser. So on the server side, especially in high-density computing environments where you have to run a lot of other people's code, we argue that WebAssembly is a better security model.

And the last point: QuickJS in WasmEdge is more manageable, because we can be OCI compliant and fully integrate with Kubernetes and the other container tools, containerd, CRI-O, and all of those. We have tested and validated that they can load WebAssembly files, WebAssembly-based images, and run them in the same cluster. Our ideal world is one cluster that runs VMs, containers, and WebAssembly runtimes, all three side by side, depending on the type of task: WebAssembly runs the computationally intensive but transactional tasks, the Linux container runs the long-running tasks, and the VM runs the high-security tasks. I think that gives ops people a lot more flexibility.

Of course, the approach we've just talked about is our approach.
There are other ways to do it. Mozilla has its own JavaScript runtime called SpiderMonkey, and there is a major effort to compile SpiderMonkey into WebAssembly, also with a Rust interface. That would allow it to do everything I've just shown, but with a different JavaScript runtime: SpiderMonkey instead of QuickJS. We really look forward to that, because SpiderMonkey has a JIT in it, so it could really improve performance, although it will probably be bigger than QuickJS. It would be super nice to offer the community a choice between a larger footprint and faster performance. So that's one thing.

Then Shopify has a project called Javy, which is a Rust wrapper around QuickJS, similar to our approach. I would say Javy is more of a runtime-independent approach: it tries to conform only to the WebAssembly standard, whereas we make a lot of our own extensions to the WebAssembly runtime. Because our project is a WebAssembly runtime, we can optimize it for the use cases we identify, like running JavaScript. So I think that's the difference: Javy has less capability than what I've just demonstrated, but it runs across different WebAssembly runtimes, so it has better portability.

And the other approach we have been experimenting with, where I would really love community feedback and to see whether anyone wants to work on it, is to use V8 through host functions. It's a combination: at the bottom you have WASI, which goes to libc; you have the AI inference pieces, like TensorFlow, which go to the TensorFlow library; and then JavaScript execution goes to V8. This is something we are experimenting with, but we are an open source project, and I'm talking to researchers, students, and developers, trying to convince them to do this. That's also partly why I'm here: to engage the community and find like-minded developers who want to explore these ideas further.

So that's the end of my talk. Thank you very much for your patience this late in the afternoon. I have a couple of minutes for questions. Yes, please?

Let me repeat the question. The question is how exactly this works, how the different components fit together. The way it works is this: you have QuickJS, written in C, and QuickJS compiles into WebAssembly. Then we have a Rust API that is compiled together with QuickJS, and it accesses the plugins we built into WasmEdge, meaning the networking, the file system, TensorFlow, and all that. So what you get is the WasmEdge container and a WebAssembly module that runs inside it. That module is the JavaScript runtime, and I feed it a JavaScript file: the JavaScript file is a text file passed as a parameter to the module. From the top, all you see is WasmEdge. When you start WasmEdge... let me show you the command line.
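The invocation looks roughly like this (file names follow the wasmedge-quickjs examples; treat the exact flags as illustrative):

    wasmedge --dir .:. wasmedge_quickjs.wasm hello.js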
This is how you use WasmEdge on the command line; there are many different ways to use it. The Wasm file here is wasmedge_quickjs.wasm, which is QuickJS compiled to WebAssembly, with the Rust API already bundled into it. Then you pass another parameter, hello.js, which you can pass as a string or as a file. WasmEdge loads the Wasm module first; the Wasm module then loads the JS file and the rest of the parameters, and executes that JS file inside the module. So there is only one Wasm runtime involved here. Yes?

Oh, yes, that's in our documentation. It's a Rust compile: this QuickJS has a Rust wrapper around it, so it presents itself as a Rust project. You build it with cargo, but the target is Wasm, wasm32-wasi, instead of a CPU target like ARM or x86. The build artifact, the result, is a binary file that ends in .wasm, basically.

There are things that are already built in, like the networking and everything I showed. However, if you want to add your own, using Rust to implement your own JavaScript API, you fork the wasmedge-quickjs project, put your Rust code in the extension folder there, and compile it into WebAssembly again. That's one of the interesting things about WebAssembly: at least for now (I know there's a lot of talk about dependency modules and things like that), it produces a single executable binary, so the supply chain is really easy to determine. What goes in there, you can fairly easily analyze.

Okay, one last question, please. The question is: what's the difference between WasmEdge and Wasmtime? Well, I was hoping for an easier question, but I would say there are two big differences. One is that the implementation is different. Wasmtime itself is implemented in Rust; WasmEdge is implemented in C/C++ and has many interfaces, of which Rust is one. We are in the Rust community, so a lot of people keep asking: why don't you do it in Rust? It strongly reminds me of the old days of the Java community, when people always asked, why don't you rewrite this in Java? Why don't you write the JVM in Java? From our point of view, we want WasmEdge to be adapted to edge computing, so we want it in places where the Rust toolchain may not be so well established: on RISC-V CPUs, inside Intel TEEs like SGX, that sort of hardware environment, and also on non-Linux operating systems. For instance, we adapted it to seL4, a real-time operating system, and to OpenHarmony. There are a lot of places where the Rust toolchain may not be that well established, but we still want the runtime to compile there. So that's one reason. The other reason, of course, is that I think the community needs diversity in WebAssembly runtimes.
You can't have just one WebAssembly runtime; if a bug is discovered in it, everybody breaks. So from that aspect, that's one thing. The second is the features. Wasmtime is a Bytecode Alliance project; we are a CNCF project. Our philosophy has been to build first and see what becomes a standard; we don't try to come up with a standard first. For instance, in our contributions to WASI, you have seen all the experiments we have done, and none of them are currently standards. We want to see what the community adopts and then push that as a standard. You could see the issue with the standard-first approach clearly in the early days of Java, with EJB versus Spring and all of that. And today, one of the big frustrations in the WebAssembly runtime community is that networking capability, the WASI socket work, has been lagging for almost two years, with very little progress in the standards community. I think that's because the people who make the standards don't really have this need: a lot of them are running WebAssembly as an embedded runtime, so there is no urgent need for a standalone WebAssembly runtime that runs microservices. Things like that have been lagging. We take a different approach: we implement first. It may be right, it may be wrong, who knows? The community will tell us. Of the ten things we did, nine may well be wrong, but one will be right, and we will push that one to be a standard. That's essentially our approach. Of course, there are many, many technical details where we differ from Wasmtime and other Wasm runtimes, but I think those are the biggest high-level differences. All right. Thank you.