Hey, good morning, everyone. Let's give it a few more minutes to see if more people join. Good evening. Good morning. Hey, good morning, Michael. Are you in Texas? No, I'm not in Texas. I'm stuck in Beijing and can't go back, so it's nighttime for me. This is my office. Is everything open there, the offices? Oh yeah, in China everything is open, as if nothing has happened. But the borders are closed to foreign travel, so they don't let foreigners in and out now. What about people there, can they go out? Yeah, if you go out, you pretty much can't come back. That's the problem: if you leave, you can't come back. I see. I think we can get started; it's three minutes past the hour, and other people can join as we go. We're excited to have you here to talk about SSVM, the Second State WebAssembly virtual machine runtime. WebAssembly is a newer technology, and we're excited to hear more about it and how you can run workloads with it. Hi, good morning. Shall we wait a few more minutes, or shall we just start? We can start. Okay. Let me share my screen. Yeah, for sure. We usually keep these sessions interactive, so we may interrupt you with questions at times. Please do. You can see my screen, right? Yep. Okay. First of all, it's a real pleasure to present to this group. I want to talk about SSVM — a very unimaginative name: the Second State Virtual Machine. It's a WebAssembly virtual machine designed for hosting environments outside the browser. As you know, the WebAssembly standard was developed as a second, fast virtual machine runtime inside the browser, alongside the JavaScript virtual machine, and it has made good progress in that area.
But in recent years, people coming from the blockchain world and from the server side have become increasingly interested in running WebAssembly outside the browser. For old-timers like myself, it looks very similar to the path Java took from 1997 through the early 2000s, when I had a startup building a Java application server. We all remember the Java applet story: it started inside the browser, but in the end Java became successful as a server-side technology. That's also where we see WebAssembly runtimes going — but times have changed, with some significant differences. Back in the days when we still had Moore's Law, performance wasn't such a big issue, and developer productivity was the predominant concern. The Java virtual machine was pretty much designed to trade performance for developer productivity, at least when it was first developed; of course, a lot of optimization over the years has made it much faster now. WebAssembly, by contrast, is designed from the ground up to be very fast and high performance, so it can take full advantage of the hardware. But it also tries to preserve developer productivity through cross-platform compatibility, which is more of an issue now than in the past, because we have many more operating systems and chip architectures on the server side than in, say, 1997, when there were pretty much just a few CPU architectures. Now we have AI chips, GPUs, and a bunch of other things, and we also have a much longer history of the Linux operating system.
We have operating systems dating back 10 or 20 years that are still being used in production. Those are the problems I think WebAssembly sets out to solve, and it's why we think it could be a good fit for the cloud native foundation. I have worked with the Linux Foundation in the past, and I really like how the community works. By the way, I used to work at Red Hat — that's my connection with Linux. So that's the introduction to where I come from and where the SSVM virtual machine comes from. We call it a popular WebAssembly virtual machine optimized for high-performance applications, and it has been an open-source project on GitHub from day one. If I may, perhaps I should show you the GitHub page. Can you see the screen? Yep. Yep. Okay, great. For an infrastructure project it's fairly popular: it has about 700 stars, and it has been around for two years. There are close to a thousand commits from day one, but mostly from developers in a close group — Second State, the company, and people associated with us. So I wouldn't say it has a very large contributor and committer community yet. That's one of the reasons we want to join CNCF as a sandbox project: so we can collaborate more with developers in the community. So there are some people who like it — 700 people — and there are people who have contributed code.
And there is fairly comprehensive documentation on where and how to run it on different versions of Linux, how to build it, and how to use it for advanced tasks like AI inference and working with TensorFlow. So it may be interesting to take a look at what the project is and its history, and to download the code and play with it yourself if you're interested. That's the project and its GitHub page. Now, perhaps the most important slide in my prepared presentation — and feel free to interrupt me at any time — I want to talk about six application scenarios where SSVM in particular, and WebAssembly in general, can bring value to the cloud native ecosystem. People say: on the server side we already have Docker and Kubernetes, which give you the full capability of the operating system — you can run any application or framework you would normally run on Linux inside those containers. Why do you need something that is potentially even more limiting? Because with WebAssembly, you don't get the whole operating system; there's a clear boundary on what type of application you can run, and you need to conform to some kind of SDK. So people ask that question, and these scenarios are what we see in the market — things people use our product and other server-side WebAssembly products to do. The first is what we call Jamstack applications.
Jamstack refers to JavaScript, APIs, and Markup. The idea is to use a static website generator — like Hugo, Next.js, or the other popular one, Gatsby — to generate a static website and distribute those static pages through CDNs. The front ends have JavaScript in them and provide interactivity through APIs on the back end, so you get a complete separation between front end and back end. The front end can be distributed any way you want: GitHub Pages, CDNs, even a peer-to-peer system like IPFS — you can distribute it just like files. Then you host the interactive pieces, the web services, on the back end. That way of developing web applications has gained a lot of popularity in the past two or three years; a lot of developers love it, especially with GitHub providing GitHub Pages for free. The key issue in a Jamstack application is providing simple, easy-to-use APIs on the back end for the front-end JavaScript to talk to. WebAssembly plays a role here because it can provide very lightweight serverless functions. When the front-end application needs some functionality performed in the cloud — for instance, sending an SMS message to a user at login, or querying a database — you need a really light function executed on the back end. Cloud providers offer serverless functions for this, like AWS Lambda, but the problem is they are fairly heavyweight, because they are virtual-machine-based or container-based.
You are basically setting up an operating system and then running Python or Node.js on it. There has been a lot of discussion about how wasteful this type of architecture is in terms of resource use, and it also has the cold-start problem — startup takes too long. And it's not CDN-friendly: all those static front-end files generated by your framework get distributed to CDNs close to your customers for performance reasons, but your back-end services still sit in a centralized cloud. People still have to go all the way back to AWS just to log in, things like that. That's not an ideal architecture. So companies like Fastly and Cloudflare came up with the idea of serverless functions on edge nodes: they execute untrusted, user-uploaded functions close to the edge, on the CDN network. And the way to do that is not Docker or heavier containers, but very light, language-based virtual machines like WebAssembly. That's the direction the edge computing industry is trying to move. So that's one category of applications — I'll give you a full page of links to applications built this way: AI inference, image processing, and so on. In a Jamstack application the front end is just pages, the back end is serverless functions, and those serverless functions execute on a lightweight VM; SSVM or another WebAssembly VM could fit that bill. So that's one area where we see cloud-based WebAssembly fitting: the Jamstack.
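To make the "lightweight serverless function" idea concrete, here is a minimal Rust sketch of the kind of pure function such a platform runs. The function name, behavior, and JSON shape are invented for illustration — this is not any platform's real SDK, just the general shape of a function that compiles to WebAssembly with near-zero cold start:

```rust
// A pure function like this can be compiled to WebAssembly and invoked
// by a serverless runtime directly -- no OS image or language runtime
// has to boot first, which is where the cold-start savings come from.
//
// Hypothetical example: normalize and validate a phone number before
// the platform sends a login SMS, as in the scenario described above.
pub fn validate_phone(input: &str) -> String {
    // Keep only the digits the caller typed.
    let digits: String = input.chars().filter(|c| c.is_ascii_digit()).collect();
    if digits.len() >= 10 && digits.len() <= 15 {
        // Return a small JSON payload for the front-end JavaScript.
        format!("{{\"ok\":true,\"normalized\":\"+{}\"}}", digits)
    } else {
        String::from("{\"ok\":false}")
    }
}
```

The point is that the whole unit of deployment is one sandboxed function, not a container image, so it can be replicated cheaply across CDN edge nodes.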
The second use case is more like the traditional JVM or Java story: providing a unified API over something complex. Here the example is a unified API for AI inference. As we know, deep learning and AI have gained a lot of attention in the past couple of years, and there are lots of books that will teach you how to train an AI model in Python. However, deploying an AI model in production has always been quite difficult. There's TensorFlow Serving, and there are lots of things you have to wrangle to get Python and everything else working together. And it's not efficient unless you have access to at least a GPU — and custom hardware is probably best, like AWS's Inferentia chips. So there's a lot you have to do at deployment time. Asking developers to write code specific to the deployment platform takes us back to pre-Java days, when you wrote a web application knowing the underlying architecture — that it's Sun SPARC, say, not x86 Linux. That's a huge burden for developers. In our particular case, we partnered with Tencent Cloud; they have a serverless business that uses Docker containers to run serverless functions. The Docker containers in question run CentOS 7, which is quite old and doesn't run TensorFlow — you can't even compile TensorFlow for that environment.
From the developer's point of view it's very difficult to get working: you first have to choose which AI framework even works in that environment — TensorFlow, ONNX, or something else — and then figure out the exact configuration of the operating system so you can statically link newer versions of glibc into your stack to get it running. The way the WebAssembly virtual machine works, it can abstract all of those issues away. We can have them preconfigured by us, in collaboration with Tencent, once and for all. Then we provide a language-level API — in Rust, in Swift, in TypeScript — to developers. The API basically lets the developer say: load my AI model, here is the input tensor, run the model, and here are the output tensors. Those are very generic operations for running AI models behind a unified API. The WebAssembly runtime on the deployment platform then figures out how to route those computational tasks: if I'm running on Tencent serverless, this should be executed by TensorFlow on the CPU; if I'm on AWS, it could be run by the ONNX runtime on an Inferentia processor. We try to make things easier for developers by providing an abstraction layer on top of the operating system, at the language level — pretty much what Java did 25 years ago. I have a question. Yes, please — and thank you for the presentation, this is really interesting. With these WebAssembly modules, when you're serving a model, I guess you will still have something like TensorFlow Serving in the back end, right? And this will just be the API layer. And I suppose this could actually live anywhere.
It could also live at the edge, because they're very lightweight modules. Yes. So have you seen use cases yet for where people put the WebAssembly modules relative to the server and the instance that holds the actual model? You mentioned it can be in AWS or somewhere else — is it better to be closer to the edge, or have you seen people just put it anywhere? And I guess the same question applies to the web apps: you have these really lightweight modules, but they still need to talk to a back end, and that back end could live anywhere — AWS, a database, a Kafka broker, whatever. Does that make sense? Yeah, yeah. The whole landscape, the whole industry, is still at an early stage. Where we want to go is to have those WebAssembly modules deployed directly on edge nodes, and the edge nodes could be a variety of things. It's not just the public cloud edge, which I think is still a pretty distant network edge. Especially in China, you have small, government-subsidized data centers inside factories. That's one of the places where the government has a lot of power: it tells all the factories to modernize and build a data center in each of them. So they build the data centers, and then there are no tasks to run, because there isn't much computationally intensive work in a factory — so they unplug the power. They're completely idle, sometimes not even powered up. There are all kinds of computational capabilities out there like that, and our goal is really to work with edge cloud providers, because they have access to resources like that.
They have spare computational capacity in 5G towers, for instance, and in a lot of infrastructure projects like that. They can put it together and run WebAssembly to provide computational services for residents of the nearby city. Those facilities were originally designed for the factory, but if the factory is not using them, they can be used for something else — processing camera images, for instance; there are lots of things you can do. And WebAssembly provides a much more efficient way of doing that. As we'll see later, it also provides more efficiency on the operations side, and it's more developer-friendly, especially for AI applications, because you're not constrained by whatever operating system is running inside Docker. The container format is standard, but you can run any operating system inside it, which becomes very nonstandard. So it's developer-friendly and operationally efficient. The trade-off, of course, is that it's not as flexible as Docker, so right now you see a mixture of both. If you ask me what people are running on the edge today, it's things like the KubeEdge project — a CNCF project that uses Kubernetes to spin up containers on those nodes. There's a variety of ways of doing it. Is WebAssembly mainstream there yet? No. But we're hoping to make it more mainstream. That makes sense. Yeah, thank you.
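The unified "load model / run / read output tensors" API described a moment ago can be sketched as a small abstraction. To be clear, this is not SSVM's actual Rust API — the trait, the names, and the trivial stand-in backend below are all invented for illustration; a real runtime would dispatch to TensorFlow on CPU, an ONNX runtime on accelerator hardware, and so on:

```rust
// Illustrative only: the application codes against one trait, and the
// deployment platform supplies whichever backend the hardware supports.
pub trait InferenceBackend {
    fn load_model(&mut self, model_bytes: &[u8]);
    fn run(&self, input_tensor: &[f32]) -> Vec<f32>;
}

/// Stand-in backend that "infers" by scaling the input tensor.
pub struct DummyBackend {
    scale: f32,
}

impl DummyBackend {
    pub fn new() -> Self {
        DummyBackend { scale: 1.0 }
    }
}

impl InferenceBackend for DummyBackend {
    fn load_model(&mut self, model_bytes: &[u8]) {
        // Pretend the first byte of the "model" encodes a scale factor.
        self.scale = model_bytes.first().copied().unwrap_or(1) as f32;
    }
    fn run(&self, input_tensor: &[f32]) -> Vec<f32> {
        input_tensor.iter().map(|x| x * self.scale).collect()
    }
}

// Application code is written once against the trait; the platform
// decides which concrete backend actually executes the model.
pub fn infer(backend: &mut dyn InferenceBackend, model: &[u8], input: &[f32]) -> Vec<f32> {
    backend.load_model(model);
    backend.run(input)
}
```

The design point is that the developer never names TensorFlow, ONNX, or the hardware — that binding happens at deployment, the same way the talk describes routing between Tencent serverless and AWS.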
That leads us to the next topic: edge devices. This is where we collaborate with people in industrial settings — for example, the new software-driven cars. After Tesla, the electric cars are all just giant computers sitting on batteries and wheels; they're not really cars anymore, they're computers, and you need a lot of software in them. We've worked with a company that uses KubeEdge and Docker to run software inside the car: because the car needs to run a lot of third-party, untrusted software, they use Docker for isolation. From our point of view — and I think everyone here understands this — that's a huge waste of resources. It's also the source of the saying you've heard over and over: the car has more software than the spaceship. There's so much redundancy and complexity in running software in a car. So that's an area where we really want to improve SSVM: to make it run better on real-time operating systems like StratX, and to give access to the car's control hardware through what we call the WebAssembly System Interface, the WASI interface — expanding WebAssembly in that way so it can run software more efficiently inside cars. So that's the third application scenario: edge devices, where there are lots of different operating systems and lots of different hardware that people have to deal with.
We hope WebAssembly will give developers a better abstraction for targeting those platforms. Then the fourth and fifth scenarios are similar: you have a large system — a SaaS system, an IoT messaging system — and when you go to a customer, you need to add functionality specific to that customer. The traditional way to do that is through configuration files or APIs: for easy things you make it configurable, and for complex things you expose APIs that let people call your system from outside and register callbacks, things like that. We think another approach, which is becoming more and more popular, is to make it serverless — meaning that instead of the system making callbacks to an external API, you ask developers to upload a piece of software, a function, and you run it inside your own infrastructure. In that scenario, unknown code uploaded by a developer has to run inside your SaaS system, and you need a sandbox for it. Docker, or a traditional application container, is too complex and too slow, whereas WebAssembly is actually just about perfect for this scenario. We've already seen this in quite a few large-scale projects; in America, Shopify is probably the biggest one doing it. Their scenario is actually pretty interesting. Shopify is an e-commerce website builder, and one of the heavy customizations they need to support is giving shop owners the flexibility to define discount rules at checkout. If I'm a shop owner, I want to say: if you buy three of this, get a fourth free.
Or buy three of this and get free shipping — there are many rules like that. In the past they did it with templates, which addressed many of those needs but not all of them. You could argue a better way is to let people upload a small piece of code — use code to describe what they want. But that creates a situation where the platform has to run this code, and in an e-commerce checkout scenario, of course, you cannot have something like a callback that takes a second, because people will abandon their shopping carts if they have to wait. It has to be done in milliseconds. So the approach Shopify takes is to use WebAssembly to run the uploaded user code: the user uploads a piece of code that says how to discount the items in the shopping cart, and they use WebAssembly to run very fast checkouts on their own infrastructure, instead of the developer having to set up a server for callbacks. This is also the work we do with — well, those are two Chinese characters on the slide, but this is basically the Slack competitor from ByteDance, TikTok's parent company in China: a work-collaboration product. The way they currently expand their platform, if you look at their documentation, is the API-based approach: when a user sends a message, the platform uses an API to make a callback to a server the developer sets up, and the server comes back with a response. That's the typical, very standard way to provide API services.
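Stepping back to the Shopify-style checkout for a moment: the kind of rule a merchant uploads — "buy three, get a fourth free" — really is just a small pure function over the cart. Here is a hypothetical Rust sketch; the `LineItem` type and function name are invented for illustration and are not Shopify's real extension API:

```rust
// Hypothetical uploaded discount rule. The platform compiles this to
// WebAssembly and runs it in-process at checkout, in microseconds,
// instead of making a network callback to a merchant-run server.
pub struct LineItem {
    pub unit_price_cents: u64,
    pub quantity: u64,
}

/// "Buy three, get the fourth free": for every group of four units of
/// an item in the cart, one unit's price is taken off the bill.
pub fn discount_cents(cart: &[LineItem]) -> u64 {
    cart.iter()
        .map(|item| (item.quantity / 4) * item.unit_price_cents)
        .sum()
}
```

Because the function is pure and sandboxed, the platform can run thousands of merchants' rules side by side without trusting any of them with network or file access.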
But there are lots of issues with that: the reliability of the developer's system, whether it's inside China's firewall or not, what kind of security it has — all kinds of things. So we thought an easier way is to just let the developer upload a piece of code and have the platform run it. Those are the two related application scenarios we've encountered for WebAssembly in the cloud as an extension mechanism for a larger platform, like a SaaS platform. So, the last slide — sorry, I have a question. Can you explain again how they do that with the chat applications? The developers upload a WebAssembly module to the chat application service, and it provides unique functionality based on that? Yeah. Say it's a chat application — think about Slack. On the Slack platform you can write bots that respond to users, and the way you write a bot is to register a callback URL with the Slack platform: if my users send a message to the bot, please forward that message as an HTTP request to my server; my server will process it and come back with a response, and you send the response back to the user. That's the normal way — the API-platform way. We thought a better way is: why make the developer run a server? Why not have the developer upload a piece of code instead? The code has an interface where the input parameter is a string — what the user says — and the return value is another string — what the bot says. That piece of code runs inside the messaging platform's own infrastructure, so it never leaves.
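The string-in, string-out bot interface just described fits in a few lines. The function name and canned replies below are illustrative only — not any messaging platform's actual SDK:

```rust
// Hypothetical uploaded bot handler: the platform passes in what the
// user said and relays whatever string this function returns. Because
// it runs in a WebAssembly sandbox inside the platform itself, there is
// no callback URL, no developer-run server, and no network round trip.
pub fn handle_message(user_says: &str) -> String {
    let msg = user_says.trim().to_lowercase();
    if msg.contains("hello") || msg.contains("hi") {
        String::from("Hello! How can I help you today?")
    } else {
        format!("You said: {}", user_says.trim())
    }
}
```

The platform only needs to agree on this one signature with developers; everything else about how the bot behaves lives inside the uploaded module.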
And it can satisfy whatever security requirements they may have, right? Because of the sandbox and all that — yeah, okay, got it. To do this with WebAssembly — the problem is actually worse in China, because in China you have regulations around content: you are not allowed to say things the government doesn't like. If the developer runs their own server, you have to check their stuff somehow, and there are lots of compliance issues. If it all runs on your own platform servers, you can make it a lot faster and a lot more compliant. Yeah, it's much faster. But also, if you end up with thousands and thousands of WebAssembly modules on your own platform, there might be some challenges there. When you do callbacks to external services, you're spreading the work across multiple services — you have the network latency issue, of course, but in this case you have the WebAssembly modules and the whole messaging platform together, and you may have hundreds of them. So there may be some scaling issues, though everything has a trade-off. Exactly. But that's also why we're interested in working with developers in CNCF: what you've just described — how to scale out thousands of WebAssembly instances, how to even start them — is exactly what we consider cloud native, and exactly what we want to improve. Cool. Yeah. Then the last one is also a large use case, though perhaps less relevant here: the runtime for blockchain smart contracts. I don't know how much you all are interested in the blockchain space.
But the first generation of blockchain was Bitcoin — just a ledger, keeping a record of accounts. The second generation is Ethereum, which has a Turing-complete virtual machine on the blockchain: instead of just doing coin transactions, you can attach a piece of code to a transaction, the code executes together with the transaction, and all the nodes have to come to agreement on the result. However, the Ethereum virtual machine was written by Vitalik when he was 19 years old. It's a brilliant piece of software, but not what we would call well engineered, and over the years a lot of people have lost money because it's so difficult to work with. So essentially all the next-generation blockchains after Ethereum decided to choose WebAssembly, because there's a larger developer ecosystem around it, fewer bugs, LLVM compilers, and so on — a complete toolchain you can use. That's where we see a lot of financial interest; as you know, Bitcoin is at an all-time high again, so those guys have money — it's pretty simple. It's also one of the areas where we see a lot of developer interest. So those are the use cases, and I hope I've provided an introduction to why this is needed on the cloud, and why it's not just something you run inside the browser — there are real use cases for this technology on the cloud. Oh, sorry.
The interesting thing about WebAssembly is that it has a core specification at the W3C, plus a number of optional proposals, and a standardization process that is somewhat similar to the old JCP process: people make proposals for what to implement, the community comes up with implementations, there are reference implementations, and then the proposal graduates. Aside from the core virtual machine specification, which was originally developed for web browsers, perhaps the most important proposal is WASI, on the left side — the WebAssembly System Interface. It allows WebAssembly to access operating system features instead of browser features. Inside the browser there is no concept of a file system; you have the DOM on the page instead, so a lot of things are missing. If you run WebAssembly directly on the operating system, you need to give it access to sockets, to the file system — all the things that come standard with an operating system. That is WASI. And the way WASI is defined is extensible, so you can use the same mechanism to call other native facilities: core WASI is essentially WASI for libc, making the standard operating-system libraries available, but you can take other operating-system-level libraries and make them available as WebAssembly function calls in the same way — like TensorFlow, as I just said. It's not part of the operating system, but it is a native library, so you can use the same approach to make it available inside WebAssembly.
So there are many things like that. One of the unique features of the SSVM is that we try to experiment with as many of them as possible and try to support all of them. And because we have many real-world use cases, we also try to improve on them. However, one of the issues with the WebAssembly community right now is that the W3C is the place where you come up with the specs, but there's no place for reference implementations. Fastly, Mozilla, and I think Red Hat and Intel created an organization called the Bytecode Alliance last year. However, we have reached out to them multiple times, and other people have reached out to them multiple times, and it seems they are still trying to decide on the governance structure and all those things. At least to me, they are not very eager to engage with new members or other people in the community. Especially after the Mozilla layoffs last year, Fastly basically hired all the WebAssembly people from Mozilla, so now that organization is basically just Fastly, right? Let me see, okay. So that, I think, is what's missing from the community. And that's also why we want to be part of the CNCF: to push, with the Linux Foundation, to move this community forward, by implementing standards so that we can become the experimental ground for people to experiment with more WebAssembly specifications. So that's part of the rationale for joining. I have a question. What is WASI socket? You said it's under development, right? Yeah. That's another standard to provide network access to the SSVM. Different from WASI, basically?
Yes. The standard WASI, unfortunately, doesn't include the network piece; that part is still under development. So that's also part of our rationale for trying to join the CNCF, right? And then we experimented with a lot of what we call non-standard WebAssembly extensions. We use the WASI-like technique to connect to other native libraries. Like I just said, we connect to TensorFlow and other frameworks, and we connect to blockchain-specific stuff, like Ethereum. On the Ethereum blockchain, for instance, there's no concept of a file system or RPC anymore; it only has the account system and coins and things of that nature. So WASI is the way WebAssembly integrates with the host system, and that's an area we really want to experiment with. That's also our rationale for trying to be part of the CNCF: to have more engagement with the community, so that the project can become the reference implementation, or at least provide feedback to the spec development at the W3C. So, yeah. Another interesting thing about WebAssembly is what we call capability-based security. That's a concept that has been around for a very long time; it's an operating system concept. In a traditional operating system, for each process, your privilege or your security is limited by your user, right? Each user has a certain amount of access. So if I start a process as the root user, it's going to inherit all the access permissions that the root user has. But in WebAssembly it's different. When you start the WebAssembly virtual machine, you can specify the access it has, right?
You can say it can only access the /tmp directory, for example. So even if the WebAssembly virtual machine is started by the superuser, you can still restrict it so that it only has access to the APIs and the resources you explicitly specify when you start the virtual machine. Perhaps that's a minor point in the grand scheme of all the things I've talked about, but people keep asking whether WebAssembly provides better security than Docker or the other containers out there. I would say it has the potential to, because it has a different security model. Does it do it today? Probably not, because the ecosystem is still young and hasn't had time to fully develop all those features. But it does have the potential, because of that different security model. And this is also a point I touched upon earlier: WebAssembly is a way to improve developer productivity when people use high-performance languages like Rust and C++, because it abstracts away the differences in the underlying operating system. In many ways it can be cross-platform, from very old Linux distributions to very new ones, and also Windows and other systems. All you need to do is conform to the specific API that Rust or C++ provides you. Then, of course, there's the question: does the SSVM have any requirements as to what Linux kernel it needs to run on, and are there plans to support other operating systems like Windows or macOS? Because this is primarily a server-side story, we haven't heard a lot of requirements for Windows support, but we could easily do so. That's another reason to join the CNCF, right?
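The capability model described above can be sketched in a few lines. This is a toy illustration only, not the actual SSVM or WASI API: the idea is that sandboxed code can touch only what the host explicitly granted at startup, regardless of which user launched the host process. The `DirCapability` type and its methods are hypothetical names invented for this sketch.

```rust
use std::path::PathBuf;

// Toy model of a "preopened directory" capability: the guest may only
// resolve paths under the directory the host granted it. (Illustrative
// sketch, not SSVM's real interface.)
struct DirCapability {
    root: PathBuf,
}

impl DirCapability {
    fn new(root: &str) -> Self {
        DirCapability { root: PathBuf::from(root) }
    }

    // Resolve a guest-relative path against the granted root, refusing
    // absolute paths and `..` escapes.
    fn resolve(&self, guest_path: &str) -> Result<PathBuf, String> {
        if guest_path.starts_with('/') || guest_path.contains("..") {
            return Err("path escapes the granted directory".to_string());
        }
        Ok(self.root.join(guest_path))
    }
}

fn main() {
    // The host grants access to /tmp only, mirroring the kind of
    // preopened-directory flag a WASI runtime accepts when it starts
    // a module, even when launched by root.
    let cap = DirCapability::new("/tmp");
    assert!(cap.resolve("scratch.txt").is_ok());
    assert!(cap.resolve("../etc/passwd").is_err());
    println!("capability checks passed");
}
```

The design point is that access flows from an explicitly granted handle rather than from the ambient privileges of the process owner, which is the contrast with the traditional user-based model drawn in the talk.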
That would bring more developers working on it, with different needs, right? The thing that is high on our roadmap is to support real-time operating systems (RTOS), so that we can support very resource-constrained edge computing cases, like automobiles. That's why we think this project is ready to be part of the community: there are certainly things we prioritize because of our business needs, but we can also see that other people will want to add other things to it. We would love to see someone contribute a Windows implementation, for instance, that would allow it to be supported on Windows. Makes sense, makes sense, yeah. Thank you. Yeah, so on cloud native support, the thing I want to talk about is the use cases we currently have in the cloud. One thing I want to emphasize is that we are in the process of making the SSVM WebAssembly virtual machine fully compatible with Kubernetes by conforming to the Open Container Initiative (OCI). That allows us to use kubectl and all the Kubernetes-compliant tools to start and manage WebAssembly runtimes. That partly answers your earlier question: if this grows into a large serverless platform, you could have thousands of instances, so how do you manage them? Hopefully, in the near future, we'll be able to use Kubernetes to do that. One question: have you looked at this project called Krustlet? They presented; they're pretty experimental, right? But their idea is also to have WebAssembly modules run with Kubernetes.
So is this going to be similar to that, or slightly different? Yes, I think the goal is pretty much the same, but the technical approach might be a little different; we modify different parts of the system to achieve it. Right now you have to hack the existing system to do this, and we just hack at different points. But again, those are the things we're going to experiment with, and a clearly better solution can emerge that we can all use. Cool, cool. So yeah, the other thing about WebAssembly, compared with Java and the JVM (I draw a lot of parallels between the WebAssembly runtime and the JVM), is that it truly supports multiple languages on the front end. I often say that Java plus the JVM is roughly equal to Rust plus WebAssembly today, because Rust is probably the best-supported frontend language for WebAssembly. But it's not the only one. Swift is well supported, C++ is well supported, and also C, AssemblyScript, and two blockchain languages, Fe and Solidity, which are programming languages that came out of the blockchain world. Those are all well supported on WebAssembly. It's different from Java, where the multiple languages on the JVM are mostly Java-like languages, right? So those are the differences. And this is pretty much the last important slide: we did a lot of optimization for WebAssembly; we had two years to do it. One of the most significant things we did is AOT, ahead-of-time compiler optimization.
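To illustrate the "Rust as a WebAssembly frontend" point, here is about the smallest possible example. The function is exported with the C ABI so a Wasm host can look it up by name once the crate is built with `--target wasm32-wasi` (or `wasm32-unknown-unknown`); the same source also compiles and runs natively, which is the portability argument the talk is making. The function name `add` is just an illustrative choice.

```rust
// A minimal Rust function exported with the C ABI. Compiled to a
// WebAssembly target, the host runtime can call `add` by its exported
// name; compiled natively, it is an ordinary function.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    println!("2 + 3 = {}", add(2, 3));
}
```

`#[no_mangle]` keeps the symbol name stable so the export table of the resulting `.wasm` module lists `add` rather than a mangled Rust symbol.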
When WebAssembly is used inside the browser, the way to optimize it is with a JIT, because inside the browser you never know what code you're going to run into next, so you have to optimize as you compile. But on the server, you pretty much know you're going to run this piece of code a million times, two million times, right? So you can afford to do ahead-of-time compilation: the first time you see the code, you compile it, and then it runs much faster. We wrote a paper on this and published it in IEEE Software earlier this year, in the January issue. It compares SSVM performance with other leading WebAssembly runtimes, like Google's V8 and Fastly's Lucet. We also compared it with plain Docker: Docker without anything else, just Docker plus a C++ application, no Node.js or Python or anything like that, which is the fastest way to run Docker. You can read the paper; the SSVM outperforms most competitors by a fairly large margin, and I think the reason is AOT. In some cases, the SSVM runs faster than native, without Docker. The reviewers were very surprised by this result and sent the article back to us saying it must be wrong. But we reran the test and it was right, because with AOT you can optimize for the exact machine you are running on, whereas with a native build you are just optimizing for a class of CPUs, right? That really showcases the performance the SSVM can achieve.
So yeah, if you're interested, I'll send these slides back to the group. There are lots of examples that we have done: image processing, adding a watermark to an image, flipping an image, doing OCR (you give it an image and it reads the text on it), and TensorFlow image processing, all kinds of image classification. Those are all live demo links. They stay live thanks to the JAMstack: the backend is serverless, so if no one uses it, we pay nothing, and the frontend is GitHub Pages, all static web pages, so it costs us nothing to keep the demos live. Unlike the old way of doing web applications, where I would have to run a server for all those demos, I actually have no server. If no one uses it, I pay nothing. We just thought that's a very good way to develop software. Then there are tutorials; we write fairly comprehensive articles that walk through the code and show how to develop those applications, and things like that. So yeah, that's it. I'm sorry, I'm long-winded sometimes, but that's my baby, right? That's my project. So feel free to ask me any questions. Yeah, this is great. I think it will be great for the CNCF. Are you thinking about applying for sandbox? Do you know the process? Yeah, we are filling out the forms. I think the next review date is later this month, right? So we still have time before the next date, right?
Okay, I think so, yeah. There's a form you have to fill out, and then it gets considered by the TOC. Typically, if you meet all the requirements, like the README, and you agree to the transfer of the assets and all the usual things you do with open source, then it gets accepted into sandbox. Okay. And I think it will be great: currently there are no WebAssembly runtime projects in the CNCF, so you'll get more exposure. There are also some events at KubeCon; there's a Cloud Native WebAssembly day, I think it's called, so there are going to be more presentations on the topic. I'm not sure if the CFP is open, but it's something to keep in mind if you want to present. Yeah, we'd love to. We will definitely apply for the sandbox, for consideration at the next meeting, and participate in more CNCF activities. That's our goal: to engage the community. Great. And you can also share these slides on the sig-runtime Slack channel; some people might be interested, maybe they'll have questions, or they can contact you directly. Excellent. I put my contact information at the top of the slides, so I'll share them. That's a great idea. All right. Well, thank you very much, and we'll keep in touch. Thank you. And hopefully you can travel soon too. Yeah, hopefully we can all travel soon. Yeah, exactly. Okay. Thank you. Bye. Bye.