Hello everyone, welcome to Cloud Native Wasm Day. In this talk, we want to talk about cloud-native serverless functions and what role, if any, WebAssembly can play here. Before we start, let's look at what a serverless function is. The first well-known serverless function platform is AWS Lambda. It's a new paradigm of development: you just upload your code, and you don't need to know where it's running, what operating system it's running on, or any detail about the server. The public cloud just runs it and gives you a URL or some other endpoint to call the function and get a result. That's what we call no infrastructure. And because the developer doesn't see the server, we call it serverless, although obviously there are still servers on the back end; it's just that the servers are invisible to the developer. If you look at this survey from Datadog, Lambda adoption among AWS users has gone really sky high. I'm actually surprised to see this: half of AWS users have adopted Lambda. If you consider how many products AWS actually has, it's amazing that something like this can be adopted by half of all users on AWS. It's the same for other clouds as well. So let's look at the leading cloud in China, the way AWS is the leading cloud in North America. I translated this paragraph of Chinese for you. Tencent Cloud is a leading cloud provider in China. It has over 1 million developers and provides services to 10,000 businesses, including 500 large enterprises. How many serverless function calls does it serve per day? 10 billion calls per day. People are writing a large number of high-traffic, heavily used applications using Tencent Cloud serverless as a backend service. That shows serverless is really popular and developers really want to use it. So what are the most popular programming languages here? What do developers use when they write serverless functions? They use Python and Node.js.
No surprise here. I wouldn't call them easy languages, but those are high-productivity languages, and it's easy to write applications in them. But they're also heavyweight languages, meaning they have a heavy runtime and somewhat slow performance. So developers write those applications in Python and Node.js, probably without thinking a lot about performance. And this chart shows the duration of Lambda functions. The figure states it in a glass-half-full way: half of the Lambda functions run in less than 800 milliseconds. My reaction is: oh gosh, half of the Lambda functions run for more than 800 milliseconds. Think about how a high-performance website works. If you consider the time needed for the data to travel through the network to reach the customer, and then to go from the client to the server and back, spending 0.8 seconds on the server is a very, very long time. Especially if you have an application that makes 10, 20, or even hundreds of backend service calls, that can really add up. So that's one of the other things that somewhat surprised me about the way people use serverless functions; I think that's on the next slide. The most common serverless function use case is to run a simple function on a heavy stack. The stack, Python or Node.js, is slow and takes a long time, and that in turn only allows people to run simple functions. You can only run a very simple function, and it's going to take 800 milliseconds. What does that translate to? It translates to only a limited number of use cases where we can actually use serverless functions. In most public clouds, it's used as the glue code, for example when you need to move something from AWS S3 to an AWS message queue.
For something like that, the best way to do it is a serverless function, because the work is sitting in a message queue and it's not very time sensitive. There's no one waiting for a real-time response from it. In those cases you can write Python and Node.js applications and not really care about their performance, and each function call can take maybe a second or more. That's today's most common use case of serverless. However, serverless is such a powerful paradigm that a lot of developers would love to use it for more, and a lot of developers actually want to write full applications in serverless. There's a new way of developing web applications called the Jamstack, the J-A-M stack. If you haven't heard of it, you've probably already done it, because most new web applications I see today are actually built in that fashion. On the front end, there's a static website generator. You have frameworks like Vue or Next.js or Hugo. They take the content that you wrote in some kind of markup and generate a static website with JavaScript in it. And that static website can be deployed anywhere you want. It can be deployed on localhost, for instance. It can be deployed on a CDN, or even on a blockchain. You can deploy it anywhere, because it's just a bunch of HTML files and JavaScript. The JavaScript then contacts back-end services to provide functionality for this static UI. So you have a web page, and JavaScript calls some back-end function to do things like image recognition, or saving data to a database, things of that nature. And that back-end function is typically a serverless function. If we see it from that lens, we want to use serverless as a universal back-end service.
For that, we would want it to be much faster and much lighter. Basically, it needs to have different performance characteristics than serverless has today. It shouldn't take 800 milliseconds. It shouldn't even take 80 milliseconds. It should be a lot faster, 10 or 100 times faster than that. So that's where we think the current model, the current paradigm of serverless, can really get some help, and that's where we think cloud-native WebAssembly could have an impact and contribute to this whole thing. Before we get into how WebAssembly could help, let's take some time to review the popular serverless runtimes. The first is what we call the hypervisor VM or hardware VM; some people call them micro-VMs as well, like AWS Firecracker. That is how AWS Lambda is run. It provides hardware-based isolation, a high level of isolation for each of the serverless instances: each instance is contained within its own VM. But that is very inefficient, because you start a whole operating system, plus the software stack on top of that operating system, just to run a single function and then shut it all down. Starting it up may take a second, and then the function may only take 10 milliseconds. So 99% of the work in this process is wasted; it's infrastructure busywork. Although it provides safety and security, it's not very efficient. So people started to use application containers like Docker instead. Docker is a lot faster than the hardware VM, but it also provides less isolation. And although Docker runs on top of the host operating system, it also carries a guest operating system's software stack inside the image, so it still takes time to bring up that operating system and software stack.
So although it's faster than a hardware VM, it still takes time to prepare the environment. The third level, the highest abstraction level, is to run high-level language VMs in a thread. The most classic example is the JVM, the Java virtual machine, but there's also the Python runtime, V8 for Node.js, and of course WebAssembly. The common thing about those high-level VMs is that they don't have an operating system inside them. The kind of code that runs in this type of VM is called bytecode: it's well defined and formatted, and a compiler generates it. Those VMs can be started really fast, inside a running operating system, in a thread, and can be shut down really fast as well. That makes them the most nimble and the least wasteful, and allows us to write high-performance applications on those high-level language VMs. As you can see here, you've probably heard of the JVM and Python, but WebAssembly is also here, and we think WebAssembly is probably going to be the best VM to run serverless functions. And it's not just us who think that. In 2019, hard to believe, more than two years ago now, the founder and first CTO of Docker said: if WebAssembly (Wasm stands for WebAssembly) plus WASI had existed in 2008, we wouldn't have needed to create Docker. That's how important it is. WebAssembly on the server is the future of computing. I thought those were very strong words. A standardized system interface was the missing piece; let's hope WASI is up to the task. We'll talk about what WASI is in a minute. So we have made the claim that WebAssembly is faster than Docker, and Docker is faster than hardware VMs and micro-VMs.
There's lots of research and lots of data that show the latter; we won't go into it, we'll just take that as fact. But the comparison between WebAssembly and Docker is interesting. There's a study that our team did, and we published the results in IEEE Software last year. It shows the performance of WebAssembly versus Docker in different scenarios. The blue bars are SSVM, which is our open source implementation of WebAssembly; we'll talk about SSVM in a minute as well. The orange bar is Docker plus native, meaning a bare-bones Docker image with a native application running on top of it. The native application is written in C and compiled to run on whatever guest operating system the Docker image has. The green bar is Docker plus Node.js: Docker running a high-level programming language, a heavy stack. They all perform the exact same six functions. The "nop" function just starts and shuts down, to measure startup time. The blue bar there is actually invisible, so we had to multiply it by 50 just so it would show up on the chart. From here we can see that in terms of startup time, although Docker improves significantly on micro-VMs, it's a thousand times slower than WebAssembly. WebAssembly can start and stop very, very quickly; it hardly consumes any time. But with Docker, you're still talking about tens of milliseconds just to start up and shut down the container, and then you have to run the application, written in a slow language, on top of it. So right there, that's already over a hundred times performance gain. And then there are the CPU-intensive applications that actually run inside the containers.
There, we could say unsurprisingly, Node.js performs the worst, because it's written in JavaScript; what do you expect? Docker plus native is comparable to SSVM, but SSVM is still faster. So even for runtime tasks, the WebAssembly container, with its sandboxed bytecode, is still about 10 to 20% faster than Docker plus native code. That means, if we do a performance comparison, WebAssembly is faster than Docker across the board. And that addresses the dilemma we raised at the beginning of the talk: how do we make serverless functions start up faster and run faster, so that they can be more versatile? So that they're not just the glue code connecting different systems, but a universal backend for Jamstack applications. Here we show that WebAssembly at least has the potential, because it is fast. Now, cloud-native WebAssembly use cases. These are the partners that we work with at Second State; obviously there are other use cases, but I can only talk about things that I actually know. So these are our customers and partners. The first category, of course, is Jamstack web applications. WebAssembly helps here by providing a universal runtime that we can deploy not only on the cloud but also on the edge; CDN networks have compute nodes. That allows serverless-based Jamstack applications to reach high performance. Then we have SaaS platforms. That's also an interesting use case, because one common thing about SaaS applications is the need to customize and extend them for different customers. Today, people do this with callback APIs, meaning the events that happen in the SaaS drive external code.
When someone does something on the SaaS platform, the event can be sent to an external server. The developer runs that server, the server provides a response, and the response goes back into the SaaS. Just imagine, say, a chatbot in a messaging application. How do you create a chatbot for a messaging SaaS application? You have an API-based approach: when a message is sent to a user, something happens inside that messaging system, and the messaging system forwards the message to the external server that you run. The developer's application on that server sees the message, generates some kind of response, and sends it back to the messaging application, which translates it into a message and sends it back to the user as a response. In that scenario, we have a chatbot, but we also have an external server-side application running side by side with the SaaS. Now let's reimagine how this might be done in a serverless environment. Why do we need the developer to set up a server on the side? It's tedious work, it's expensive, and it's also error prone, because the server might be down, and so on. Why can't the developer just submit a piece of code, submit a function into the SaaS platform, and say: this function responds to an incoming message. If anyone sends a message to my users, call this function; the function takes a string as input and returns a string as output. By applying the serverless model to a SaaS application like this, you would be able to get rid of most of the callback APIs, and most of the API complexity people face when they extend those SaaS applications. That's something we're also very excited about; we work with some SaaS providers for this very purpose.
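To make the string-in, string-out idea concrete, here is a minimal sketch in Rust. The function name `handle_message`, its signature, and the way the platform would invoke it are all illustrative assumptions, not any real SaaS platform's hook API:

```rust
// Hypothetical sketch of the string-in / string-out extension function
// described above. The name and signature are illustrative; a real SaaS
// platform would define its own hook interface.
pub fn handle_message(from_user: &str, text: &str) -> String {
    // A trivial "chatbot": greet the sender if they say hi,
    // otherwise admit we don't understand.
    if text.trim().eq_ignore_ascii_case("hi") {
        format!("Hello, {}! How can I help you?", from_user)
    } else {
        format!("Sorry {}, I did not understand that.", from_user)
    }
}

fn main() {
    // The platform would call the hook on each incoming message;
    // here we just simulate one call locally.
    println!("{}", handle_message("alice", "hi"));
}
```

The developer uploads only this function; dispatching messages to it and delivering the returned string back to the user is the platform's job.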
Then, of course, there are IoT devices and cars. In an automobile operating system, there are lots of places where a sub-module or subsystem is developed by someone else: by a supplier, a parts supplier, or an integrator. These need to be integrated together into a system that can run without the pieces interfering with each other. One of the ways to do that, of course, is to use Docker. Say you have an electric car, and it has a subsystem from a different manufacturer that controls the power window that goes up and down. The logic of how the window goes up and down needs to run inside a container, because it cannot be allowed to interfere with the drivetrain system, the autonomous driving system, the braking system, or whatever. When the software that controls the window crashes, it cannot be allowed to crash the car. In that scenario you need something like Docker, a software container. And WebAssembly, being able to run on a variety of different hardware and operating systems, including real-time operating systems, is a very good candidate for that. It allows those integrated systems inside the car to function together. And of course, there are blockchains and smart contracts. The way we look at blockchains and smart contracts is as decentralized serverless. The development paradigm is exactly like serverless: you submit a piece of code, and you don't care where it runs or whose machine it runs on, but it gives you the result, and you pay for the result. Except in this case, the servers are not run by a cloud provider but by a network of decentralized nodes.
So, looking at all those use case scenarios: would WebAssembly replace Docker in cloud computing? Probably not. But would it replace Docker where Docker cannot go today, for instance on the edge cloud, on constrained devices, or in SaaS environments? I think absolutely; there are lots of use cases for that, as we see here. Now, the benefits of the WebAssembly VM. Because it's Wasm Day today, I assume everybody is fairly familiar with these. First, of course, there's security, which is especially important for untrusted code. As we discussed, if you have a system where you allow people to upload code and you run it on their behalf, you need a sandbox. That's what WebAssembly is for. There's also code that needs to access hardware: the IoT setting, or the AI inference setting where code needs to access the specialized chips that handle the AI work. All of that has to be sandboxed; you can't give it unrestricted access, because that would have a high probability of crashing the system. Second, it's very efficient and lightweight. That's what it was designed for; it actually does very little. It's a security sandbox, and as we saw from the performance charts earlier, it has near-native performance: it runs about as fast as code compiled to native without a sandbox. And thanks to compiler optimizations such as AOT compilation, it can sometimes even exceed native performance. That's an interesting point; I won't expand on it here, but if you read our paper in IEEE Software, you'll see a discussion of why AOT compilation can generate code that is faster than native code.
Then there's runtime safety. I think security and safety are different things: security is about other people wanting to attack you; safety is about the bugs in your own code that could crash the system. Then portability and platform independence, which are the old benefits of Java, the benefits of basically any software VM. It allows you to develop on one machine and deploy on another machine with a different operating system and architecture. That's especially important in edge computing, where there are so many devices and CPUs these days. It used to be that x86 was the dominant CPU on the server side, but now you have ARM, and in edge environments you have all kinds of chips. So it's important for a lightweight VM like WebAssembly to run on all those hardware configurations; it allows code to be written on a developer's machine and then deployed everywhere across the edge cloud. And then manageability: because it's a container, it should be managed and orchestrated like other containers, by things like Kubernetes, and KubeEdge in the edge computing world. For automated deployment and ops, it needs to support over-the-air deployments and hot swapping with zero downtime: if you update the software, you should be able to hot-swap the software module in and out with no downtime. Next, the WebAssembly system interface, called WASI. That's what Solomon Hykes was talking about in his tweet. It allows WebAssembly to interface not only with the web browser, where it was originally designed to run, but also with the operating system itself: the file system, environment variables, the network, and all that.
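As a small illustration of what WASI exposes, here is a sketch of a plain Rust program that only uses the standard library. Compiled natively it makes normal OS calls; compiled to a wasm32-wasi target, the same environment-variable and file calls go through WASI instead. The variable name and file name are just examples:

```rust
use std::env;
use std::fs;

fn main() {
    // Environment variables: under WASI these come from whatever the
    // host runtime chooses to pass in; natively they come from the OS.
    let who = env::var("GREETING_TARGET").unwrap_or_else(|_| "world".to_string());

    // File system access: WASI is capability-based, so a WebAssembly
    // runtime only lets the module touch directories it was explicitly
    // granted. "wasi-demo.txt" in the working directory is an example.
    fs::write("wasi-demo.txt", format!("hello {}", who))
        .expect("the host must grant access to this directory");
    let contents = fs::read_to_string("wasi-demo.txt").unwrap();
    println!("{}", contents);
}
```

The point is that ordinary portable code, with no browser in sight, can run inside a sandboxed WebAssembly runtime because WASI standardizes these host interfaces.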
So WASI gives the WebAssembly runtime access to the host features of the operating system it runs on. WasmEdge is our open source project, donated to the CNCF; it used to be called SSVM, the Second State VM, and it's a completely open source project. We developed it in the open from day one, and we wrote it especially optimized for server-side and edge computing use cases. If you're interested, check it out, star it, fork it, and discuss issues with us on GitHub. This is the paper that we published in IEEE Software, which shows that SSVM is already one of the fastest WebAssembly runtimes available in the market. One particular thing we'd like to add to SSVM in the context of serverless functions is powerful WASI-like extensions. WASI provides access to the host operating system, and we think: why stop at libc and operating-system-level function calls? Why not make other native functions available through a WASI-like, very organized and polished API, available as a Rust SDK as well? So we can do TensorFlow inference, which I talked about in another talk. We can do storage, and we can do blockchain things: access the blockchain account system, instead of the file system, through our WebAssembly runtime. The cloud-native features are the ones I talked about on the previous slides: through OCI compliance, the runtime can be managed and orchestrated by Kubernetes. Now, a serverless function written in WebAssembly is really simple. Here's a complete function. It's "say hello": it takes a string, and if you read a little bit of Rust, you can see it prepends "hello" to the string and returns it. The one I've just shown is a serverless function that you can access through a URL.
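The function on the slide is roughly the following sketch. In a real deployment it would be compiled to WebAssembly and exposed through the runtime's SDK (any binding attribute or macro is omitted here as an assumption about the tooling); the Rust body itself is just a few lines:

```rust
// A sketch of the "say hello" serverless function described above.
// The function name `say` is illustrative.
pub fn say(name: &str) -> String {
    // Prepend "hello" to the input string and return the result.
    format!("hello {}", name)
}

fn main() {
    // Simulate one invocation, as the serverless runtime would
    // on each incoming HTTP request.
    println!("{}", say("Cloud Native Wasm Day"));
}
```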
It's a serverless function on the back end of a Jamstack application. And here is the other example we talked about, the chatbot in a messaging application. Where the platform supports a serverless model, you upload a piece of code like this to the messaging platform and then specify a hook that says: if I receive any message, call this function. The function is told what the message is and which user it came from, and is asked to provide a response. By uploading and running this function, you no longer have to run your own server and handle the API callbacks to extend SaaS applications. There are lots of live demos, especially for the Jamstack application use case. We have a lot of AI demos where you can write serverless functions to do TensorFlow inference, image recognition, and so on. All the source code is available, you can deploy it in minutes and see the result yourself, and you can even try the live demos. There are lots of them on our website, and there are also a lot of tutorials about writing server-side WebAssembly applications and how to optimize them for WasmEdge and SSVM. All right, I think my time's up. Thank you very much, check out our GitHub repository and our website, and I hope to see you there. Have a great day. Thank you. Bye-bye.