Okay, so thank you very much for the introduction, Leaman, and for having me today. As Leaman highlighted, I'm going to talk about Wasm Workers Server, or how to write portable serverless applications thanks to WebAssembly. This was supposed to be a joint talk with my teammate Rafael. He couldn't make it due to a last-minute change last week, but he's doing an amazing job on the project too, and I wanted to mention him here.

The first question I usually get is: what is Wasm Workers Server? Wasm Workers Server is a tool to develop and run serverless applications. That's what matters for developers and for the people who are going to use it. Internally, it uses WebAssembly, enabling you to combine multiple languages and run your applications almost anywhere.

But the interesting part of every project is understanding why we built it, why we decided one year ago to start creating this project instead of using an existing one. Starting a new project is exciting, but it's also a rough task. You need to decide on the language you want to use, and if you're in a team, you need to coordinate: which languages do we all understand? How do we want to structure this project? Architecture, patterns, frameworks, all of that. What tooling do we plan to use for this specific project? Which development environment? Everyone has preferences here, so it sometimes gets complex to get all the different developers working on the same project with the right tools to collaborate and contribute together. How do you plan to distribute the application? That's also important, because it affects the way you develop the project. And how do you plan to deploy it? All of this happens at the beginning of the project, some of the answers to these questions get written in stone when you start, and you need to carry them over for the whole project lifecycle.
So, based on this, we started to discuss whether there was something we could do to make projects more flexible, in the sense that we can start with one language and then implement some part of the application in a different language that may be better for that specific case. Can we provide a simple way to run different languages together without having to install Node.js, a Ruby interpreter, Python, and all the other tools on your laptop? Containers solved a lot of this in the past, but could we move one step further and simplify it even more?

To answer these questions, we started creating Wasm Workers Server, and we decided to build it on four principles. The first is that it should be an easy-to-use CLI. Developers should focus on the code while using the languages they already know. This is critical for us: we don't want developers to have to learn a new language to use our tool. We want them to reuse the knowledge they already have and continue writing the applications they want. Second, it should be compatible. All of us have felt at some point that we created a project, started with an implementation tied to a specific platform or service, and then it got complicated to move away from it; you need to spend a lot of time changing and adapting things. This is why we wanted to make it compatible: the code you write today will run in a different place if, for some reason, you need to change the way you deploy it. Third, it should be portable, in the sense that applications today run in many, many different environments. Actually, the previous presentation showed that there are many different places in which you can run applications, from the cloud to your laptop to devices like your phone. We want to keep that.
So the code that you write can run in all those different environments and adapt to provide the best experience to your users. And finally, it should be secure: by default, the code you write only has access to the resources it needs. When we were discussing these principles, we noticed that WebAssembly is a really good fit, because it matches all of them, so we decided to start working on this project.

So, how does it actually look? For Wasm Workers Server, a project is a set of files, as for any other project. In this example, we have four different files in a folder: hello.js, [id].wasm, a TOML file, and the .gitignore. When you run the wws CLI, it detects the files it considers independent functions that can reply to requests, or as we call them, workers. It also identifies the related configuration files for those workers, so you can enable specific features and grant access to specific resources only for the workers that need them. If one worker needs to talk to an external service over HTTP, for example, you avoid another worker calling that service when it's not meant to. Once it detects the different files, it exposes them as an HTTP API. The hello.js file will be detected as /hello, and the bracket notation gives you a parametrized route, meaning any /whatever route will be handled by [id].wasm, except the /hello route, which has higher priority than parametrized routes. And that's all you need to know to start working with Wasm Workers Server. Let me show you an example of how you can run your first worker in just one minute. The first thing you need to do is install Wasm Workers Server, which is a single command. It's already installed on my laptop, so I'm just going to run the help command.
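The routing priority just described (an exact route beats a parametrized one) can be sketched in a few lines. This is not wws's real router, just an illustration of the matching order:

```javascript
// Simplified sketch of the routing rule described above: an exact route
// like /hello wins over a parametrized one like /[id], which acts as a
// fallback for any other path. Illustration only, not wws's implementation.
function resolveWorker(routes, path) {
  // 1. Look for an exact match first (highest priority).
  const exact = routes.find((r) => r.pattern === path);
  if (exact) return exact.worker;
  // 2. Fall back to a parametrized route such as /[id].
  const param = routes.find((r) => /\[[^\]]+\]/.test(r.pattern));
  return param ? param.worker : null;
}
```

With routes `/hello` (hello.js) and `/[id]` ([id].wasm), `/hello` resolves to hello.js and any other path falls through to the parametrized worker.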
And now I have one folder, which is my project, that contains an index.js file. This index.js file exports an object that contains a fetch function, which returns a response with a specific body and appends some headers. If you're familiar with frontend platforms like Cloudflare, Vercel, and Netlify, you may recognize this kind of source code. And this is part of the promise we wanted to make with Wasm Workers Server: that this exact same file runs on all those platforms without changing a single line of code. We want you to be able to take your code and run it wherever it makes sense for you. In this specific example, I have two files in that folder, same as the example I showed before: [id], which is a parametrized route, and index. Now we run wws, pass the folder, and if we make a GET request, we get the information from the worker: the body, and also the header we configured in the worker. We can also query the parametrized route, and here we get a similar response, but in this case it takes the parameter from the route and shows that information in the body. Another thing we are exploring is that the only thing Wasm Workers Server needs is a folder, but that folder could be, for example, a remote repository. So you don't need to clone it manually; you can point to remote artifacts, pull them locally, and automatically spin up that specific project. For now we support remote repositories, but the idea is to support more and more targets in the future, so you just run the command with the specific location and it runs the project for you. In this case, we are running it from our own repository, which has an examples folder containing many different examples; here we are using the JS basic one.
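As a sketch, the index.js worker described above might look like this. The fetch-object style matches what the demo shows; the body text and header name here are made up for illustration:

```javascript
// A minimal fetch-style worker, as shown in the demo. In a real wws
// project this object would be the default export of index.js:
//   export default worker;
// The body and header name below are illustrative, not from the demo.
const worker = {
  async fetch(request) {
    return new Response("Hello from Wasm Workers Server!", {
      headers: { "x-generated-by": "wasm-workers-server" },
    });
  },
};
```

The same object shape is what platforms like Cloudflare Workers expect, which is exactly the compatibility point being made here.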
So now, if I call the same URL, I get the response from that example. And the good thing is that in that examples folder in our repository we have four examples in six different languages that you can try to start working with Wasm Workers Server, so you have a good base for creating your own projects.

So, how does it work internally? Now that you've got a taste of what you can do with it, let's see how the data passes between the different boundaries and why we decided to do it that way. On one side we have Wasm Workers Server, which is the CLI and the server listening for requests, and then we have my-worker.wasm, a specific worker that could be written in Rust or Go and will reply on its URL. The first thing that happens is that Wasm Workers Server receives an HTTP request. It identifies the target worker based on the list of files: it detects which one should reply to that route. Then it prepares the Wasm environment, which is also driven by the configuration, so it enables the features you granted to that specific worker. Then it serializes the request and passes the data via STDIN. The worker does the job, produces the response, and sends the data back to Wasm Workers Server in serialized form over STDOUT, and Wasm Workers Server returns the response. That's the very basic flow of data inside the project. If we add an interpreted language, as I was using before with JavaScript, there is one intermediate step, because we need an actual interpreter to run the code for you. There is an element in the middle that takes the information from Wasm Workers Server, converts it, runs the user code, the JavaScript script, and returns the response. This approach, as you may imagine, has benefits, but it also has challenges.
So the first benefit is that this approach, with data passed over STDIN and STDOUT, helps us add new languages and language versions quite rapidly. We didn't have to wait to implement very custom features, because any interpreter that can be compiled to WebAssembly comes with basic capabilities like STDIN/STDOUT and serialization. It means that in one week, I think, we created SDKs for three different languages without having to spend a lot of time on it, so we can grow later on. Second, interpreted languages are transparent for users: they don't need to compile a single line of code or call any other tool to compile the JavaScript, for example. They just drop the file and it runs and responds directly. And third, workers have independent permissions: you can configure every worker with its own set of resources it can access, and the others won't be able to access them.

But there are also challenges. The first one is performance, of course, because all these data boundaries, with serialization and so on, increase the response time. That overhead shouldn't be there, and we are working on finding ways to keep the promise of being easy to use without affecting the performance of the server. It's also a challenge to be compatible with other platforms: for passing the request and response data, it's simple to stay compatible with others, but for other features that we will see later, we still have to create glue code to map features that differ between platforms, so it's not simple. The other part is that the more languages we support, the more features we need to implement in all of those languages.
So at the beginning, all the languages supported all the features, but as we add more and more complex features, like the AI inference we will see later, it's not trivial to bring all of them to every worker language. However, all the work happening in the component model (I think since about one month ago we support components in Wasm Workers Server) will simplify how we add those bindings to support all the features. There is also debugging, but I think that's common to WebAssembly in general; it's not specific to Wasm Workers Server.

And just as an example of how simple it is to add a language to Wasm Workers Server: one teammate created a Brainfuck SDK for it. Here you have an example in that language, which is a super weird language, the first time I saw it. And if you compile it to a Wasm module and run it, it returns "hello, Wasm", which for me is pretty amazing. It shows how simple it is to create new language SDKs for Wasm Workers Server.

So, let's talk about the features. What are the features embedded in this tool? We have an in-memory key/value store, which anyone can use; it's really simple and available in all the languages. We have external HTTP requests, so you can call third-party services by allowlisting the domains, configuring what can be reached from inside the worker. You also have dynamic routing, as we saw: we not only have parametrized routes, we also got an external contribution for what we call catch-all routes, which means you are not taking just one portion of the full path, but all the sub-paths under it. That enables a pretty cool use case I will talk about later. We also have AI inference with WASI-NN. Thank you, Andrew, I know you're here, for all the leadership and work on it; we just take advantage of this amazing technology. We also have static asset management, which is pretty common for some specific use cases. And much more.
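As a sketch of the HTTP-requests allowlist just mentioned, a worker's .toml could look like the following. The key names (`allowed_hosts`, `allowed_methods`) are my recollection of the wws docs, so treat them as assumptions and check the official documentation:

```toml
# Illustrative worker config: only the listed hosts and methods can be
# reached from inside this worker. Key names are assumptions, not verified.
name = "fetcher"
version = "1"

[features]
[features.http_requests]
allowed_hosts = ["api.example.com"]
allowed_methods = ["GET", "POST"]
```

The point of the design is visible in the shape of the file: the grant lives next to one worker, so no other worker in the project inherits it.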
And you can check all these features in the official documentation, but let me show you some examples of how this works. Here I have a different project; what's inside is a tic-tac-toe game that uses the in-memory key/value store to provide its functionality. I just want to show that, by default, workers don't have access to any key/value store; you need to configure it manually. You give the worker a namespace, and then it can start using the store directly. So if I now run the demo, let me open it here. Okay, I have the first player. Now I need to open it in a different tab; I'll try to do it in the same one. Okay, yeah, that could be a good feature for the example. So here, as you can see, you can start playing tic-tac-toe, and it works pretty well.

We also have the other example, which is AI inference with Wasm Workers Server. For this specific one, I first need to initialize OpenVINO, which is the runtime behind this example. I just configured it in my environment so Wasm Workers Server can find it. Now, if we check the inference.toml file, which is the one configuring the worker, we see that there are more features we can configure in Wasm Workers Server. On one side, we can configure the models we want to preload. This is a pretty new feature in the WASI-NN specification, where we pass in already existing models that you can load from inside the worker. We also mount certain folders inside, so the worker can store images for later usage, and we provide some specific data the worker requires, such as the labels for the images. If I run this example, I can open it; it's an image classification example, which I forgot to mention. So if I put in an ambulance here, it works. It's really easy to start working with all these features together: just pick the ones that make sense for your application.
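As a sketch of granting that key/value namespace, the worker's .toml declares the data feature. The `[data.kv]` and `namespace` keys are how I remember the wws documentation describing it; the worker name is illustrative:

```toml
# Illustrative: grant this worker, and only this worker, a KV namespace.
# Without this section, the worker has no key/value access at all.
name = "tictactoe"
version = "1"

[data]
[data.kv]
namespace = "tictactoe"
```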
And these are all the supported languages we have as of today. On the interpreted side we support Ruby, JavaScript, and Python, and we also support Rust, Go, and Zig. The cool thing is that two of these languages, Go and Zig, were external contributions. Some people decided they wanted these languages in Wasm Workers Server and went on to create the SDKs, which is a good sign that people want to use specific languages as they create their projects.

One of the things I usually highlight about Wasm Workers Server is that it allows you to create polyglot applications, and especially to integrate multiple existing applications into one project. In this example, we have a full JavaScript UI project that renders some pages, and we mount everything at the root. That means that, by default, any route will be captured by this worker, providing something similar to a single-page application in frontend terms. Say we want to add a new API to this project and we want to use Python. It's simple: just put the Python file inside the api folder using the same approach, and now everything under /api will be managed by the Python application. You can use Flask or any framework you want. But now, if you want to go a step further and say, okay, it's fine that I'm using Python, but I want to start migrating some of the endpoints to Go, that's simple in Wasm Workers Server too. You create a resource with a fixed route, and that specific URL will be answered by a completely different language, and everything works together. You have a path for migrating things: you don't have to start the entire project from scratch. You can take an existing project, put it together, and migrate step by step if it makes sense for you.

And what about deployments? This is a hot topic: since I said before that it runs almost anywhere, how do we deliver on that promise for the project?
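The polyglot layout described above can be sketched as a folder tree. File names are illustrative, and the `[...path]` catch-all naming is how I recall wws spelling catch-all routes, so verify against the docs:

```
my-app/
├── [...path].js       # JS UI worker: catch-all, serves every route by default
└── api/
    ├── [...path].py   # Python worker: handles everything under /api
    └── users.wasm     # compiled Go worker: fixed route /api/users wins here
```

Because fixed routes outrank catch-alls, dropping users.wasm into the tree migrates that one endpoint to Go without touching the Python or JavaScript workers.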
So in reality, Wasm Workers Server is a very thin layer on top of an existing Wasm runtime, in this case Wasmtime. So for us, the best way to approach the deployment of Wasm Workers Server today is to not have opinions about it. We are still in learning mode: we want to understand where users and customers want to run their projects and how they want to deploy them before making any decisions. You can deploy Wasm Workers Server like any other service in your infrastructure today. You can put it in virtual machines, that's totally fine, or on a cloud server, or in a container, or just drop the binary somewhere. You can run it through containerd in Docker and Kubernetes thanks to the runwasi project; actually, we have a talk tomorrow about how to run applications with Wasm Workers Server on Kubernetes. You can even put it on small devices like a Raspberry Pi. And we're exploring crazy ideas, like why not run Wasm Workers Server in the browser, but that's something that may come later.

So what about the future of the project? What do we plan to do next? The first thing is that we want to add more languages and improve the existing support for them, because this comes up in all the conversations we have about WebAssembly: people want to use the language they know. We also want to add more features. For example, we want to add persistence: for now we have an in-memory key/value store, but we want to make it persistent. We want to think about SQL databases, the kinds of things that keep coming up in different conversations. We want to improve platform compatibility, and at this point, one of the things we are exploring that looks promising is the component model and wasi-cloud-core, which is a set of standards for accessing common resources like key/value stores, SQL databases, and HTTP handlers.
So if we can provide this as an alternative that works with the workers in Wasm Workers Server, those workers will automatically be deployable on other platforms compatible with these interfaces. We also want to think about standard deployment. It's cool that we have a thin runtime, but can we build something on top of it that makes sense for the entire project and provides a better deployment experience? Like pushing a project, putting multiple projects under the same host, and managing them for you in a better way, so you don't need to manage them as independent processes in your infrastructure. And if you have any other crazy ideas, we are very open to discussing them. Feel free to go to the discussions section of the repository on GitHub and drop any idea you want to discuss with the team. That's the future, but what you can do today is try it. Here you have the QR code that goes to the documentation site. You'll get access to all the examples, like the tic-tac-toe one and the others I showed today; you can use them and try combining the different languages. And of course, open an issue for anything you find in the project, and keep dropping ideas for the team. Yeah, that's all I have for today. Another phenomenal presentation, thank you. Questions? No questions, so I will be around. In any case, feel free to ping me here. Thank you very much.