So, who are we? Well, there's three of us, I swear. One of us is a ghost in the show: Long, who actually led a lot of the work we're talking about today. He's in Shanghai and wasn't able to get out here, so we have a video segment from him. Long is an essential part of this process, and we want to acknowledge that as we go along. I'm Adrian, and I work at Tetrate on a WebAssembly runtime called wazero, which is written in Go. I got acclimated to Dapr mostly through this work, so I'm sort of a community contributor. And this is Mauricio. Mauricio: I work for Diagrid. And I think we have a couple of slides introducing us, right? That's you. That's your Twitter account? That is me, yeah. Good. I'm @codefromthecrypt, and if you want to follow those things, I promise it lives up to its name. Definitely. And if you're interested in these topics, I'm @salaboy on Twitter as well. I work for this company called Diagrid; we help people run Dapr in production and, you know, in multiple clouds. And I'm writing this book titled Platform Engineering on Kubernetes, where I'm covering these kinds of topics: how you put different projects together, and how you extend platforms, which is the topic of today's session. That's the QR code for the book. And down there is my name with 40. That's my age, unfortunately, but it's also a 40% discount code if you're interested. I'm just trying to find a positive angle on the 40 thing. So this is what we're going to be talking about today. We're going to start with Dapr, just showing it in action. How many developers do we have here in the room? Yeah. That's a good 30%, I would say, maybe. Then we're going to talk a little bit about how we're combining Dapr and WebAssembly together, and then we're going to see another demo showing exactly that.
I think what we are showing here is just a starting point for collaborations, and there are tons of other things we could be doing. So if you don't know anything about Dapr: Dapr is a CNCF project, a distributed application runtime. I won't spend much time discussing what Dapr is; I want to show it in action. But you need to know two things about Dapr. One is that it's thriving in the CNCF community; it's one of the top 10 largest projects in the CNCF. That basically means there are a lot of people contributing back, getting involved, and extending it. And the main thing to take away from this presentation, if you don't know anything about Dapr, is that Dapr provides components that help application developers make their distributed applications simpler, easier to maintain, and decoupled from infrastructure. So let's see that in action. Imagine you are building the kind of system where you have applications writing data somewhere, other applications reading that data and doing some processing, and some applications subscribed to notifications. Imagine you want to listen every time a new order gets created, or every time you write something to a database, or every time a new customer comes in; that's the subscriber app we have down there. And if you are building these distributed applications, it's quite common to have a front-end application or a mobile application interacting with all these backend services using HTTP or gRPC, so you need to think about how those interactions will go. But in real life, we need to replace that cloud in the diagram with real infrastructure: our cloud provider services, our databases, and all that stuff. So for the sake of the example, I chose to mention Redis and Kafka here. If you want to write data to a database, maybe you can just use Redis for that.
But this is introducing some complexity and some tight coupling in the application code. You need to add the Redis client in order to write data to Redis, and that's a dependency for your application. The same for the application that is reading data from it, and the same for the application that wants to use Kafka: you need a Kafka client in your programming language of choice to be able to publish and consume Kafka messages. Another challenge you will face is that if you are using HTTP or gRPC between the services, you will need to deal with, for example, HTTP retries and circuit breakers and all that stuff, to build resiliency inside the application code. So Dapr comes in to help solve some of these common challenges by abstracting away the infrastructure, and that's the Dapr components, the things I mentioned before. For this demo and this application, I'm using two Dapr components. One is the state store, which allows you to abstract storage: if your application needs to store some state in a database, or in this case a key-value store, you can use the Dapr state store component to abstract that away. The same with the pub/sub component, which abstracts away message brokers. Then, by using these Dapr components, you can actually swap the implementation. If you want to swap Redis for PostgreSQL, you can make that change on the configuration side and keep your application code the same, because what Dapr does is introduce this Dapr sidecar running there, exposing a consistent API for your applications to use. So if you take a look at the application code down there in that repo, you will see it's only doing HTTP requests against the Dapr sidecar that's running locally, and it's interacting with these backend services, these databases and message brokers, just using HTTP.
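For reference, the call the app makes against the sidecar looks roughly like this. This is a sketch, not the demo's actual source: the component name "statestore" and local port 3500 are Dapr's conventional defaults and assumed here, but the `/v1.0/state/{store}` endpoint and its JSON-array body are the documented Dapr state API shape.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// statePayload builds the JSON body the Dapr state API expects:
// an array of {"key": ..., "value": ...} entries.
func statePayload(key, value string) ([]byte, error) {
	return json.Marshal([]map[string]string{{"key": key, "value": value}})
}

// saveState writes a key/value pair through the Dapr sidecar instead of
// talking to Redis directly. daprURL is usually http://localhost:3500,
// and store must match a configured state store component name,
// e.g. saveState("http://localhost:3500", "statestore", "order1", "hello").
func saveState(daprURL, store, key, value string) error {
	body, err := statePayload(key, value)
	if err != nil {
		return err
	}
	resp, err := http.Post(fmt.Sprintf("%s/v1.0/state/%s", daprURL, store),
		"application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("dapr returned %s", resp.Status)
	}
	return nil
}
```

Swapping Redis for PostgreSQL changes nothing in this code; only the component definition on the cluster changes.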
In the same way, Dapr provides more components to solve different challenges, like secret stores, and now we have workflows as well. And because the frontend application here is also using Dapr, we can actually reuse all the components and all the behaviors built into the Dapr sidecar for retry mechanisms and circuit breakers. So you get that resiliency built into your applications without changing the application code, without making the application code responsible for implementing those behaviors or adding libraries to implement them. And all the upgrades of the drivers. All the upgrades of the drivers, right? If you are using Redis, you need to make sure that your Redis driver or SDK matches the version of the Redis you're using, right? And sometimes you want to run Redis locally and sometimes you want to run Redis in Google Cloud, for example, and maybe they have different versions. That basically means your application needs two different dependencies depending on where it's running. That becomes complicated, and it's tightly coupled. We are trying to solve some of that with Dapr. And as I mentioned, Dapr uses that sidecar approach. It's the most common way of using Dapr nowadays, but we recognize that sidecars are not for everyone. So if you are not into sidecars and you said, ah, they're using sidecars, I'm not even going to look into Dapr: there is a new initiative in the community that is expanding Dapr to different deployment models, and I think Wasm might be something else we look at in the future. So let's take a look at the demo. Again, you can run this demo on your own computer; you can just follow the step-by-step tutorial here. And what I want to show is that I have an application running in my kind cluster. I've installed Dapr into the cluster using Helm. Just a single line, and you get Dapr installed.
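The "single line" install mentioned here is roughly the standard Dapr Helm chart installation (a sketch; the chart repo URL is the one published in the Dapr docs, and the release and namespace names are the conventional defaults):

```shell
# Add the official Dapr Helm chart repo and install the control plane
# into its own namespace, matching the dapr-system namespace in the demo.
helm repo add dapr https://dapr.github.io/helm-charts/
helm repo update
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --create-namespace \
  --wait
```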
You can actually see that I have the dapr-system namespace running. And then, because I've deployed the application modules (the one that writes data, the one that reads data, the one that subscribes to notifications), we can see the pods running here. And because these are Dapr-enabled applications, you will see that Dapr injects a sidecar into each of these modules. Let's take a look at how the application looks. The application looks like this. Not here. Here. The application looks like this. Pretty simple. We are writing messages. Hello. Here we come. Hey. Right. That's writing into the database; that's being written in Redis. Again, if you look into the source code of this application, there is no Redis code in the application that's writing this data into Redis. It's just doing HTTP calls using the Dapr SDK, but you could do plain HTTP calls or gRPC calls instead. And this pod below here is receiving the notifications asynchronously. So when I click the button, it goes to the service and fetches all the notifications that service has received. It's pretty simple. But I would love to do something like: hello, QCon. All right. I want my emojis in there. Unfortunately, that's not working just yet, but we will try to fix that in a while. One more thing I want to show you here is that Dapr is running inside Kubernetes. That means we can configure Dapr using Kubernetes resources. And for that, as I mentioned at the beginning, we have components. Right. So for this demo, you can see here that I have two components configured: the state store component and also the pub/sub component. Again, abstracting away the infrastructure that is being used to store data and to send async notifications between services in this case. So let's move on to talk a little bit about WebAssembly now. So I can click again. You can click a bit now on the presentation. Let's do this. That's why you brought me here.
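Those two components are plain Kubernetes resources. A minimal sketch of what they might look like for the Redis-backed demo (the component names, Redis host, and secret names are assumptions; `state.redis` and `pubsub.redis` are the documented Dapr component types):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis-master.default.svc.cluster.local:6379
    - name: redisPassword
      secretKeyRef:
        name: redis
        key: redis-password
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis-master.default.svc.cluster.local:6379
```

Swapping the backing infrastructure means changing `spec.type` and the metadata here, not the application code.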
Check this out. Click. Hello, everyone. Two clicks. I'm sorry that I cannot be there. Let me introduce myself first. My name is Loong Dai; you can call me Loong. I'm a Microsoft MVP and a Dapr maintainer, and I work at Intel. So what is Dapr? Dapr is the distributed application runtime. It lets you use any language or framework to write your microservices and run them on cloud or edge infrastructure, and it uses HTTP and gRPC APIs to provide many useful and amazing features. This is the architecture. Like a service mesh, Dapr runs as a sidecar, so we can use it directly without code changes. But then the question comes: how can I change the logic inside Dapr itself? For example, my company has many applications, and they all need to run a security check on incoming HTTP requests. This check is not public, so I cannot upstream it, and I do not want to rewrite this logic in every application's code. So how can I do this in Dapr? A first thought, a quick thought, is that I can release my own build. But this is very hard and difficult, since Dapr has many repos, and we would need to track the upstream version. This is a big job. But Dapr thought about this and provides a useful feature: middleware. Middleware lets users make changes to a request or response. But it is built in, which means it is hard-coded with limited configuration. Users cannot write their own middleware when it is not public. This is not flexible. So how do we do this flexibly? We can use Wasm. With the help of Wasm, we can load logic dynamically, and it is highly customizable. This is much more flexible. For how we implemented this and how to use it, please keep watching. Thanks. Great. Thanks a lot. All right. Let's click. I thought it was going to click. I don't know where I am. There you go. I do need practice.
So anyway, Long, whom we'll thank along the way in different ways, was the first person to take action on the WebAssembly idea streams that came out of Dapr. I didn't join this conversation until later. It's interesting that this whole question of what to do with WebAssembly and Dapr started at the end of 2019. As I was doing archeological digs in the issue list, the first thing I found was people wondering: should the sidecar logic itself be compiled down to this bytecode format called Wasm, so that an application could embed the sidecar literally in the same process? Which is kind of an interesting thing, because WebAssembly is basically a way for you to run third-party code inside your own process without actually launching another process or anything else. And then there were also questions the other way around: should Dapr allow applications that have been compiled to this bytecode format to just run as workloads, like how Dapr might run a container? And then something happened in 2020 that got everybody busy, so there wasn't much action on the WebAssembly front; people were busy with virtual events and such. But in the latter half of 2021, this whole WebAssembly thing, otherwise known as Wasm, which is also the file extension for a WebAssembly module, picked up again through some tension about how to handle this kind of continual crisis in Dapr: we've got a big community, and a lot of different types of features want to get into the same build. You have conflicts. People are like, well, okay, what about the binary size, or the memory usage, and are the maintainers going to be able to afford to do code reviews and patch maintenance on hundreds of components in the same repository or repositories? So basically you could see this trend of asking what to do about extensibility, how to make it more flexible.
And one of the options there, which I'll have a slide on next, was WebAssembly, alongside discussion around gRPC. Anyway, timeline-wise, a proof of concept took design form in late 2021, but it wasn't actioned until a bit later. And yeah, Dapr first had WebAssembly support in 1.8, and then revised it again later last year. So let's get into it a little more. I know these things are kind of abstract, especially what I mean by embedded. Embedded literally means that if you are using WebAssembly inside an application, say a Go application that's compiled with a WebAssembly runtime inside, you wouldn't be able to tell it's launching these VMs, because there are no side effects, no process IDs or anything else like that. It's just local code. So when you look at this as a way to run third-party code compared to gRPC: gRPC has some side effects on your deployment, right? Even if you're doing fancy stuff like Unix domain sockets and things, there will be some side effects; at the very least you'll have to define a service definition and share it between the code bases. Both of them can get the job done, but one of the interesting things is that WebAssembly can't be used for all tasks, because it's limited. There's no real deployment task there. The only things on your hands are: what code am I going to allow to be loaded into this process, and where am I going to get it from? Are you going to get it from disk, or an OCI repository, and stuff like that? So effectively, you can chop pieces of functionality out. If you're in a Go ecosystem, you may just do that without even caring about supporting multiple languages; you may still be writing your components in Go. But of course, WebAssembly being a virtual machine, you can compile other languages to that bytecode format, so it is a polyglot solution as well.
So the design that Long had proposed to the community was to use a component, literally a middleware component for the HTTP middleware chain, and allow that chain to have a filter that could be implemented in WebAssembly alongside the native built-in features. And those of you familiar with Istio and Envoy will know that Envoy also has the ability to have WebAssembly in its filter chains. So it's not a completely different idea, but it's an easy way to get through a sort of analysis paralysis about what to do with something. If you pick a type of technology that's easy to relate to, and one that tends to need customization, it's, I think, a perfect choice. The problem was that at this point the WebAssembly community wasn't so strong in Go yet, and there wasn't a way to embed a WebAssembly virtual machine into a Go process without relying on platform shared libraries, C libraries. And Dapr has a sort of strict no on shared-library dependencies, right? So that was parked until Long discovered the project I work on, originally started by Takeshi Yoneda, who also did a lot of the work on the Envoy and Istio WebAssembly stuff, though it's a large community now: a zero-dependency WebAssembly runtime for Go. And that sort of fit the bill. For the first version, Long basically handcrafted WebAssembly to prove the point, but that's not very developer friendly. I don't even know if the source was there, but we said: okay, if you're going to do something in a fairly difficult programming ecosystem like WebAssembly, you're going to need SDK support. So we leaned on something called waPC, which allows you to have functions like this, looked up by name, and then you just implement a byte handler, in and out, to do whatever it was.
And so in this case, the first simple version was: okay, let's just rewrite requests and get this shipped out the door so people can play with it. Immediately, people asked for more. They're like, okay, I don't want to just do rewrites; I want to change everything. And that's exactly what you want from a proof of concept: you want to get end users, not just theoretical stuff going on. During that period, people were asking, well, why don't we use the same thing that Envoy does, which is Proxy-Wasm? The problem with that was that Proxy-Wasm was basically modeled around Envoy itself, around its type of lifecycle, and the hooks were very much related to that. So for example, it had layers below the HTTP abstraction, and even a gRPC abstraction, even some time-based tasks, all in the same box. And that just wasn't a clean fit for a component that's literally supposed to be HTTP and HTTP alone. So we actually created a different SDK base for WebAssembly and compiler use cases: an ABI instead of an API. Basically, if APIs are for services, ABIs are how compilers communicate. And we designed this to be a lot faster than Proxy-Wasm, but the main thing was actually to make it more developer friendly. So for example, you can do async hooks and offload things on the request and response, but you can also be completely synchronous. You can see here that you have not just URI substitution, but also the ability to serve a static response directly in the WebAssembly, so it doesn't ever jump out to the host on those lines. It's far, far faster. And yeah, that's how it came out from a design perspective. And Mauricio is going to show you exactly how it works. Yeah, let's just see it working. In the example I was showing before, we are just going to extend that.
And to extend the example I created before, where I showed the application sending requests to the database and so on, what we're going to do is extend it using the HTTP middleware component that Long and Adrian introduced here. And we are going to use the wazero runtime that's already embedded in the Dapr sidecar to run it. In order to do that, we need to create the filter .wasm file, in this case an HTTP filter that we are going to include in our HTTP chain. And for that, we need two resources, two configuration resources in Kubernetes: the middleware component, and the Configuration resource that allows me to connect my application and say, okay, I want the filter in this specific application. So let's jump into that. Let's do it pretty quickly, because I'm pretty sure we will run out of time. First, let's take a look at the filter in Go. Again, we are writing a filter in Go, and then we are going to use TinyGo to compile it to WebAssembly. So let me zoom in; let's see if you can see it okay. This is a simple HTTP filter like the one Adrian was showing, but it's a little more tailored to the application we are writing here. You can see that the handle-request code here basically receives an HTTP request and an HTTP response, and inside the body you can actually do whatever you want with the request. You can write any kind of filter here. What we are doing here is a simple thing: we are just parsing for emoji tags and replacing them with the emoji code. And unfortunately our platform... This is code. This is business code. This is whatever we want to do. And this is the extension we want to build without changing either our application or the Dapr binaries in this case. We want to inject this code into the platform. Unfortunately, our platform doesn't support cats today.
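The heart of that filter, the emoji-tag parsing, can be sketched in plain Go along these lines. This is illustrative, not the demo's source: the function name, the tag set, and the censoring behavior for unknown tags like `:cat:` are assumptions; the real filter wraps logic like this in the http-wasm handler interface before TinyGo compiles it to a .wasm file.

```go
package main

import "strings"

// emojiCodes maps the ":tag:" tokens the filter understands to emoji.
// The tag list is illustrative; :cat: is deliberately absent, since
// "our platform doesn't support cats today".
var emojiCodes = map[string]string{
	":dog:":   "🐶",
	":smile:": "😄",
}

// emojify replaces every known ":tag:" in the body with its emoji and
// censors tags it does not recognize, e.g. emojify("hi :dog:").
func emojify(body string) string {
	out := body
	for tag, emoji := range emojiCodes {
		out = strings.ReplaceAll(out, tag, emoji)
	}
	// Censor any remaining ":tag:" the platform does not support.
	for {
		start := strings.Index(out, ":")
		if start < 0 {
			break
		}
		end := strings.Index(out[start+1:], ":")
		if end < 0 {
			break
		}
		out = out[:start] + "[censored]" + out[start+1+end+1:]
	}
	return out
}
```

Because this runs in the middleware chain, every service behind the frontend sees the already-rewritten payload.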
We had a CVE, and we will need to patch it later on. But for now, this is what we have. So as I mentioned before, we can use TinyGo, another open source project, to take that Go file and transform it into a .wasm file. We just run the compilation. And then of course, because now we have a file here in our file system, we need to make it available to our application running on Kubernetes, right? So what we have done for now, and this is the initial solution (we are looking into different approaches), is to wrap the file in a ConfigMap, put it in my cluster, and then mount it as a volume in my pod so I can actually consume it from there. But as I mentioned before, we have two resources here. The first one is the middleware component. As you can see here, the middleware component allows us to say: okay, we have an HTTP filter, in this case written in Wasm, and we can set the path to that file. In this case, again, it's a mounted volume; somebody will need to mount it, and then we just consume the file. Here's where we are looking at options for OCI registries, so we can actually fetch these wasm files from container registries; that should make this much easier. And finally, the other thing I wanted to show here is the Configuration resource, which is how we wire things together. Again, it's an HTTP chain, so we can have multiple filters; that's why we have an array here. For now, we just have this one simple filter, implemented using the middleware component called wasm. I already applied this to my cluster, so if I do get components again, you will see that there is a wasm component there already. And I also have a Dapr Configuration, in this case the app config, that I need to wire into my application.
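Those two wiring resources might look roughly like this (the component name "wasm", the mount path, and the TinyGo flags in the comment are assumptions; `middleware.http.wasm`, the `url` metadata field, and the `httpPipeline` handler array are the documented Dapr shapes):

```yaml
# filter.wasm is produced with something like:
#   tinygo build -o filter.wasm -scheduler=none -target=wasi filter.go
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: wasm
spec:
  type: middleware.http.wasm
  version: v1
  metadata:
    - name: url
      # Path as seen by the sidecar; a mounted volume today,
      # hopefully an OCI registry reference in the future.
      value: "file:///filters/filter.wasm"
---
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  httpPipeline:
    handlers:
      # An array: multiple filters can be chained here.
      - name: wasm
        type: middleware.http.wasm
```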
So let me apply my resources, which I've modified a bit with all these changes so we can mount the volume; I will show you that after applying it. This is basically now configuring my application, just a configuration change, to actually start using this filter. And let's take a look at the deployment file for the application. What you can see here is that my frontend application is now mounting a volume from a ConfigMap that contains the wasm filter. And remember, I don't want my running frontend application to have anything to do with the filter itself, right? The application is not even aware of it. That's why I need to use this Dapr annotation to let the Dapr sidecar know that it needs to mount the volume under this path, and that we are going to consume the Configuration that contains the array of filters we want to define for our application. As soon as this configuration is applied (let's take a look at the pods here and make sure my latest version was deployed, like 46 seconds ago), we can do the port forwarding again so we can access the application. Let's go here to Safari. Let's refresh to make sure it's still there. It still has the data, right? But if I say hi now, we should see this working. Hey, now we have emojis, folks. This is impressive. But if you thought HTTP filters are boring, take a look at this. This is gold. And because it's happening at the frontend level, all the backend services, for example the notification service, will already have that payload; everywhere else we will see the result of that filter. As I mentioned before, unfortunately, we don't have cats. So if I type hi :cat:, that gets censored. This is where you add your business logic; whatever you want to do with the request, you can do there.
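A sketch of the deployment change being described (app names, image, and volume names are assumptions; the `dapr.io/*` annotations, including the sidecar-side `volume-mounts`, are the documented Dapr Kubernetes annotations):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "frontend"
        # Wire in the Configuration that lists the filter chain.
        dapr.io/config: "appconfig"
        # Mount the pod volume into the *sidecar* at /filters, so the
        # middleware component's file:// path can find filter.wasm.
        # The app container never sees or knows about the filter.
        dapr.io/volume-mounts: "wasm-filters:/filters"
    spec:
      containers:
        - name: frontend
          image: example/frontend:latest
      volumes:
        - name: wasm-filters
          configMap:
            name: wasm-filters
```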
But if you actually are a dog lover like myself, you get some dogs, again, in every service. So that's what I wanted to show: how you add these things together. This is, again, just the start of what you can do. Yeah, some interesting things I thought about: if you did want to get the cat-enabled version, right? Exactly. You just basically roll it out like a config change. The other thing is that you didn't see any pod side effects, because it's embedded; it's in the same process as the sidecar. So the sidecar is actually inlining that logic within it. Excellent. So that's kind of cool. Yeah. So we saw how Dapr in general can do a couple of things, which was the first part of the presentation, and this first experience with WebAssembly, which is tuning the HTTP middleware chain. And everything we talked about is not a matter of wait until next year and then you can download it, or of doing some patch and hacking things. This is already out in Dapr 1.10, so you can just do this now. No hacks required. What's next? Well, we did notice that Istio has a bit more experience with WebAssembly. So for how to source the WebAssembly files, we pretty much decided to just go ahead and do exactly what Istio is doing for Envoy: use OCI-based paths, or HTTP, whatever. If you're interested in this kind of work, it's actually not terribly difficult, so you could help us contribute there and get some experience with WebAssembly. It would be fun. We also have another component which is pretty interesting and already half implemented: the output binding. Because Dapr can handle things beyond HTTP (you saw that there's messaging, subscriptions, all sorts of stuff), with output bindings you can imagine having inbound events and then a message processor just dangling off of them, running as a Wasm module instead of routing into another component.
So you can get FaaS-like capabilities in Dapr that way. And then we still have the ongoing discussions about generic extensibility. And I think the thing is that bringing in more experience in different well-understood places of code, using WebAssembly there, is a great approach and much easier than going to generic extensibility first; there's too much context to learn. So let's sum it up. Okay. We've got Dapr, this distributed application framework, basically. And we have a way for developers who have yielded to the framework, who said, okay, I'm going to give all this responsibility to the sidecar, to now actually influence the behavior of the sidecar without custom builds, which is awesome. That was made possible in part because of wazero, the project I work on, because Dapr happens to be a pure-Go project. That was quite cool. So we joined forces to make it so people can have more flexible infrastructure without all the work of doing custom builds and the maintenance that comes with them. Yeah, I think that's a very important point. We are adding a different level of configuration and fine-tuning for platforms and for projects that are already built. And I do see this happening more and more often in different CNCF projects. This is becoming a way of extending projects without changing them, a way of deploying configuration and code instead of upgrading the entire stack because you need to add new extension points. So we're going to hang out a little bit for questions if you have any, but thanks a lot for joining, and we do want to thank Long, even though he couldn't physically be here, because he was a really pivotal one in all of this design and implementation. Thank you very much. Any questions? I know we threw a lot of information at you. Yeah, file system support; that's for you. Yeah, so the question was: what about file system support?
So WebAssembly, by default, doesn't have any ability to access files on the host that's running it, but there is a system called WASI which basically gives you system calls, like file-open type commands. wazero, the runtime that Dapr uses, supports that. So sometimes people are using files for configuration, or they want a SQLite database that they use inside their app logic, and it's not actually very difficult to add. It's more a question of how to bound it, and what I would suggest for folks who have a use case that requires the file system is to bring that use case, with the request, into the issues list. That way we make sure that whether it's read-only, or read-write, or virtualized files, it gets into the configuration. Right now the configuration is pretty bare: we just have the wasm path and that's it. So basically it's an iteration, and I don't see any problem implementing file support; it's just a matter of how, and what people want to do with it. Other questions? Okay. When, for example, the sidecar starts, is it going to load this every time, or just one time? And will loading one of these WebAssembly modules take the same amount of time? Yeah, so the question is: what's the lifecycle of the WebAssembly module? Basically, at the moment Dapr is not doing anything like file watching or dynamically looking at the file system to reload itself, so the compilation phase happens before any request occurs, and that's a one-time thing. So for example, when it loads a module, it translates it into machine code that it actually invokes. It's amazing; we have a page on how that works. It's really cool. But then that's just held there in memory, and all requests go through it. Concurrency is actually handled in Dapr itself.
So Dapr has controls for how many simultaneous requests can go through, and that's actually what gates how many of these module instances are going at once; the actual module is held static until reboot, basically. Yeah, so from the Kubernetes point of view, when the pod starts, it mounts the volume and reads the file, and that's the version of the file it's going to use for its own lifecycle. Exactly. So if you want to update it right now, we don't have any refresh functionality; you basically need to update the file in the ConfigMap and then, of course, restart the pod, right? Yeah. But that's coming, right? Yeah, some folks asked whether there should be a dev mode that watches the disk and such; there are actually threads on this topic, and the main thing is that we really want all this stuff to be user driven. So if you have specific use cases, that will make the config switches much more relevant for everybody. Yeah, and again, the idea of having that repository with a step-by-step tutorial is that you can try it out. At the end of that repository, there is also a much simpler example: you don't really need to install all these things, you can just start with WebAssembly and TinyGo. But for a full version of an application running, where you really want to try the lifecycle operations and all that stuff, give it a try. We're out of time. Thank you very much, folks. Thanks again.