Hi, everyone. My name is Yaron Schneider, CTO and co-founder of Diagrid. Before Diagrid I worked at Microsoft, where I created the Dapr project and also KEDA, the Kubernetes Event-driven Autoscaler. Both were donated to the CNCF: KEDA together with Red Hat, and Dapr by Microsoft. I started my own startup in January of 2022, and we work with lots of major enterprises across the world on their microservices journey and their adoption of Dapr. I'm really happy to be here to talk about what we do in the Dapr project for Wasm and for developers who are looking into moving to Wasm, together with Michael, who has also contributed a lot to the Dapr project. So thank you. We're going to start by talking a little bit about Dapr first. So what is Dapr? As a developer, you might recognize this fictitious e-commerce application here. It's a general blueprint; you've probably seen millions of these. You have a bunch of services, seven or eight here, but it can grow to a hundred or even 500 microservices, which we've seen, and they need to talk among themselves, or to a database, a configuration store, or a cache, and they need to be long-running, stateful, and event-driven. It looks simple on the surface, but at some point you hit hundreds, if not thousands, of little problems that hinder developers: distributed systems challenges. You've made your choice about running on Kubernetes or on AWS or whatever platform you're running on. But then you get down to questions like: how do I actually create partitions against Kafka? How do I secure access not just between my services, but between my app and the database? How do I apply zero trust holistically across my entire infrastructure stack? How do I create consumer groups? And if you're in a multicloud world, you have to maintain multiple code bases.
And this becomes a major issue as your scale grows. This is where Dapr comes in: a set of distributed systems APIs that help developers focus on their business logic and not on the underlying infrastructure code. Dapr gives developers primitives to write long-running, stateful, or event-driven applications, to fetch state, secrets, or configuration, and to handle state at scale from multiple services. And it does that in a very agnostic way, by exposing HTTP and gRPC APIs. As long as your application can speak these protocols, you're good to go with Dapr. On top of that, we add SDKs and integrations into the major platforms that exist out there today, such as Express for Node.js, FastAPI for Python, and many others. And it really runs as a process on top of any infrastructure. It doesn't need to be containerized; it can run as a process on your local dev machine, whether Mac, Windows, or Linux. Obviously, most Dapr users today run it on Kubernetes, because that's the de facto model of choice for running multicloud applications, but it can also run on bare VMs. So does Dapr replace a database or a pub/sub? No, it does not. The reason Dapr has become so popular is that it integrates with whatever existing stack you have. If you as a developer want to become more productive and get these best-practices APIs out of the box, secure and reliable by default, you don't have to give up your infrastructure. You can still use Kafka or whatever cloud infrastructure you're on, and just pair Dapr up with it. At this point you might be thinking: hey, is Dapr a lowest common denominator? The answer is no. In many cases Dapr actually adds features on top of what you would find in the native client drivers. One example of that is write concurrency with Redis.
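As a rough sketch of what that looks like on the wire: Dapr's state API supports optimistic concurrency via ETags, so a caller can request first-write-wins behavior with a plain HTTP call. The sidecar port (3500) and the store name ("statestore") used below are Dapr's documented defaults, assumed here for illustration.

```rust
/// Build a Dapr state item that requests first-write-wins semantics:
/// the write only succeeds if the stored ETag still matches.
fn save_with_etag(key: &str, value: &str, etag: &str) -> String {
    format!(
        r#"[{{"key":"{key}","value":"{value}","etag":"{etag}","options":{{"concurrency":"first-write"}}}}]"#
    )
}

fn main() {
    // POST this body to http://localhost:3500/v1.0/state/statestore
    // (assumed default port and store name); a stale ETag makes the
    // sidecar reject the write instead of silently applying
    // last-write-wins.
    println!("{}", save_with_etag("order-1", "shipped", "1"));
}
```

Any HTTP client available in the application's language can send this; no Redis driver is linked into the application itself.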
So with Redis alone, you don't get a choice of first-write-wins or last-write-wins; it's always last-write-wins. Pair Dapr with Redis and suddenly you get first-write-wins on the open source version, without paying a vendor for very expensive enterprise-ready binaries. So Dapr adds a lot of these features, and it has over 120 components available to connect to basically every type of cloud service or open source infrastructure service you can think of. And if we don't have one, please open an issue on our repository and someone from the community will pick it up. Talking about community: we are the 10th largest project in the CNCF, rapidly growing, built by a great number of companies. We have almost 3,000 contributors; I think if the people here in this room were to open a Dapr issue today, we would surpass 3,000, because we're pretty close. We have over 6,000 Discord members, and the project is rapidly growing. We see a lot of major vendors working to improve Dapr because their end users are asking for Dapr-specific features. Now that we've covered what Dapr is, we can talk about how microservices are represented in Wasm, and then come back to how Dapr can benefit Wasm developers. Michael? Thank you, Yaron. I think this journey started about a year and a half ago. Last year, around this time at KubeCon, we talked about collaboration between Wasm and Dapr. Originally, in my mind, Wasm was just going to provide a new way to run microservices in Dapr. But it turns out Dapr also helps Wasm get adoption, because microservices form a single ecosystem, with a lot of components that raise each other up.
So I'll start from why Wasm is a good choice for microservices, because that's perhaps one of the big use cases for Wasm today: people want to run serverless functions and microservices on this technology stack. Since this is a Wasm-savvy audience, not an introductory one, everyone has probably seen this diagram before. The history of virtualization started from the virtual machine, which is heavyweight and looks mostly like a computer, and then moved to the container in the cloud-native environment. The container is lightweight, but it's basically still an operating system; it's still dragging in the whole operating system. And then we have WebAssembly. The word "lightweight" has already been claimed by containers: when we tell Kubernetes people "lightweight virtualization solution," they think container, so Wasm needs some other word for it. But in the end, Wasm is something that doesn't look like an operating system anymore. It looks more like a Java virtual machine, but it performs the same kind of function. You write an application, you have an SDK, you have APIs, and the application has access to operating system features like the file system and networking. The chief benefit is that it's very fast and very small, and can truly fit the word "microservice." I think I've shown this before, but I want to emphasize it again: early this year we created two demo applications in Wasm. We wrote them in Rust and connected them to databases to build a typical three-tier application. The business logic is all in Wasm.
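To make the "looks like a JVM, but with OS access" point concrete: ordinary Rust standard-library code like the sketch below compiles unchanged to the wasm32-wasi target (e.g. with `cargo build --target wasm32-wasi`) and runs under a Wasm runtime, with file system access going through WASI rather than a bundled operating system. The file name and function are made up for illustration.

```rust
use std::fs;

/// Plain std-library logic; the same source runs natively or,
/// compiled to wasm32-wasi, inside a Wasm runtime such as WasmEdge.
fn greeting(name: &str) -> String {
    format!("hello, {name}")
}

fn main() -> std::io::Result<()> {
    let msg = greeting("wasm");
    // File system access is mediated by WASI when running as Wasm.
    fs::write("greeting.txt", &msg)?;
    println!("{}", fs::read_to_string("greeting.txt")?);
    Ok(())
}
```

The same binary mechanics apply to the demo applications described here: the whole module, business logic included, stays tiny because no OS image is dragged in.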
And the Wasm container itself is managed by Docker Desktop. So you have three containers: the front end, the database, and the application server written in Wasm. The entire application server written in Wasm is under one megabyte; in this case, the one connecting to Redis is 700 kilobytes and the one connecting to Postgres is 800 kilobytes. Coming from the Java world or the Python world, where the container image itself is hundreds of megabytes, it's refreshing to know that using Wasm we can truly make the microservice micro, and put the "micro" back into "microservice." The project I'm part of is called WasmEdge, and it's a Wasm runtime under the CNCF. Our goal is really to add more cloud-native features and functionality into our Wasm runtime. And because we put such a focus on the cloud-native use case, we collaborated with a lot of organizations and projects to make it manageable by existing container tools. These are the tools that currently support WasmEdge: Docker Desktop bundles WasmEdge, and you can use it in Kubernetes through containerd, which is work Microsoft has done. The whole Red Hat and OpenShift stack is supported as well, including CRI-O and Podman, and it's upstream in Fedora Linux and in Red Hat Enterprise Linux. So you have all those tools that come with WasmEdge pre-installed: if you have Fedora Linux, you can just yum install WasmEdge and you'll have it, and in an OpenShift environment you can have a WasmEdge image running there. So when we say WasmEdge is the cloud-native Wasm runtime, fitting for microservices, what does that even mean?
It means we have added features into the runtime that are not yet standard but are currently being standardized. Perhaps the most important one is high-performance networking, and it's the basis of our collaboration with Dapr. If you look at WASI, one of the glaring gaps is that socket networking was added very late, and the socket networking currently in WASI is rather simplistic: there's no DNS, there's no TLS, and it's blocking. Once the application opens a network socket, it blocks and can't open a second one. So in the microservice environment, you open a socket to receive an HTTP connection, it blocks the main execution loop, and you can't open a database connection to query the database anymore. Problems like that. So we put high-performance, non-blocking networking into the WASI stack that ships with WasmEdge. Doing that allows us to use existing networking libraries that exist in Rust and in JavaScript, for instance. In Rust you can use Tokio, you can use hyper, you can use TLS libraries, you can use warp, you can use reqwest; all those networking libraries in Rust can just be compiled to Wasm and run in WasmEdge. JavaScript is the same: you can do fetch, you can run a Node.js-style server, through a QuickJS runtime running in WasmEdge as well. So that's one aspect of the features. Through the networking socket support, we can support connections, or drivers, to infrastructure services. For instance, say: how do I connect to a MySQL database?
The most fundamental way is to open a socket connection and have the MySQL driver handle the communication between the database and the application server at the socket level. Because we support non-blocking sockets in WasmEdge, all those database drivers written in, say, Rust or C++ can be compiled into Wasm and used directly in the Wasm application. That allows us to integrate with MySQL, Postgres, and all those databases, and also messaging queues like Kafka and Redpanda. By adapting those existing driver libraries, we are able to make direct connections from WasmEdge applications to those external services. That's part of our work, but it's not enough, because there are only so many things we can adapt and recompile. Each of them has its own little problems when you compile to Wasm: maybe it's missing a library, maybe the database uses a particular encryption library you have to support, things like that. Doing it on a per-SDK level is a chore. That's where Dapr helps a lot, which we're going to cover soon in this presentation. But before that, let's talk about some other cloud-native features of WasmEdge. Perhaps one of the more interesting things is that, through the specification called WASI-NN (WASI neural network), it supports all the popular machine learning backends, or AI inference backends.
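Before moving on, here is a tiny standard-library-only illustration of the non-blocking point above, with a loopback listener standing in for a remote service. With the older blocking WASI sockets, holding the first connection open would prevent opening the second one; with non-blocking sockets, both can be in flight from a single-threaded event loop. This is a generic sketch, not WasmEdge-specific code.

```rust
use std::net::{SocketAddr, TcpListener, TcpStream};

/// Open two connections to the same address without letting either
/// one block the single-threaded execution loop.
fn open_two(addr: SocketAddr) -> std::io::Result<(TcpStream, TcpStream)> {
    let http = TcpStream::connect(addr)?; // e.g. an inbound HTTP peer
    http.set_nonblocking(true)?;
    let db = TcpStream::connect(addr)?; // e.g. a database connection
    db.set_nonblocking(true)?;
    Ok((http, db))
}

fn main() -> std::io::Result<()> {
    // A local listener stands in for the remote services.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let (a, b) = open_two(listener.local_addr()?)?;
    println!("two live sockets: {} and {}", a.local_addr()?, b.local_addr()?);
    Ok(())
}
```

This is exactly the pattern a database driver compiled to Wasm relies on: the driver's socket and the server's listening socket coexist without blocking each other.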
Meaning that, in that regard, it's a Python alternative. Python is built on top of C: those machine learning frameworks are all C libraries, with a layer of Python on top, and then you use Python to do machine learning and things like that. Wasm works the same way: you can compile Rust to Wasm, or run JavaScript in Wasm, or Python in Wasm, and then interact with the underlying framework. Through that bridge, we can directly run model inference in those AI frameworks. And through integration with data processing libraries, for instance OpenCV and FFmpeg, we can do video processing, image processing, language processing, and so on. All of those are API plugins that you can plug into the WasmEdge runtime so that your Wasm application has access to all those libraries. And by having OpenCV and PyTorch together, we are able to run a large number of machine learning models in WasmEdge. In WasmEdge today, you can run the whole MediaPipe library from Google, which is a large collection of stream processing libraries for things like image recognition. You can run YOLO models. You can run large language models like Meta's Llama and Llama 2, because those models are in PyTorch as well.
So you build the language tokenizer in Rust, have the PyTorch plugin in your WasmEdge run the inference, and once the inference comes back, you construct it back into language and return it to the users. We have demos in other talks if you're interested; we can talk after this. But while WasmEdge supports all those things, it can't possibly support everything in the microservice ecosystem. Like Yaron said, there are over 120 enterprise components. So that's where Dapr can improve the Wasm experience. Thank you, Michael. Okay, so let's talk about the problem statement first. Let's say you're a developer, and you've decided to write your first code in Wasm, or to port something, or to start something new. But there is a problem, which is what Michael just finished his talk with: most cloud services are simply not reachable from Wasm, because you can't take the client driver you used in a container or on a VM and just use it in Wasm. It just won't work. You're dependent on many things, for example the Wasm runtime of choice you're using and its implementation of sockets, or the networking primitives required to create these connections. And then common microservices patterns are just not codified. Wasm gives you many great things, like secure sandboxes and high performance if you really need to spin up processes that handle requests at very low latency, but from a developer's perspective you still have all of these distributed systems challenges to face. So how does Dapr improve that? Well, we didn't intend to.
The Dapr project didn't really have Wasm on its radar for a very long time, but today Dapr can connect you, through very simple gRPC and HTTP interfaces, to over 150 cloud services. If you're running in a multicloud world, that's awesome, because you get to maintain a single code base; but even on a single cloud it's still awesome, because you can give different development teams with a polyglot experience the same APIs, and have those APIs be secure and reliable out of the box. Dapr also has built-in authentication. With Dapr you can say: every time I talk to a database, kick off this OAuth2 or OIDC authentication provider and create that authentication to my infrastructure of choice, transparently. And you can apply rate limiting and concurrency controls, timeouts and circuit breakers, to whatever infrastructure you're using. So Dapr gives your code all of these platform capabilities, and developers use that today; but if you're running in Wasm you're more constrained, so this becomes even more important. And then of course policies. Dapr also has a bunch of Wasm integrations today. First of all, we can execute Wasm binaries as middleware. If, for example, your code tells Dapr, "hey, save this data to a database," Dapr can execute a Wasm function that takes in the data, does some transformation, then writes it off to the database. You can also apply this transformation when Dapr consumes events and sends them to your application through pub/sub, for example. All of these things can be given to Wasm developers, and they're important because they are really, really hard to build when you're running in a constrained sandbox. So let's take one example real-world scenario. I bet you've seen this thousands of times, if not millions. We have an application, and this app, let's say, needs to concurrently subscribe to events and process them. So today, you
know, you have the host, you have the Wasm runtime, you have the module here on the left, and you need to process a batch of events. I think I have a slide missing, but I'm just going to talk to this. Say you had 100 items all delivered to your application, and you need to process them non-sequentially because you want to speed things up. What would you do? You have code, it receives 100 items, you don't even have to make database calls, you just want to parallelize the work. In Java, for example, you would probably open a thread: multithreading, the most basic primitive in the world. Unfortunately, it's not available in Wasm yet. I think it's being worked on, and concurrent I/O is being worked on, which is nice if you need to access a database; but if you just need to spin up a few threads to batch the work and finish faster, you'll actually get lower performance with Wasm in that case. What you can do with Dapr is different, because Dapr doesn't adhere to those limitations; it's a process that sits outside. It can connect to whatever cloud infrastructure you're using, subscribe on your behalf, and do the multithreading for you, because it doesn't have any I/O or threading limitations, and then deliver all of these events in a batch to one Wasm function. So instead of spinning up multiple Wasm modules, which is what you would otherwise do to handle 100 requests in parallel, you can use one. And the reason you want one is that while Wasm has very, very low latency, it is still latency: spinning up a thread is, in most languages, about eight times faster than spinning up a Wasm module. Don't take my word for it, test it yourselves. So with Dapr you can get all of these events delivered to a single Wasm function, and you don't have to do the processing sequentially. Yes, so this is great, and we can do that.
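The fan-in idea can be sketched in a few lines: the sidecar has already done the subscribing and collecting, and the module processes the whole batch in a single invocation instead of paying instantiation latency once per event. The event shape and handler below are invented for illustration; Dapr's actual bulk delivery wraps events in CloudEvents envelopes.

```rust
/// Process a whole batch of events in one Wasm invocation, instead
/// of instantiating a module per event.
fn process_batch(events: &[&str]) -> Vec<String> {
    events
        .iter()
        .map(|e| format!("processed:{e}")) // stand-in for real business logic
        .collect()
}

fn main() {
    // The Dapr sidecar subscribed and fanned in on our behalf; the
    // module just receives the already-collected batch.
    let batch = ["order-1", "order-2", "order-3"];
    for r in process_batch(&batch) {
        println!("{r}");
    }
}
```

The per-event work inside the loop can still be whatever the business logic needs; the point is that only one module instantiation is paid for the whole batch.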
With Dapr, that works not just for pub/sub but also for bindings, to connect to all sorts of infrastructure services. And because of improvements made between Dapr and WasmEdge, you can do another thing that would otherwise be very hard from a Wasm module: you can make an outgoing HTTP call back to Dapr to talk to a database, any of the over 150 databases and components that we support. So for example, if you're using, I don't know, CockroachDB, and it doesn't have a client driver natively supported within Wasm, you can just issue an HTTP call, which is supported in Wasm and supported really, really nicely inside WasmEdge, to talk back to the Dapr sidecar, which gives you that interface to your cloud provider of choice. I think we have one more slide about how Dapr is supported with WasmEdge today. I'm really happy to announce that Second State and Michael have donated a Dapr Wasm SDK as a sandbox project to Dapr, which means you can now use an SDK from within your Wasm applications to talk to the Dapr sidecar. Yes, you could use raw HTTP, but it's a much better experience if you can just download the crate, for Rust for example, and talk to the Dapr sidecar directly. So that is supported; they've done the work to support most of the Dapr APIs, and it's really, really great. Again, thank you for that. Our steering committee has voted to accept it into the project, and that's going to be a major part of how Dapr is going to meet developers where they are. We want to make it easy for developers to engage with Dapr if they're running inside Wasm code, which is more restrictive, and this is our way to make sure Dapr really integrates with the best-of-breed Wasm runtimes out there. There's going to be lots more API support coming, and through this you can basically have your code talk from a Wasm module to all of the underlying infrastructure
components that Dapr supports. This shows just a few, but there are over 150, and underneath all of them you get out-of-the-box telemetry. Have you tried instrumenting your code for OpenTelemetry from within Wasm today? Well, you're in for a doozy if you try that one. But with Dapr, once you talk to an API, it instruments the calls on your behalf, so you get observability out of the calls: tracing, metrics, OpenTelemetry integration. And resiliency: you can put retries and circuit breakers before and after your Wasm code executes, which is pretty powerful. And I think you want to talk about that, Michael? Oh yeah, sure, thank you, Yaron. We are really excited to have our SDK be part of the Dapr community, because the Dapr community maintains multiple SDKs and now there's a Wasm SDK among them. Like Yaron said, we have instantly gained access to 150 or more enterprise components, and that's still growing, because that's a community that keeps building components. The Wasm community is not building those components, but through the integration with Dapr we now have access to them, which is really nice. So the slide before showed how the SDK connects to all those Dapr services, but this one shows its use inside the Wasm application. This purple box is a Wasm sandbox. The application is divided into three parts. One is the HTTP server: we create a server in Rust, say using the Rust hyper API or the warp API, and open HTTP ports that listen for incoming connections. The incoming connection comes from, say, a load balancer or an HTTP proxy, as in a typical microservices pattern. And then, once the request comes in, an internal router knows which business logic function it's going
to invoke and how to process the request. If the request requires an external service, for instance a message from an external messaging queue, or authentication, or database access, it then uses the Dapr SDK for Wasm (it's being renamed to the Dapr SDK for WasmEdge) to open a socket connection to the Dapr sidecar and ask the sidecar to perform the functionality: go to the database, or go to the messaging queue, and get the results. When the results come back, you complete the business logic function and return the results to the server. I'd say it's a very straightforward process for the Wasm application developers: all the functions we talked about are embedded in the Dapr SDK, so from the Wasm developer's point of view you don't even need to know about, say, opening a socket and things like that. All you see is: I want to call my Dapr sidecar that is attached to me. Like Yaron said, the Dapr sidecar could be in a container or in another process running locally. So you just say, "I want to ask my Dapr sidecar to perform this function," and it is performed for you automatically. And we actually do have a demo that demonstrates how the whole thing works together; if you're interested, this is the URL. The three green boxes are the three Wasm microservices. Those services are written in Wasm, and they perform different functions. For instance, there's one called image-grayscale, meaning it takes an HTTP POST that contains an image, and it would
turn the image into grayscale and then return the result. Image-classify is a demonstration of the AI and machine learning capabilities: the microservice has a PyTorch model in it, so it takes the image, does the image classification, tells you what's in the image, and returns the result. And then there's an event service that serves both of them. Essentially, the idea is that when the image comes in, the service performs the business logic, but then, through the event service, the event gets filtered, transformed, and stored into a MySQL database. So how do you do that in a decoupled, microservices way? By having those services run independently, so they don't know about each other. The image-grayscale service doesn't know about image-classify; both of them know about the event service, but they don't know where to find those services or how to connect to them. And each service has a Dapr sidecar attached to it. The Dapr sidecar could again be another container attached to the app container, or another process attached to the app process. So from the image-grayscale service, I just say: someone sent me an image, here's the result I'm returning, but I want to record this event. So it tells its own Dapr sidecar; its Dapr sidecar knows where to find the event service's Dapr sidecar, sends the event to that sidecar, and then the event service sends it to the database. Of course, this is just one example of an architecture: the database can be attached to a Dapr sidecar as well, so the event service doesn't have to manage its own database connection. The idea really is that each microservice
only needs to know the Dapr sidecar attached to itself; it doesn't need to know anything else on the network, and those sidecars talk to each other. That's what allows you to do discovery: you can just say, "I want the event service, find me one," or "I want the database, find me one," and those interconnected sidecars will find them and execute your command for you. Those demos and examples are inherently complex, so if you go to that GitHub repository, we have a whole GitHub Actions script set up that goes through the process of installing Dapr, starting all those services, sending one request to one of the services, and then checking the database, so you can verify that an event happening on one service actually gets recorded in the database. I don't have time to truly demonstrate it here; if we had a workshop I would go through it all for you, but we don't. So if you go there and start the services on your own computer, eventually you'll get to a web page, and the web page is the front end for the image-classify service. You upload an image, the service runs the PyTorch model and tells you what's in the image, and on the back end you can query your MySQL database and see the events get properly processed and recorded, almost magically, because your image classification service has no concept of the database and doesn't know where to find the event service on its own. It's all handled by the sidecar; you just make an API call. So then we'll also talk a little bit about
future plans. The future plan is really that we look forward to your contributions. Like Yaron just said, we have recently donated this project to Dapr, and it has become a Dapr sandbox project, and we would love for people who work in the microservices space to make contributions by enhancing this SDK. The short-term goal: right now we're communicating with the sidecar over HTTP, and there are still APIs that we haven't fully supported yet. Those make great individual contributions, because you can take an API, look at how the other APIs were supported, and just do the work. So if you're looking to get started making contributions to open source, I think this would be an excellent project to start with. The longer-term goal comes from the fact that HTTP still has overhead that impedes performance. We want to get feature-complete first, but for higher performance, and for more automatic generation of the SDK's APIs, we eventually want to support the gRPC protocol, built on top of WasmEdge's non-blocking socket support. I think all the elements are there, but it's a longer-term goal, because we have to adapt a suitable gRPC library on top of our socket library to make that work, and then port those gRPC APIs. That's really important, and I just want to strengthen that point: today, calling the Dapr sidecar costs you about 0.6 milliseconds of latency, so 600 microseconds is what it takes to call into the Dapr sidecar. That's very low latency, and we can even improve on that if we
support gRPC, because Dapr also supports Unix domain sockets. Once you can open a gRPC connection over Unix domain sockets, the latency goes even lower. And by the way, calling Dapr over the network today with HTTP is still faster than launching a Wasm module; again, if you don't believe me, try it out. So this can improve our performance even further, and WasmEdge is doing a really, really great job at supporting all of these advanced programming models. gRPC is really important because it's widely used, and it's great to see that your project is doing the work to benefit all developers, not just people using Dapr. Yes, you're exactly right: once we support gRPC, it opens new horizons for other applications that are going to use Wasm as well. So I think that's it. Thank you so much. Thank you. And if you have questions, we're going to hang around for a couple of minutes here. I think we're a little bit over time, but if you have a question you can ask now, or you can talk to us afterwards. Thank you.