Hi, my name is Kendall Roden. I'm a technical product manager at Diagrid, and today's session is going to be all about Dapr, the distributed application runtime, and how it makes authoring microservices and building applications easier for developers.

If we take a quick look at the agenda, we'll cover the Dapr value proposition: why should you care about Dapr, and how does it help developers? We'll then do a Dapr overview, talk a little bit about the Dapr core concepts, the architecture, and how you use it, and then we'll actually get up and running with Dapr on a local machine. After that, we'll dive a little more into the Dapr APIs that are available, and we'll see a few of them in action in Kubernetes. Then we'll dive into one of the newer APIs, available as of Dapr 1.10. We're currently on the 1.11 release, so we'll showcase that new feature, Dapr Workflow, and do a demo on the local machine. Then we'll talk about some recent Dapr releases and some resources for getting involved with Dapr if you're interested, and we'll go from there.

We first have to step into the mindset of a developer today in this cloud-native focused world. Developers are dealing with a whole new set of problems that come with taking monolithic architectures and breaking them into microservices. As containerization has increased in popularity, the rise of Kubernetes platforms has had an impact on how solutions are architected and how code is written. Developers today have to deploy and release faster than ever, and on top of that, take into account a whole new spectrum of challenges that come with taking an application and essentially breaking its processes apart across the network. How do I make sure that these distributed applications can actually talk to one another and discover one another? How do I get tracing and observability for my entire solution? How do I do state management? How do I persist state as an individual microservice and ensure that state is secure and consistent? A lot of these challenges have been around forever, and some of them have really been introduced with the rise of the cloud-native approach.

I worked at Microsoft for six and a half years, and one thing that I heard from a lot of customers was: how do I create a blueprint for what it looks like to build a microservice? How do I make it super easy for my developers to be productive without getting bogged down by the complexity of a given platform, or instrumenting all of these concerns into their application code, when their real goal is to build the unique business logic that makes their product or service successful as a business? And so I love that Dapr exists, because it really came to solve that problem.

So Dapr, the distributed application runtime, as I've said, really came to alleviate a lot of these challenges that come with building cloud-native, containerized applications. And how does it do that? It provides a set of consistent APIs and SDKs that abstract away a lot of the complexities we were just mentioning on the previous slide. So it allows developers to really focus in on the core business logic that's a differentiator for them by handling concerns like service discovery, service invocation, state management, and, as I mentioned on the agenda slide, even workflow orchestration today.
And then things like observability and resiliency are really baked in, without all of that having to affect and bloat the application code a developer is focused on writing.

So in reflecting on what we've talked about so far, Dapr's goal is really to codify the best practices for building microservice applications. And how does it do this? Through an open and independent API model called building blocks. These building blocks essentially allow developers to build portable applications that are language and framework agnostic. And because the building block APIs are completely independent, developers can use one, some, or all of them as they're building out applications.

One of the big aspects of Dapr is that it is community driven and vendor neutral, and we've seen a ton of momentum in the community around Dapr over the past several years. Dapr is now the 10th largest CNCF project out of the 157 there are today, with over 2,800 contributors on GitHub and a very active community on Discord, which is always great to see. So it's very much a team effort in terms of the progress Dapr has made in such a short period of time.

So how does Dapr actually work? We're gonna do a quick at-a-glance view. We talked about the Dapr best-practice building blocks, and that's what we're looking at here in the middle of the screen. We see there are several building blocks: service invocation, state management, publish and subscribe, and so on. Once again, a lot of these represent a series of challenges that developers typically face when building distributed applications, now codified through Dapr. So you're able to offload this responsibility and communicate with Dapr over an HTTP or gRPC API from your code. What we see here is the migration of a lot of this plumbing code out of the application logic that developers are responsible for and into Dapr. Dapr can really run on any infrastructure. We typically see Kubernetes as being the target, but you can also run it on your local machine or on virtual or physical machines as well.

So now let's talk a little bit about how our application code can actually access the APIs provided by Dapr. Dapr exposes its HTTP and gRPC APIs through a sidecar architecture, essentially running as either a container or a process. This means your application code is completely separate from the Dapr runtime. In addition to interacting with the Dapr sidecar directly through gRPC or HTTP, there are also eight language-specific SDKs that Dapr provides. All of these essentially provide a typed language API for interacting with the sidecar's building block APIs. So we'll see here a list of a few of the APIs, all of which follow a very standard structure. And what's nice is that because Dapr is language agnostic, you can really use any combination of application frameworks and languages with this consistent API set.

So now that we've talked about how to consume these APIs, I thought it would be interesting to show an example of what it might look like to have an application solution that makes use of a variety of different Dapr APIs, because ultimately it is very plug and play. You can use one, you can use many; it really is up to you, which makes Dapr incrementally adoptable. So let's take a quick look at a simple example. In this case, we have Service A, and let's say Service A is triggered by some type of resource binding through Dapr.
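To make that URL convention concrete, here's a minimal sketch, not from the talk, of hitting two of the sidecar's HTTP endpoints directly with a plain HTTP client. The store name, app ID, method, and key here are illustrative; the /v1.0/... paths are the standard Dapr API shape:

```csharp
// Calling the Dapr sidecar's HTTP API directly, no SDK involved.
// DAPR_HTTP_PORT is set by the sidecar/CLI at run time; 3500 is the default.
using System;
using System.Net.Http;

var http = new HttpClient();
var port = Environment.GetEnvironmentVariable("DAPR_HTTP_PORT") ?? "3500";

// State building block: GET /v1.0/state/<store-name>/<key>
var state = await http.GetStringAsync(
    $"http://localhost:{port}/v1.0/state/statestore/order-1");

// Service invocation building block: /v1.0/invoke/<app-id>/method/<method-name>
var reply = await http.GetStringAsync(
    $"http://localhost:{port}/v1.0/invoke/publisher/method/orders");

Console.WriteLine($"{state} {reply}");
```

The same endpoints exist whether the app is written in C#, Python, Go, or anything else; the SDKs simply wrap these calls in a typed API.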
So maybe a file was added to a file store, or an event happened on a database; that could trigger application logic within Service A. And then in addition, Service A might make use of other Dapr capabilities, like retrieving state from a key-value store via the state API, or using the Dapr configuration building block to retrieve application configuration from a config store.

In addition to interacting with other infrastructure resources, we can imagine Service A might want to communicate with other microservices within the solution. In this case, we could use the Dapr service invocation building block, so Service A can communicate directly with Service B using the invoke endpoint. And at the same time, Service B might have its own requirements or features that it wants to make use of. So in this case, let's say Service B has secure secrets that it uses in its application code. It could use Dapr to call out to a secret store, whether that's running locally or in the public cloud, and retrieve those secrets, which it might then use to establish connections, or which might just be secret values that shouldn't live directly in application code.

If Service A also wants to make use of a different communication pattern, maybe something more event-driven or persistent, then it can use the publish API through the pub/sub building block to publish a message to a message broker. Now, Dapr not only allows you to publish to a broker, but will also subscribe on behalf of a series of services based on either a programmatic or declarative subscription that you can apply. Let's say we have two services that subscribe to this broker. Service D might, for example, be running a Dapr workflow. So in this case, there's a workflow SDK, the workflow code lives inside of Service D, and based on a recent feature, it can actually wait for external events. So maybe a workflow is waiting for human intervention, or for a particular type of message or event to arrive on the pub/sub broker. In this case, it did, so the application code might notify its Dapr sidecar: hey, Dapr sidecar, please resume orchestrating this workflow, I've received the event I was waiting for. And then Service C, another microservice, might need that same payload and message to take another action. So Service C could use an output binding to store the message somewhere, say a receipt that goes to blob storage, or to trigger another event on a database. Once again, the opportunities and possibilities are limitless, but I think this helps illustrate what it might look like to use and leverage multiple Dapr APIs within a series of microservices.

Now, something that's really nice about Dapr is that all of these APIs are great, but it has more value than that. It adds other capabilities, like cross-cutting concerns, that are applicable to this architecture pattern. When I'm building distributed applications and accessing infrastructure resources, I want observability: I wanna see traces between application invocations as well as interactions with external services, and Dapr provides that. In addition, Dapr also has a ton of configuration that you can apply in order to make your architecture more secure. Think things like access control lists and middleware. So once again, lots of good configuration settings to make sure that you follow security and governance constraints.
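As a rough sketch of the Service A / Service B interactions just described, using the Dapr .NET SDK; the component names ("secretstore", "pubsub"), the secret name, and the topic are all illustrative:

```csharp
// A sketch, not a verbatim sample: Service B pulls a secret, Service A
// publishes an event, both through Dapr building block APIs.
using System;
using System.Collections.Generic;
using Dapr.Client;

var dapr = new DaprClientBuilder().Build();

// Service B: fetch a secret from whatever secret store the component targets,
// instead of hardcoding it in application code.
Dictionary<string, string> secret =
    await dapr.GetSecretAsync("secretstore", "db-connection-string");

// Service A: publish an event through the pub/sub building block; the broker
// behind the "pubsub" component can be swapped without touching this code.
await dapr.PublishEventAsync("pubsub", "orders", new { OrderId = 1 });

Console.WriteLine($"Retrieved {secret.Count} secret value(s) and published an order.");
```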
And then it also has resiliency. Ultimately, we've talked about this: we know that failure is inevitable in these types of systems. So if we take a look, we can actually see that there's a concept of Dapr resiliency. And the reason this was created is that we understand failure is inevitable. You know, a pod can go down, a node can go down, and you might have to retry these direct service invocation calls, or even the calls out to external infrastructure providers. So Dapr provides some built-in resiliency. For example, in service invocation, it performs retries out of the box: you'll have a backoff interval of one second up to a threshold of three retries. However, if you want to define more fault-tolerance policies, like a circuit breaker for example, you can do that by applying a resiliency manifest. And these apply, once again, to a variety of the APIs provided: you can apply them to pub/sub, to retrieval of secrets and state, to those service invocation calls, and so on. So once again, a lot of these cross-cutting concerns are also handled by Dapr.
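A sketch of what such a resiliency manifest can look like; the policy names, thresholds, and target app ID here are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Resiliency
metadata:
  name: myresiliency
spec:
  policies:
    retries:
      retryFivefold:            # named retry policy: constant backoff, 5 attempts
        policy: constant
        duration: 5s
        maxRetries: 5
    circuitBreakers:
      simpleCB:                 # open the circuit after 5 consecutive failures
        maxRequests: 1
        timeout: 60s
        trip: consecutiveFailures >= 5
  targets:
    apps:
      publisher:                # applies to service invocation calls to this app ID
        retry: retryFivefold
        circuitBreaker: simpleCB
```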
So we've talked about the APIs, we've talked about how to consume them, and we've even talked about some of these cross-cutting concerns. But what we haven't really talked about yet is what this API is abstracting, right? We understand that there's a lot a developer might need to put in their code to talk to a state store or a secret store. But how do they do that now? How do they define the infrastructure resources that they want to use? And that's where Dapr components come in. If we take a look here, we can see that there's a variety of Dapr components, over 115 of them, many of which were contributed by the community. We talked about this on a previous slide: the momentum around the Dapr project means the number of components just continues to grow with every release, making Dapr more and more flexible. Let's imagine, in the previous example, when we're using things like the state management API or the configuration API, those can each target a different implementation. So we can say, hey, state API, go call out to DynamoDB, or go retrieve state from Redis, and there's a consistent format for calling out to it. All you do is swap out the component; you don't have to change your application code in order to leverage a variety of infrastructure services.

So let's take a look at this component swappability with a more contextual example. Let's say that you're running on Azure. You could have three services, one of which retrieves some secrets and publishes to a message broker, and then you have at least one subscriber that is storing state as a result. In the case of Azure, you most likely are targeting Azure services. So for example, you could use an Azure Key Vault component for secrets, an Azure Service Bus component for the pub/sub implementation, and Azure Cosmos DB as the component implementation for the state API. Now, without changing your application code, because we have the abstraction of both components and the API schema, we can easily deploy to AWS with very little effort, right? All we have to do is swap out the component implementations for AWS-specific resources like Simple Notification Service or DynamoDB. In addition, you could also be running on GCP and potentially using a more generic component model, where you say, hey, I wanna use HashiCorp Vault, RabbitMQ potentially running inside my Kubernetes cluster, and Redis. So there are cloud-specific components and more generic components that you can leverage. Once again, you could potentially be running on all three, and Dapr really provides that portability layer, but you can also choose the components that make the most sense based on your current architecture or the knowledge within a given team.

So we've talked a lot. I've talked a lot; it is something that I'm quite good at. So we're gonna take a step back here, dive into a demo, and get away from these slides for a bit. We're really gonna focus on three APIs: the service invocation API, state management, and publish and subscribe. So we're gonna walk into a local demo, and I'll show you some observability along with that. Here's what we're gonna be seeing. We'll have three services. One, called the customer service, is going to use the service invocation endpoint to invoke the publisher. And this publisher is gonna do just that: it's gonna publish. It's gonna take a payload and publish it to a pub/sub broker. In this case, we'll use Redis as our implementation running locally. And then we have a subscriber, which will use a declarative subscription to subscribe to that broker; it will essentially take the message it receives and write it to a state store. In this case, once again, we'll use Redis for that as well. So let's dive in and see how to get up and running with Dapr on our local machines.

Okay, so we're now dropped into a Visual Studio Code terminal. And you can see that I've run the very first command to make sure that my local development machine is ready to use Dapr, and that means I run a dapr init using the Dapr CLI. Just to highlight, we're currently on Dapr version 1.11, which is the latest version. I see a few things got installed into my home directory under the .dapr folder, and I can also see that it's setting up some components. What's nice about Dapr is that out of the box, you get a default pub/sub and a default state store component, both of which use a Redis container that's set up locally on your machine running on Docker. I have a placement container that Dapr created; this is for running actors. I have that Redis container that will be used as my pub/sub and state component. And then I have the Dapr Zipkin container, which will be used to give me that distributed tracing out of the box.

So now that Dapr is initialized, we can actually take a look and see those containers running. And we can also see the home directory where my Dapr components have been installed. We get that default pub/sub component and the default state store component. Now, typically people move these into a more accessible directory, so let's take a look at where I've put mine. If we check out the resources folder, I've moved all of the default components into this directory. This is a component manifest. In this case, this component is called pubsub and it's of type pubsub.redis. So this tells Dapr that I'm using a pub/sub component and I'm targeting Redis as the backing service. And then same with the state store: we're also going to be using that local Redis. The only thing that really changed is that the component is now called statestore, and we have state.redis instead of pubsub.redis.
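Roughly, those two default manifests look like this; this is approximately the shape dapr init generates, with the local Redis host and empty password as the defaults:

```yaml
# pubsub.yaml -- the default pub/sub component, pointing at local Redis
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
---
# statestore.yaml -- the same Redis, used as a key/value state store
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```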
Now, one thing I've added to my default components is scopes. Scopes are a way in Dapr for you to control which Dapr applications can access a given component. In this case, because the subscriber is the only application storing state, it's the only one that needs to load this component.

The last thing I'll show you is that in addition to the component types, we also have a subscription. We talked about the fact that Dapr supports both programmatic and declarative subscriptions, and this is what a declarative subscription looks like. We can see that this subscription is called the order subscription, and it's targeting the orders topic. All of the messages received on this topic will be sent to the orders route on my application, and the pub/sub component I'm targeting is called pubsub; remember, that's the component name, so my subscription is targeting that particular component. And then once again, my subscriber is scoped to this, as it's the only application that needs to subscribe to this particular topic on this particular broker.

So now that we've seen the components and the subscription, let's go ahead and see how our applications make use of them. We're gonna focus specifically on the publisher and the subscriber. My publisher app is a simple .NET web API, and I am using the Dapr .NET SDK. We can see here there's a single endpoint, the orders endpoint. Essentially what's going to happen is that the orders endpoint will be invoked, either from another application or directly via an API call, and it will receive an order payload. And all it's gonna do is turn around and publish that to a broker using the Dapr pub/sub API. We can see here we get that typed SDK wrapper, so we don't necessarily see the v1.0 publish API endpoint, but that's exactly what this method on the Dapr client is calling. We still pass in three bits of information to that API call: the name of the pub/sub component we want to target, the name of the topic, and the payload we'd like to publish as a message.

So once that's published, let's look at the subscriber over here, which once again is written in C#, but could definitely be Python or Go and still have that same interoperability. All of these are quickstarts in the Dapr repo, so feel free to get a feel for them. We have another orders endpoint, but this one is specifically targeted by a subscription. If you remember, we took a look at that subscription YAML, and we'll bring it back up for one second: there's a subscription, the scope is the subscriber application, and that means any messages that arrive on the orders topic on the pubsub component will be delivered to this orders endpoint. So once the subscriber receives a message, it's also going to use a Dapr client, this time to save state. This is that v1.0 state API wrapped in the SDK. So again, three bits of information: the state store component name, a key for the key-value pair, which in this case will be an order ID, and the order payload passed in as the value. So, two very easy, very simple applications that make use of two of the Dapr APIs. So we've seen the code, and we've seen the components and the subscription.
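In sketch form, those two SDK calls look roughly like this; the full quickstart-style apps live in the Dapr repo, and the Order type here is illustrative:

```csharp
// A sketch of the two calls just described, side by side for comparison.
using Dapr.Client;

var dapr = new DaprClientBuilder().Build();
var order = new Order(200);

// Publisher's orders endpoint: wraps POST /v1.0/publish/<pubsub-name>/<topic>.
// Three bits of information: component name, topic name, payload.
await dapr.PublishEventAsync("pubsub", "orders", order);

// Subscriber's orders endpoint: wraps POST /v1.0/state/<store-name>.
// Three bits of information: component name, key, value.
await dapr.SaveStateAsync("statestore", order.OrderId.ToString(), order);

public record Order(int OrderId);
```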
So how do we run this thing? What you can do is use the new multi-app run capability, which was introduced in Dapr 1.10 and enhanced in Dapr 1.11. Essentially, what it allows you to do is run multiple Dapr applications, along with their sidecars, without running multiple commands. Traditionally you would run a dapr run for each one of these services, but with multi-app run, you can consolidate all of that into one manifest. A couple of things to highlight: pretty much all of the configuration you would use to run an app via the CLI is available as parameters in this manifest. One of the important elements is letting Dapr know where your resources are sitting; think of these as being your resiliency policies, your components, your subscriptions. And we can put that in common, which means it's shared by all of the apps on execution. We also have to give each individual application a unique Dapr app ID. Now, this isn't strictly required, as defaults are provided, but it's really important, especially for things like service invocation, because your app ID is what you use to tell the Dapr sidecar which app to invoke. We obviously pass in some information around the application protocol, the directory where the application code is sitting, the command to actually run the application code, which will differ across languages and frameworks, and then also which port my application is running on and where I want my logs to be sent, which in this case can go to file or console. I really love that they added console in Dapr 1.11, because you can essentially see all the logs streaming very easily.

For the initial demo, we'll use the multi-app run file specifically to run the publisher and subscriber applications. So what we can do here is a dapr run, passing in the name of the multi-app manifest, which is dapr.yaml, which we can see here, and then we will execute. What we can see now is that we have a Dapr sidecar with the app ID publisher, and the publisher app is up and running. The blue logs are coming from our application, the white from the sidecar, and then we'll see we have the subscriber sidecar along with the subscriber application.

In order to actually test the APIs, we can do a quick check. Right here, what I'm gonna do is post directly to that publisher endpoint. It's going to publish a message, and then ideally the subscriber will receive it and write it to the state store. We were able to execute the post; we can see that we successfully published order 200 to the topic orders using the pubsub component, and then we can see that order 200 was successfully persisted. What's nice is we can actually go and check our Redis instance. Here we can see the contents of our Redis store. You can see I'm using RedisInsight, pointing at the address where my local Redis is running. And I can see that the subscriber app ID stored order ID 200, and the data is that order payload. So our key-value pair was successfully stored in the backing Redis component using the Dapr APIs.

So we've seen the pub/sub API and the state API in action. We have one more to go: the service invocation API. If you remember, in the previous demo I did a direct post to the publisher application in order to kick off the process. What we're going to do now is use the service invocation API to invoke the publisher app from the customer. Think of the customer as essentially just being a generator of orders that are coming in from customers.
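For reference, the kind of multi-app manifest we've been running, now with the customer app included, looks roughly like this; the paths, ports, and commands are illustrative for this three-service demo:

```yaml
# dapr.yaml -- multi-app run manifest (a sketch)
version: 1
common:
  resourcesPath: ./resources      # components, subscriptions, resiliency policies
apps:
- appID: publisher
  appDirPath: ./publisher/
  appPort: 5001
  command: ["dotnet", "run"]
  appLogDestination: console
- appID: subscriber
  appDirPath: ./subscriber/
  appPort: 5002
  command: ["dotnet", "run"]
  appLogDestination: console
- appID: customer
  appDirPath: ./customer/
  command: ["dotnet", "run"]
  appLogDestination: console
```

A single dapr run -f dapr.yaml then starts every app and its sidecar.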
So if we take a look at the application code, essentially what this will do is directly invoke that orders endpoint on the publisher application. We're posting to the publisher app ID on the orders method and sending in an order payload. We should see Dapr service invocation in play, which should then publish messages from the publisher and lead to orders stored in state by the subscriber. I've added this to our dapr.yaml, so we now have three applications that will get kicked off whenever we do that multi-app run. Without further ado, let's finish this up by running all three at one time.

Our subscriber, our customer, and our publisher are now up and running. So now we're starting to see output. We can see that multiple orders are coming in and being sent to the publisher, that the publisher is receiving them, and that the subscriber is then storing them. This will happen about every five seconds as a new order comes in. So let's take a look at Redis and see what's happening on the backend. Returning to RedisInsight, we're able to see that 10 orders were successfully stored in state, meaning that our order generator did in fact successfully post using the service invocation API.

Now I'm in Zipkin, once again on my local machine, having done nothing explicit to configure anything related to tracing. What we can see is that immediately we're able to see a dependency graph between the services we were running. We can click in and see more granular information about the traces, the calls, and the number of errors. We can look over a period of time, maybe the last day, and run a query to see various calls and traces: an entire call stack all the way from the customer application to the publisher to the subscriber to the state store. So once again, all of this just out of the box, which is pretty awesome in my book.

We saw the demo running locally. Now I wanna dive a little deeper into a few of these Dapr APIs to provide some additional context. We'll start with service invocation. Keep in mind that the Dapr application ID is what allows individual applications to communicate with each other, putting the burden of service discovery on the Dapr runtime. So if we start with the customer app: the customer app made a gRPC call targeting the publisher service. But initially, this call goes to the local Dapr sidecar running next to the customer app. Dapr then discovers the publisher's location using the name resolution component running on the given platform, which in the case of my local machine was mDNS. Dapr will then forward the request to the publisher's Dapr sidecar. Keep in mind this is all captured in Dapr traces, logs, and metrics, in OpenTelemetry format, which can be sent to whatever monitoring backend makes sense for you. The publisher service's Dapr sidecar forwards the request to the specified endpoint within the application code, and the publisher then sends a response back to the customer application.

One thing we didn't touch on quite as much is the fact that we can do mTLS using Dapr, so you can authenticate calls between Dapr applications as well as between Dapr apps and the Dapr control plane. And in the event of any call failures or transient errors, there is that service invocation resiliency feature that performs automatic retries. Once again, you can create your own custom resiliency policies as well.
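A sketch of the customer's call using the .NET SDK; the app ID and method match this demo, and the Order type is illustrative:

```csharp
// The Dapr app ID does the addressing; the sidecars handle discovery,
// mTLS, retries, and tracing along the way.
using System.Net.Http;
using Dapr.Client;

var dapr = new DaprClientBuilder().Build();

// Wraps POST http://localhost:<dapr-http-port>/v1.0/invoke/publisher/method/orders
await dapr.InvokeMethodAsync(HttpMethod.Post, "publisher", "orders", new Order(200));

public record Order(int OrderId);
```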
And then a couple of other things are worth calling out, like the concept of middleware. You can do things like Dapr OAuth middleware, which enables you to use OAuth authorization on Dapr endpoints for your web APIs. And you can also create access policies in order to dictate what a calling app can do on a called app.

Stepping over to publish and subscribe, there are a couple of important things to call out that we didn't necessarily highlight in the demo. One is that the pub/sub API uses CloudEvents by default, CloudEvents being a packaging standard for messages. This follows another cloud-native best practice, but it can be disabled, for example if you're interacting with legacy systems that can't receive CloudEvents. There are over 18 pluggable components for the pub/sub building block, and there are ways that you can add additional configuration and security settings. For example, you can limit which applications can publish or subscribe to a given topic through topic scopes. You can also set things like dead letter queues and resiliency policies on the pub/sub API, and you also get that traceability. We saw that the OpenTelemetry traces and metrics go way beyond just service-to-service and extend to the pub/sub API and other building blocks. One thing that is worth calling out as well is that the subscriber gets at-least-once message delivery: the broker will make sure that each subscriber receives a message placed on the pub/sub broker at least once.

Last but not least, I wanna briefly touch on the state management API and some of its features. Within the state management API, you have the ability to choose things like strong consistency or eventual consistency, setting specific requirements on certain operations. You can do optimistic concurrency control with ETags, and you can even do transactions. There's also the ability to set a state time-to-live: applications can set a time-to-live per state entry, and those entries essentially won't be retrieved after they've expired. And then ultimately, you can also implement things like state encryption, which allows for automatic client-side encryption of application state, with support for key rotation. We didn't cover those today, but once again, it's very powerful that we get this flexibility of state store interoperability and that component model with a consistent API, and there are definitely some additional features the state management API provides. So once again, feel free to explore.
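To make those state features a bit more concrete, a sketch with the .NET SDK; the store name, keys, and values are illustrative, and TTL only applies on stores that support it:

```csharp
using System;
using System.Collections.Generic;
using Dapr.Client;

var dapr = new DaprClientBuilder().Build();

// Strong consistency and first-write-wins concurrency on a save.
await dapr.SaveStateAsync("statestore", "order-200", new { Qty = 1 },
    new StateOptions
    {
        Consistency = ConsistencyMode.Strong,
        Concurrency = ConcurrencyMode.FirstWrite
    });

// Optimistic concurrency with ETags: read the current ETag, then save only
// if nothing else has modified the entry in the meantime.
var (value, etag) = await dapr.GetStateAndETagAsync<object>("statestore", "order-200");
bool saved = await dapr.TrySaveStateAsync("statestore", "order-200", value, etag);
Console.WriteLine($"ETag save succeeded: {saved}");

// Per-entry time-to-live via request metadata: the entry is treated as
// expired once the TTL elapses.
await dapr.SaveStateAsync("statestore", "session-1", new { User = "kendall" },
    metadata: new Dictionary<string, string> { ["ttlInSeconds"] = "3600" });
```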
So I hope you're excited. Now it's time to jump into the exact same demo application, but with a slightly different scenario. Instead of running Dapr locally on my machine, we will be targeting a Kubernetes deployment on Google Cloud Platform using Google Kubernetes Engine. And instead of using Redis as the component implementation for pub/sub, we're gonna switch that out with Kafka. We won't make any application code changes, but we'll see how we can run Dapr on Kubernetes and how we can make use of that swappable component model.

Okay, so welcome back. We are back in Visual Studio Code, and we are ready to dive into this Kubernetes demo. One of the first things that I wanna show you is just a few resources that I've already deployed to the cluster. We can see here I have a Kafka namespace; I actually deployed Kafka as a Helm chart to this namespace for use as our pub/sub broker. And then we have a Redis namespace, which is hosting a Redis deployment that will be used as our state store implementation. And then finally, I'm gonna show you the dapr-system namespace, where we have the Dapr control plane up and running. The two major components to call out here are the sidecar injector, which is responsible for ensuring that any new deployments that have Dapr enabled get a Dapr sidecar, and the Dapr operator, which handles things like component updates.

So how did I actually deploy Dapr on this cluster? It'll look familiar, and it's pretty easy: it's a dapr init -k. Essentially, all you're saying here is that you wanna initialize Dapr targeting a Kubernetes cluster instead of your local development environment.

So what are we changing between the previous demo and this demo? The main thing to call out is really our component implementations. If we take a look at our resources here, we can see the manifests that we'll be deploying to our Kubernetes cluster. Instead of using a component for pubsub of type pubsub.redis, we're going to use pubsub.kafka. We're gonna pass in some information about the Kafka instance running in our cluster, and the same goes for our state store component. We are still targeting state.redis; however, we're changing the information around the host and the password in order to connect to the Redis running in the cluster. One thing to highlight that we haven't touched on is that you can actually make use of Dapr secret stores within a component. What this is telling Dapr is that the Redis password is stored in Kubernetes secrets, so instead of putting that plain-text value in the manifest, Dapr will actually go and retrieve the Redis password from the Kubernetes secret store. And then last but not least, we see the same subscription that we had before; we're just now deploying it to the cluster.
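Roughly, those two component manifests look like this; the hosts, ports, and secret names are illustrative for this cluster:

```yaml
# pubsub component, now backed by the in-cluster Kafka
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
spec:
  type: pubsub.kafka
  version: v1
  metadata:
  - name: brokers
    value: "kafka.kafka.svc.cluster.local:9092"
  - name: authType
    value: "none"
---
# statestore component, with the password pulled from a Kubernetes secret
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: "redis-master.redis.svc.cluster.local:6379"
  - name: redisPassword
    secretKeyRef:
      name: redis          # Kubernetes secret name (illustrative)
      key: redis-password
```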
Okay, so now's our opportunity to deploy everything and make sure it runs as expected. We'll go ahead and do a kubectl apply, and what we can see is that we have a service and a deployment that were created for the publisher, the same for the subscriber, and then our two components and our subscription. In an ideal world, we should see that they're up and running and that they both have a Dapr sidecar. Let's give that subscriber one more second... now it's ready. There are two containers running within the pod: one is the application workload, the other is the Dapr sidecar.

So how did Dapr actually know to inject these deployments with Dapr sidecars? Let's take a quick look. We will describe one of our pods, and when we do, we should see a few important annotations. We can see them here: the Dapr app ID, the Dapr app port, and most importantly, we can see that dapr.io/enabled is set to true. That's what indicates to the Dapr sidecar injector that this application is interested in becoming Dapr enabled. Awesome.

Another thing we can do is get the logs for one of our pods and make sure that everything is up and running as we expect. We can go ahead and check out the application code logs first; we can see here that everything is running as expected. And then we can do the same, this time for the daprd sidecar logs. Here we can see a ton of output from the Dapr sidecar, and really the most important thing we want to look for is that the components we expect are loaded. It took me just a second, but I was able to find that important piece of information, where we can see that the pub/sub component was loaded, and that's exactly what we're looking for. So let's go ahead and get the logs for the subscriber as well, and we're also going to look at its daprd logs. Once again, this is the first place I go anytime I'm trying to make sure that my components are appropriately loaded for each of the respective applications. We'll do a quick search here: we see that the Kafka broker was loaded, and in this case, we're also looking for a Redis state store. If we search for Redis, we can see that the component loaded for the state store as well. So everything should be up and running.

So now we need to test this. Before, we posted to the publisher service on a local address, but now we should post to the one running in Kubernetes. If we get the services we created, we have a service of type LoadBalancer here that we can swap in. And what we should see when we make this post is the subscriber receiving it, so let's go ahead and tail the subscriber logs. Okay, let's send this request... and we see that the order was successfully persisted, which indicates to us that the publisher was successful, the message was received on the broker, and the subscriber was able to retrieve it.

So with our demo application successfully deployed to Kubernetes, let's dive into one of the newer APIs, which happens to be my personal favorite: the Dapr workflow API. Dapr Workflow makes it easy for developers to write business logic and integrations in a reliable way. Since workflows are stateful, they support long-running and fault-tolerant applications, which is ideal for orchestrating microservices. The workflow building block works seamlessly with other building blocks like service invocation, pub/sub, state, and bindings. So now we're gonna dive into a little more detail on the core concepts, how workflows operate and interact with the Dapr sidecar, and then we'll show a quick demo.

With Dapr Workflow, you essentially write a series of what are called activities and compose those activities together to make up a workflow. The workflows themselves describe how these actions are executed and the order in which they're executed. So workflows themselves don't make any external service calls or do complex computation; instead, they delegate to the activities, which perform the work. Dapr workflows also allow you to schedule reminder-like durable delays that can span minutes, days, or even years. Imagine a scenario in which your application needs to wait for a certain period of time, say five days to receive an approval, or five days for a user to perform some type of verification, and if they don't, you want to take a specific action. That's what timers allow you to do.

Workflows are able to do all this by maintaining an append-only history log of all of their operations, using an event sourcing pattern. Because of this, you don't necessarily want the history of one particular workflow to grow unbounded, which can happen if you're executing thousands of activities within a given workflow. That's where child workflows come into play. A workflow can schedule child workflows that have their own instance IDs and their own history, which really helps distribute tasks across workflow instances.
And then last but not least, external events. This is a newly added capability as of Dapr 1.11, and essentially what it allows your workflow to do is wait for an external event. The workflow can schedule a wait-for-external-event task that subscribes to a particular event and awaits it. The workflow will block execution until the event is received and can then take action based on the result of that event. A good example of when you might need this is for human intervention during a workflow execution.

So how does the Dapr workflow running in your application code actually interact with the Dapr sidecar? The Dapr sidecar is really responsible for the scheduling and management of your workflow and activity execution. It's really the execution engine, whereas the workflow itself is something you write in your application code using the Dapr workflow authoring SDK. The Dapr engine stores an event stream of all of the activity execution results and things of that nature, which can then be replayed as needed and helps ensure that stateful reliability. When your workflow application starts up, it uses the SDK to send a gRPC request to the sidecar, and it then gets back a stream of workflow work items. These could be anything from "start a new workflow" to "schedule a particular activity", and in response, it returns the results back to the Dapr engine, which stores them in state.

I'm so sad to say that we are on our last demo of the session, but I'm super excited to show off a little of how Dapr Workflow works at a very primitive level. We're gonna start with a very basic hello world example. If you take a look at this post request, you can see that we will actually target a hello world workflow, and we're going to use the start API, passing in an instance ID, which in this case is 10, but could be an order ID or another significant piece of data that represents that particular workflow instance. When we do that, we'll pass in a single input, a name, and essentially we want the workflow to create a greeting and then return it, and then we'll consider the workflow complete. Super basic, just a single activity, but we can imagine how this would grow and evolve as you add more activities and more complex logic. So yeah, let's dive into it.

Okay, so we are back in Visual Studio Code to check out a simple hello world workflow example. The first thing I wanna call out when you're using Dapr workflows is, once again, that an authoring SDK is required; right now there's support for C# and Python. So let's check out the Program.cs. I just wanna call out one important thing that you need to do: you're going to register your workflow, which in this case is called the hello world workflow, and you're also going to register your activities. The activity will perform all of the computation, business logic, and any external calls.

Let's check out that workflow. We have a workflow context, which provides information about the given workflow that we're in. And then we also have a workflow which expects a particular input and a particular output; in this case, the input will be a string and the output will be a string. And then we're calling one activity: we use the call activity async method, passing our input of type string, invoking that particular activity, and returning the result.
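In sketch form, the registration and the workflow shape look roughly like this, assuming the Dapr.Workflow .NET authoring SDK; the activity itself is sketched a bit further down:

```csharp
using System.Threading.Tasks;
using Dapr.Workflow;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddDaprWorkflow(options =>
{
    options.RegisterWorkflow<HelloWorldWorkflow>();     // the orchestrator
    options.RegisterActivity<CreateGreetingActivity>(); // the unit of work
});
builder.Build().Run();

// The workflow only orchestrates: it takes a string in, delegates the actual
// work to the activity, and returns the activity's result as its output.
public class HelloWorldWorkflow : Workflow<string, string>
{
    public override async Task<string> RunAsync(WorkflowContext context, string name)
    {
        return await context.CallActivityAsync<string>(
            nameof(CreateGreetingActivity), name);
    }
}
```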
So if we look at that greeting activity, you'll see that an activity also has a particular input and a particular output. In this case, we take in a string name and return a string greeting. All we're doing here is selecting a random greeting and appending it to the name that was passed in, and then we return that as the result from the activity back to the workflow. The activity is where all of the error handling lives, and everything is then propagated back up to the workflow itself, which dictates what the next action should be based on the returned result.

So let's see this in action. What we're gonna do here is a dapr run, and we're gonna kick off the workflow instance, passing in some of that basic information like the Dapr app ID, the application port, and the path to our resources, including the state store that will be used for workflow state persistence. When I do this, we should see some pretty good output. The major things to call out are that you can see we're registering a component for the Dapr workflow engine and that we're initializing the Dapr workflow component. And then we can see that we established the work item stream, which is really critical for letting us know that our application and the Dapr sidecar are up and running and we're ready to go.

So let's go ahead and trigger the workflow using a simple HTTP post. You can see here we're invoking that workflows API using the built-in Dapr workflow engine. We're passing in the name of the workflow that we want to start, and then we're passing in a unique instance ID. This could be 10, it could be 100, it could be a particular order ID or something with more business context, or it could be a GUID, which is what we'll use here. And then we pass in a simple name, with the goal of getting back a greeting response. We kick that off, we can see that it was accepted, and then we can get the status and see that it completed immediately. We sent in Kendall and got back "Konnichiwa Kendall". We could do this again and get a different response.

We can see here that there's a runtime status. One thing that you can also do with Dapr workflows is return your own custom status, which helps provide more context about where your workflow is in the processing pipeline. And obviously, if you had a longer workflow, you could poll this iteratively to make sure that the workflow is still running and still healthy. We can see here, too, that we're getting log output from the workflow instance: it got kicked off and then it finished with a completed status.
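And for completeness, a sketch of that greeting activity; the greeting list here is illustrative:

```csharp
using System;
using System.Threading.Tasks;
using Dapr.Workflow;

// An activity has a typed input (the name) and a typed output (the greeting),
// and it's where the real work happens.
public class CreateGreetingActivity : WorkflowActivity<string, string>
{
    private static readonly string[] Greetings = { "Hello", "Hola", "Konnichiwa" };

    public override Task<string> RunAsync(WorkflowActivityContext context, string name)
    {
        // Pick a random greeting and append the name that was passed in; any
        // error handling around external calls would live here too.
        var greeting = Greetings[Random.Shared.Next(Greetings.Length)];
        return Task.FromResult($"{greeting} {name}");
    }
}
```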
So with the third and final demo complete, it's time for us to wrap up today's session and talk about ways that you can continue to stay engaged and involved if you're interested in learning more. So what did we talk about today? We talked about a lot of the benefits that Dapr provides when building microservice-based applications. One of those benefits is the ability for developers to focus on writing business logic. Instead of creating complex applications full of plumbing code to handle a lot of these distributed application concerns, developers can adopt a consistent Dapr API standard and offload that responsibility to Dapr. In addition, developers are empowered to follow best practices by using a lot of the common patterns that we see in distributed application development.

In addition, you gain a ton of flexibility, right? Dapr is completely agnostic from a language perspective, empowering polyglot development, and it also provides the swappability of components, making infrastructure lock-in a thing of the past. With Dapr we get out-of-the-box cross-cutting concerns around things like resiliency and observability, and we can also adopt Dapr incrementally. So if you're interested, after today, in trying out just the state management API or just the pub/sub API, that's totally available to you and totally okay. If you want to use all of the APIs, once again, all the better. And then last but not least, we get to provide consistency for developers, making it easier for them to get up and running quickly building new, modern applications in the cloud.

In addition, today we've also covered how quickly the Dapr project is growing. There have been significant contributions over the past couple of releases, with a couple of new APIs being introduced and also the stabilization of the configuration API. So the Dapr APIs will continue to evolve and grow to make sure developers are empowered to build applications more effectively. In addition to the inclusion of new building block APIs, there have also been other significant highlights from the past couple of releases, Dapr 1.10 from February 2023 and Dapr 1.11, which launched in July of 2023. A couple of major highlights: a lot more granular metrics, with the metrics around service invocation enhanced and new metrics around actors, timers and reminders, and even resiliency policies. Resiliency also went stable in a recent release, which is really exciting. And we also see a couple of other major features, like the ability to invoke non-Dapr endpoints using HTTP. So we talked about the Dapr service invocation API, but now Dapr allows you to communicate with endpoints that don't use Dapr. This still provides you with that consistent API, but also the ability to use resiliency policies, get tracing enabled for observability purposes, and even use access control lists. And we'll continue to see additional advanced capabilities in future releases with Dapr 1.12.

The momentum of Dapr contributions and the growth of the project have continued to empower more and more organizations to adopt Dapr and to use these APIs as a standard, so it's great to see the Dapr user base continuing to grow. In addition, there's also been a recent case study published through the CNCF about DeFacto's use of Dapr, so definitely check that out if you're interested in seeing a more real-world example.

Finally, a list of resources if you want to continue diving into Dapr after our session today. Definitely check out the Dapr website and go through the Dapr quickstarts, many of which inspired today's demos. There's also code available at Diagrid Labs, especially if you wanna dive deeper into the workflow example. Join the Discord community; there's a group of over 6,000 on Discord that talk about Dapr, so we'd love to see all of you there. Check out our YouTube channel as well. You can also follow @DaprDev on Twitter to keep up with the latest updates on the project. And if you're interested more specifically in running and managing Dapr at scale, definitely reach out to us at diagrid.io.