Well, hey, nice to meet you. My name is Kendall Roden, and this is Alice Gibbons. We both work at Diagrid and have been involved in the Dapr project pretty much since it started back in 2019. Who here has actually played with Dapr outside of the context of today's conference? OK, so a few. And for some of you, is this an entirely new concept? Never really heard of the project? Awesome. Perfect. Well, you're in the right place. We love talking about it. And you're going to have to shove us off the stage in 24 minutes, because we always like to jam a lot of content into less time than we have. So we'll go ahead and kick it off, Alice, if you want to just move us over. So yeah, this is something I just wanted to bring up. We'll pause here. How many of you at some point in the past, I don't know, one to three years have had conversations internally about different kinds of hosting platforms? Kubernetes? Should we go CaaS? Should we go FaaS? How many of you have had that? Yeah, probably all of you. Debates over which hosting platform to use, which cloud provider to use, how abstracted you want the infrastructure your applications run on top of. But what's typically missing from that conversation is probably also what's kind of missing from this conference, which is application developers. Can we agree? Yeah. Clearly it's a popular topic, because all of you showed up here today. So ultimately, there's a gap in the conversation. We've made code a lot more modular in the sense that we can put it into a container, run it on Kubernetes, and get consistency across cloud platforms and Kubernetes providers. But ultimately, that doesn't actually make your application code portable.
So if you're using a variety of cloud-hosted services, say you're running on AWS and want to use S3 for storage, when we bring in all of those libraries and SDKs to communicate with these specific services, we're actually locking ourselves in at the application layer. Containerizing that doesn't solve the application portability problem outside of the DevOps space. So what we really need to focus on, and what we see a trend towards in this industry and within the application development space, is creating an API layer through which developers can consume infrastructure services and communicate with other applications: a focus on application portability, and a layer through which platform engineers can produce infrastructure that developers can easily consume in an abstracted way without losing productivity. That's essentially what Dapr was created to solve: providing a unified programming model through which developers can consume underlying infrastructure and communicate with other microservices or applications, running on top of the hosting platform of your choice. It's really that consistent, unified set of APIs that you can use to develop distributed, cloud-native applications. And we'll dive into a little more of what that looks like in practice on this next slide. OK, so we saw this earlier. How many of you were here earlier for the Testcontainers talk, Mauricio's talk? OK, perfect. So you probably recognize this slide. I'm just going to do a quick overview of the core Dapr architecture. It really starts with any code, any framework. Earlier today, in that session, we talked about Java and Spring Boot. But really, Dapr is accessible from applications written in any language, against any framework. And that's where the polyglot support comes in.
And then basically, through a set of HTTP or gRPC APIs, you can access a set of building blocks. These building blocks are really just common patterns used to build distributed applications. We just talked about choreography, orchestration, and saga patterns; a lot of that comes into play here. How do I do distributed communication across services? How do I get tracing and observability out of the box and implement resiliency policies? How can I make sure I directly invoke other services in a secure way using mTLS? A lot of these capabilities are provided out of the box through these Dapr best-practice building blocks. And then last but not least, there's a real focus on portability at the infrastructure layer. Ultimately, 90% of users today in production run Dapr on top of Kubernetes. However, you can run it on your local machine, or on a set of VMs, so it isn't isolated to Kubernetes. All right, awesome. So what does this look like in practice? The other slide is very theoretical. When we actually look at the APIs, Dapr runs on top of Kubernetes as a sidecar, essentially. You have your application code, you inject the sidecar into the same pod as your application, and then you access a series of APIs through that sidecar on localhost. So here we can see the set of standard APIs. All of them are going to be communicating with that Dapr sidecar over localhost within the Kubernetes pod. And then you basically have a standard set of APIs you can invoke: standard in the sense that they all follow a very consistent pattern. If I want to invoke another service, instead of calling it directly, I can use the invoke API, and the Dapr sidecar will facilitate forwarding that request, using mDNS on your local machine or the DNS within your Kubernetes cluster. That's a pluggable element there. There's also a state API.
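As a concrete sketch of the addressing scheme just described: application code talks to its own sidecar over localhost, and the URL shapes below follow Dapr's documented HTTP v1.0 API. The port (3500 is the default), app ID, and store names are illustrative assumptions.

```python
# Sketch: how app code addresses the Dapr sidecar over localhost.
# URL shapes follow Dapr's documented HTTP v1.0 API; the port and
# names are illustrative (3500 is the sidecar's default HTTP port).

DAPR_PORT = 3500
BASE = f"http://localhost:{DAPR_PORT}/v1.0"

def invoke_url(app_id, method):
    """Service invocation: the sidecar resolves app_id (via mDNS
    locally, or cluster DNS on Kubernetes) and forwards the call."""
    return f"{BASE}/invoke/{app_id}/method/{method}"

def state_url(store, key=None):
    """State API: GET/DELETE take a key; POST (save) targets the store."""
    return f"{BASE}/state/{store}" + (f"/{key}" if key else "")

# e.g. POST invoke_url("payment-service", "charge") with a JSON body,
# or GET state_url("statestore", "inventory") to read a value.
```

Swapping the backing infrastructure never changes these URLs; only the component behind the store name changes.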
So if my application wants to maintain its own state in a backing state store, instead of implementing the Redis SDK or talking directly to S3, I can call this generic state API, and the Dapr sidecar will facilitate the connection to whatever backing store I choose. And so on and so forth with the other APIs. Today, we're going to focus on workflows: the Dapr workflow SDK, the Dapr workflow API, and the way it can empower you in your development journey. So how does the API actually target these backing infrastructure resources? It uses the concept of Dapr components. As I mentioned, I want to talk to Redis, but I don't want to use the Redis SDK, because ultimately, what if I migrate to a different platform and want to use something on Azure like Cosmos DB, or move to AWS and use DynamoDB? And maybe locally I want to use something running in a container. I can actually just swap out the backing component. That state API will point to any number of state components, and without changing my application code, I can swap out the component manifest and point the API at a different infrastructure service. All right, who's ready to get into Dapr workflows? Woo, yay, okay, awesome. So we're going to dive in and talk specifically about one of the newer APIs provided by Dapr, introduced in Dapr 1.10 (we're currently on the 1.12 release), called Dapr Workflows. All right, so I want to go through, and I love, this is my animation fail, we got challenges early, so apologies in advance, but I want to walk through a traditional business process that can resonate with almost everyone. Pretty easy to imagine, right? Let's say I have an order process that I want to implement in a code-first way.
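Before walking through that order process, here is what the component swap just mentioned looks like on disk. This is a sketch of a Redis state component manifest following the standard Dapr component schema; the name, host, and values are illustrative assumptions:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore            # the store name the state API targets
spec:
  type: state.redis           # swap this (and the metadata below) to
  version: v1                 # retarget the same API at another store
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
```

Pointing the same state API at, say, Cosmos DB or DynamoDB means changing `spec.type` and its metadata entries, with no change to application code.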
So I want to start my process, and the first thing I'm going to do when someone submits an order is check inventory, right? Is this available in inventory? Can I fulfill this order? I'm going to need to coordinate some kind of transaction or call to some type of database in order to retrieve the inventory and see if I can fulfill that order. Once we've determined whether there's sufficient stock: if it's a no, the workflow is going to end, right? That process is over, we can't fulfill the order. Sorry, it's out of stock, please try again later when we've restocked. And if there is sufficient stock, then I want to actually process payment. Based on whether I'm able to process payment successfully, either the workflow will fail, in which case we'll tell the user, I'm sorry, you don't have enough money, you've been shopping too much. Or if the payment succeeds, we're going to go ahead and update that inventory. But before we do so, let's think: have you ever booked an airline flight, gotten through the whole process, and then been told that there are no more seats on the plane? We know the whole, oh, that's because it's eventually consistent, and yada yada. Well, let's say in this case we have to go back and check inventory, because what we were selling was very popular: AppDeveloperCon seats. And by the time you actually completed the order, there were no seats left, right? So we're going to go check inventory. If there's sufficient stock, we'll fulfill that order and complete the workflow. And if there is no longer sufficient stock, we're going to refund their payment, right? Because, hey, sorry, you can't actually get in, it's too popular. So when we think about this process and modeling it in a code-first way, there are a ton of challenges developers are responsible for in order to coordinate this entire process, right? So what are some of those challenges? Well, we want reliability, right?
We want consistency of our database transactions and our state management. We want to know that if the workflow fails, for example, if I process payment successfully but then can't access the database to update the inventory, then obviously I want to be able to roll back. If for whatever reason the database isn't accessible, I don't want that workflow to just complete, right? It's not complete. I want it to be able to wait until that state store or database comes back online, resume the entire workflow, and produce the output I'm expecting. And service coordination, right? What if my process payment activity needs to happen in another microservice across the network? How do I actually make sure all of this happens within a single atomic transaction? That's ultimately what the workflow API in Dapr allows you to do. So I want to talk a little bit about some of the core concepts within Dapr workflows, and these are pretty familiar primitives. How many of you have actually worked with some kind of workflow engine at the code level? Okay, awesome. And if you haven't, most likely you've still implemented some kind of transactional pattern to try to achieve some of these capabilities, just maybe not in a way that's quite as concise or out of the box. Within Dapr workflows, the biggest thing to consider is that you're going to have a workflow object, but the workflow itself doesn't perform any complex computation or external API calls. All of that is delegated to the smallest unit of work within the workflow, which is called an activity. If we look back on that previous slide, those blue boxes will all be separate activities that, within Dapr workflow, can be orchestrated within the workflow context, right?
In a particular sequence, using whatever workflow pattern makes sense for the workflow you're writing, right? This can be fan-out/fan-in, the monitor pattern, a variety of different workflow patterns, all supported because this is flexible and agnostic to the type of workflow you're running. Additionally, durable timers. Dapr workflows include the concept of durable timers: an arbitrary delay or reminder up to a year long, right? So think, hey, maybe I have a product trial, and I let someone in for 30 days, and I want that workflow to proceed and close out the trial after 30 days. The workflow will unload itself from memory while it's waiting for that event or reminder to fire, and then proceed, right? So for up to 30 days, for up to a year, these durable timers allow you to run workflows that are flexible based on timelines. Additionally, you can create child workflows. Within the context of a particular workflow, maybe an order workflow, you have another process around procurement or shipment, right? You could start a child workflow, and what this allows you to do is make sure that the workflow state and the number of tasks being executed aren't so long-running that you end up with so much in the state store that the replay takes a lot longer, right? That's just a best practice. And last but not least, another really critical capability is being able to wait on external events. Let's say I'm creating some kind of game simulation, right? I want to be able to wait for a particular action a user takes in order to then work out who's the winner of this particular trivia round. I need everyone's answers to come in before I can announce a winner. I can wait for a particular event.
That event can be fired off via an API call to the Dapr sidecar, and then ultimately I can close out the workflow based on whatever event was received. All right, awesome. So when we think about this from a technical perspective, there are a few things to consider. Like I said, Dapr runs as a sidecar: if you're running locally on your machine, it's just running as a process; if you're in Kubernetes, it's that sidecar. Your workflow is actually your code, right? You're completely responsible for authoring your workflow, and we provide a set of Dapr workflow SDKs. Right now these are available in .NET, Python, and Java, with more to come. Whenever you fire up your application using that workflow SDK, it's going to make a call to the Dapr process and initiate an RPC stream. Essentially, through gRPC streams, the sidecar is going to communicate with your application and manage the execution and sequencing of the workflow within your application code. The main thing to call out here is that your application defines the execution steps, and the Dapr sidecar facilitates the management and execution of those steps within your application code. And you can, yeah, just click through. Okay, so here we can see an example of what it would look like to actually kick off a workflow. We'll make a single API request to start a workflow, and it'll generate an instance ID. Every workflow has a unique instance ID, which can correlate to a particular order ID; it can be business-centric, or it can be arbitrary and randomly generated by the Dapr sidecar. And basically, the Dapr sidecar will then notify your application: hey, start this workflow, resume this, complete this based on a reminder. It's handling all of the scheduling and the management.
And the way it does this is by using an append-only event stream, which is how it notifies your application as to what to do, right? What's really nice is that when the workflow unloads itself from memory, it will replay itself whenever the workflow continues. This is optimal because it's resilient as well, right? If you retry, it'll load up all of the activities that have already executed and then pick up from there and run to completion. So that's the way it works. In this case, Alice will show you a demo where the workflow state is actually being maintained in Redis, and that's where it's reloading and replenishing itself with every action that occurs. So thank you so much. That was a lot of content. Alice is going to get into the nitty-gritty in code. Are you excited? You ready to see it? Okay, here we go. Awesome, yeah. Can you guys hear me? Yes, okay. Yeah, so it wouldn't be AppDeveloperCon without showing some code, right? That's what I hear. So we have to do that. But first I just want to talk through a little bit of how this works and the mechanics of it. As Kendall mentioned, we're going to be running the checkout workflow today. This sits within the checkout service, and it is not the workflow engine, because that's provided externally by Dapr, by the Dapr process, or the Dapr sidecar in this case. You can see on the slide here, I have the checkout workflow, and it's registering a number of activities: the notify activity, the check inventory activity, process payment, update inventory, and then refund. The one I really want to call out here is the process payment activity, because typically, right, we would be calling an external service to process the payment.
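Before the demo, the replay mechanic Kendall just described can be illustrated with a toy sketch. This is plain Python, not the real Dapr engine: completed activities are recorded in an append-only history, and on restart the driver replays recorded results into the workflow instead of re-executing them. All names here are made up for illustration.

```python
# Toy replay sketch (NOT the real Dapr engine): an append-only
# history of completed activity results. On (re)start, the driver
# feeds recorded results back into the workflow generator; only
# activities with no recorded result actually execute.

def run(workflow, history, crash_after=None):
    """Drive `workflow`, a generator yielding (name, fn) activity
    requests, replaying from `history` and appending new results.
    `crash_after` simulates the process dying after N executions."""
    gen = workflow()
    executed = 0
    try:
        request = next(gen)
        while True:
            name, fn = request
            if executed < len(history):
                value = history[executed]       # replayed, not re-run
            else:
                if crash_after is not None and executed >= crash_after:
                    return None                 # simulate a crash
                value = fn()                    # really execute
                history.append(value)           # append-only log
            executed += 1
            request = gen.send(value)
    except StopIteration as stop:
        return stop.value

def checkout():
    stock = yield ("check_inventory", lambda: 100)
    paid = yield ("process_payment", lambda: True)
    return f"stock={stock}, paid={paid}"

history = []
# First run "crashes" after the first activity completes:
assert run(checkout, history, crash_after=1) is None
assert history == [100]
# Rerun: activity one is replayed from history, activity two runs:
assert run(checkout, history) == "stock=100, paid=True"
assert history == [100, True]
```

The rerun never re-executes `check_inventory`; it re-derives the workflow's position from the log, which is why a restarted pod can pick up exactly where it left off.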
In this case, we're gonna be calling out to an external HTTP endpoint that's non-Daprized; it's just the Square payment service, called directly from the application. All right, let's see what that looks like. So if I jump into VS Code here, how's this in the very back? Can everyone see this? Is it big enough? Okay, let me make it a bit bigger. All right, it's big. Okay, so this is a .NET application. As mentioned, the Dapr SDKs for workflow are also offered in Java and Python. We're going to start off here with how we utilize the Dapr workflow SDK. We're importing the Dapr workflow SDK, and then we're calling AddDaprWorkflow in my program file. What this is going to do is register the checkout workflow, and then I'm going to register a number of activities that I want to be part of this workflow. These are all important, because the activities are where that business logic is going to happen, right? The workflow itself doesn't contain any of the logic; it just orchestrates the activities where that business logic lives. This is also what's actually going to initialize that gRPC stream from within my application: it reaches out to that Dapr sidecar, or that Dapr process, and from there it's bi-directional communication. So let's take a look at the checkout workflow. The checkout workflow has an input and an output. All workflows have to be deterministic: given the same input, they always produce the same output. And I'm inputting a customer order. In this case, we're ordering Dapr T-shirts, right? We all have Dapr T-shirts here. They sell out quickly.
So you'd better go get yours at KubeCon. And then we're returning a checkout result, specifically whether the order was processed or whether it failed. Each of these activities can then be called from within this orchestrator workflow, right? First we're calling this notification activity; all that really does is print a message. Not very exciting. This next one here checks the inventory. This is the one Kendall mentioned: we'll check that database, right, to make sure we have enough of the Dapr T-shirts everyone wants. I'm going to jump over to one of these activities now just to take a quick peek at what it looks like. You can see I'm also utilizing the Dapr client from within this code, and I'm reaching out to the Redis state store to check that inventory. Now, one thing you'll notice about this code is that there's no Redis SDK in here, okay? You don't see any mention of Redis; there's no SDK, right? All I'm using is the Dapr client to reach out to that database and get the data back. This GetStateAsync call is going to reach out to that state store and return the value, the number of T-shirts in this case. And I can also check that this is actually running, because if you look at my state components (Kendall mentioned Dapr components before), this is how we're talking to that Redis state store behind the scenes, right? We're using a YAML file in this case. We all love YAML here, it's KubeCon. And it's calling out to my Redis, which in this case is living on my localhost; it's just a Docker container. And it's saving and getting state from there. Heading back to the workflow itself, one of the other interesting activities here is the process payment activity.
So this one is going to call that external service. Again, I'm reaching out to that external Square endpoint, and because I'm using Dapr external service invocation here, I'm getting things like metrics, retries, and tracing built in. Jumping into the process payment activity, you can see, let's scroll down here, this is also using the Dapr client, but it's calling out to the Square payment service. This goes to show that within this workflow, you have the opportunity to use a number of the different Dapr APIs available to you. Again, the activities are where all of that business logic runs, and by utilizing Dapr it's completely platform agnostic, while still accessing a number of different services and infrastructure providers from within the code. What's happening in the payment processing is that I reach out and, it's randomized, I'll either get the okay or the declined, a success or a failure message, from the Square APIs, and then it returns and processes the message back into my workflow. Last but not least, the one thing I wanted to call out is that we can also put compensating transactions within the workflow, right? As Kendall mentioned, if you think about that business process workflow we showed before, imagine you have an application running, you go and place your order, and by the time you place that order, the inventory doesn't exist anymore. We're going to have to have a compensating transaction that refunds that person. So essentially what happens in this case is that we issue a refund if that previous task in fact failed. So who wants to see this in action? Okay, love that. All right, I'm going to, note, 25 minutes is very hard to fit a lot of content into, so I hope you're all keeping up.
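The flow Alice just walked through, including the compensating refund, can be condensed into a toy sketch. This is plain Python, not the .NET demo code: the workflow only sequences and branches, each activity holds the business logic, and the inventory numbers and activity behaviors are illustrative assumptions mirroring the demo.

```python
# Toy sketch of the checkout flow (NOT the demo's .NET SDK code).
# The workflow sequences activities; activities hold business logic.

INVENTORY = {"dapr-tshirt": 100}    # assumed starting stock

def check_inventory(order):
    return INVENTORY.get(order["item"], 0) >= order["qty"]

def process_payment(order):
    return True   # the demo instead calls an external Square endpoint

def update_inventory(order):
    INVENTORY[order["item"]] -= order["qty"]

def refund_payment(order):
    return True   # compensating transaction

def checkout_workflow(order):
    if not check_inventory(order):
        return "insufficient inventory"
    if not process_payment(order):
        return "payment failed"
    # Re-check before fulfilling; compensate with a refund if stock
    # ran out while the payment was being processed.
    if not check_inventory(order):
        refund_payment(order)
        return "refunded: insufficient inventory"
    update_inventory(order)
    return "completed"

# Mirrors the demo: 101 shirts fails the inventory check, 10 succeeds.
assert checkout_workflow({"item": "dapr-tshirt", "qty": 101}) == "insufficient inventory"
assert checkout_workflow({"item": "dapr-tshirt", "qty": 10}) == "completed"
assert INVENTORY["dapr-tshirt"] == 90
```

In the real SDK, each of these activity calls goes through the sidecar and lands in the durable history, so any branch can resume after a crash.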
Okay, so I'm going to start by running the application, and I'm using the Dapr CLI here. I'm giving it a bunch of arguments: an application identity, a port, and then the port that the Dapr sidecar, or in this case process, is running on. I'm going to kick this off, and you're going to see a number of logs showing up here. You'll see both the sidecar logs and the application logs, and the things I really want to watch out for are: hey, you see in the app logs that this sidecar streaming connection has been established, that's that gRPC stream I was talking about, and we also see that the workflow engine has started, again, provided for you by Dapr. From here, I'm going to kick off the workflow. I'm just using an HTTP client plugin within VS Code, and what this is going to do is instantiate a new workflow instance with an ID. Each of these workflow IDs has to be unique, and they actually allow you to model business processes, so in this case I could use an order ID for my order. I'm going to kick this off, and you can see I'm making an order for Dapr T-shirts: my name's Alice and I want 101 Dapr T-shirts. Within this app code, it's going to go through every one of those activities sequentially, right? That first one is going to say, hey, I received that order for 101 Dapr T-shirts. No such luck: there are only 100 Dapr T-shirts, so that's it, says the inventory check. From there, the workflow will cancel itself because there is insufficient inventory available, so this is one of those paths in the business workflow we were looking at.
Okay, and then the other thing I can do is order just 10 of these T-shirts, and we're actually going to see it go through to completion here. It's going to pass the inventory check, it found 100 T-shirts, and then it's going to call out to Square to process that payment. And, since we are closing in on time, this one actually failed the payment processing, and it propagated that error back up to the workflow itself. The last thing I want to do, though, is make a request and then actually kill the process. So what I just did there, right, is I ran the application, I kicked off a workflow, and you can see that two of those activities actually got started. But then say I'm running in Kubernetes and my pod died, my application died; let's pretend that happened. Now I'm going to just rerun the application. I'm not going to re-kick off the workflow, and you're going to see that the workflow actually picks up right where it left off. You can see it's rerunning: it found the T-shirts in inventory, and it's going to run to completion. What's actually happening here is that the reminders running within the Dapr workflow engine keep track of every single activity that has run. Since that activity did not run through to completion, the actor was not canceled, and it continues processing, using in part that append-only log, as Kendall mentioned earlier. Okay, so we're out of time. Did y'all enjoy it? Okay, if you want to learn more, come hang out with us. We'll send out these slides, so you'll definitely get access to them. There's a lot of fun stuff to learn about Dapr workflows. Just wanted to call out: please give us feedback. If you can take a quick pic, that'd be great.
I actually remembered to put it in here. If you say we talked too fast, just save it, we already know. No, I'm just kidding. But yeah, join us, continue to get engaged with Dapr, take a picture of this slide, that'd be awesome. And then we did have one more thing: please come see us at the Diagrid booth. We are giving out these beautiful books that Mauricio has written. If you give the password, "freedom from fragmentation," you'll be able to get one of those books for free. So keep that in mind and come talk to us. We just launched our product today. How about clapping for that? That takes a lot of work. So if you want to come talk to us about that, please do. And last, but very much not least: if anything on Dapr workflows was interesting, we have a full session on Dapr workflows on Thursday, November 9th at KubeCon. So come to that one and learn a little more of what we teased here. Thank you.