So, welcome, it's wonderful to be here at the Open Source Summit. I'm thrilled to talk to you today about a technology we think is really important for developers: building long-running, fault-tolerant applications. We're going to be talking a lot about Dapr Workflows. We'll introduce ourselves first. My name is Mark Fussell, I'm the CEO of Diagrid and one of the original co-creators of the Dapr open source project, which we'll be talking about quite a bit. The role of Diagrid as a company is to boost developer productivity, particularly by building APIs and tools for developers who build distributed applications, especially in the cloud native space. Hi everyone, can you hear me okay? My name is Kendall Roden and, as Mark said, we're both from Diagrid. I'm one of the product managers there, and I'm super excited to jump into workflows today and show you a couple of great demos. I'm super happy to be here. So let's frame the discussion, because it's always important to think about the problem space. Today, you as developers are building large-scale, or even small-scale, applications that are distributed in nature. This is a fictitious e-commerce application, and when you first inspect it, it seems pretty straightforward: create some processes, you've got an inventory application, you've got an email application, you put them all together. But once you get under the covers, there are many, many challenges developers have to deal with. How do I discover the other services I communicate with? How do I send messages between them all? How do I manage the secrets I need in order to talk to resources like databases?
And one of the hidden challenges is: how do you actually coordinate across those services? How do I call one service, then schedule the next? If there's a failure, how do I backtrack and compensate? Before you know it, most developers building line-of-business applications are writing some sort of workflow, and those tend to get stitched together with queues and cron jobs and all sorts of things. So the reality is that many, many applications have a workflow-like concept in them, whether it's a healthcare application, an HR onboarding application, or something that signs you up for a financial account. A workflow tends to be central to a distributed application, and a distributed application has many problems of its own to solve. Workflow materializes in many, many forms, but in the end it's a sequence of tasks and activities that you put together to achieve a business goal, whether that goal is HR onboarding or a financial process. It literally appears everywhere, from manufacturing systems to financial systems, so it's crucial to most business applications. Now, as a developer, you're thinking: how do I build all of these applications? How do I deal with all of these problems we just talked about? Many times you're told to go off and read some documentation and figure out how it all works yourself. This is where Dapr, the Distributed Application Runtime open source project, comes in. Its goal is to make you, as developers, much more productive in building applications.
And its focus is to stop you having to repeat the same old patterns, and instead codify the best practices for building these applications into a set of libraries and APIs you can build on top of, whether you're running on a set of VMs, on Kubernetes, or on whatever platform you choose. Dapr is a project we started many years ago because we understood the developer challenges that were there. What it does is pretty straightforward: it gives you a set of APIs that you can call over HTTP or gRPC, from any language of your choice. You can be a Go developer, a Node developer, a .NET developer, it really doesn't matter; there are libraries, or SDKs, to help you do that. And the APIs it provides are the ones you need to build distributed applications. For example, say you want to communicate between two applications. Dapr has an API called service-to-service invocation that allows you to talk from one application to another. It lets you do it securely, it does the discoverability for you, it does retries; it does all the heavy lifting on your behalf. Or say you want to create long-running, stateful applications: it has a state management API built on key-value storage, so you could keep something like a gaming session's state behind that API. These APIs let you, the developer, focus on business logic, leaving the difficult implementation of building distributed applications to Dapr, the open source project. The project has been hugely successful. It was launched about three and a half years ago, has grown a large, diverse contributing community, and it's part of the CNCF; in fact, it's the 10th largest project inside the CNCF. What we're focusing on today is that, as part of the 1.10 release this January, we included workflows.
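To make the service-invocation idea concrete, here is a tiny Python sketch of how an application addresses another app through its local sidecar. The port and the URL construction assume the CLI's default local setup; only the `/v1.0/invoke/...` route shape comes from Dapr's HTTP API, everything else here is illustrative.

```python
# Sketch: addressing another Dapr app through the local sidecar.
# Assumes the sidecar listens on localhost:3500 (the local CLI default).

DAPR_HTTP_PORT = 3500

def invoke_url(app_id: str, method: str, port: int = DAPR_HTTP_PORT) -> str:
    """Build the sidecar URL for Dapr service-to-service invocation.

    Dapr resolves `app_id` to a running instance, applies retries and
    mTLS, and forwards the request to that app's `method` route.
    """
    return f"http://localhost:{port}/v1.0/invoke/{app_id}/method/{method}"

# Example: call the `order` method on the `cart` application.
url = invoke_url("cart", "order")
print(url)  # http://localhost:3500/v1.0/invoke/cart/method/order
```

Your code only ever talks to its own sidecar; discovery of where `cart` actually runs is the sidecar's job.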
And workflow is now a key technology because we recognized that developers needed a workflow engine to satisfy many of the business needs of their distributed applications. Now, looking a little under the covers at how Dapr works: I strongly suggest you go and listen to the Dapr 101 talk that happened yesterday and was recorded, which dives into this much deeper. But what Dapr does is run as a sidecar to your application. Every instance of your application runs with a Dapr sidecar next to it, which does all the heavy lifting on your behalf; you simply call an API. So in this case, say you wanted to call the order method on the cart application, wherever it's running. Dapr will do that for you with a simple API call that discovers where that application is, calls it, and does retries, security, everything on your behalf. Straightforward and easy to understand as a developer, and it makes your code concise, consistent, and portable, because it's independent of the platform it runs on. So there's a suite of these APIs; yesterday's talk covered service invocation and pub/sub. Today we're going to focus and dive deep into workflows. It's a new API that we've introduced, and you'll see it evolve over the rest of this year, but it's already very powerful in terms of what you can do today. We're going to show you demos of how you can use it all. And then our goal is that you go away, try this out, get into the Dapr community, and give us feedback on how it can be improved, how it satisfies your business needs, and where to take it. So with that, I'll let Kendall dive in. Yes, okay, thanks, Mark. Let's give it up for Mark.
I know we're at the end of the day, I get it, but thank you all so much for coming. So, who's ready to learn about Dapr Workflows? Okay, awesome. You're some of the first people hearing about this feature, because as mentioned, it's brand new in alpha; this is the first talk we've done on Dapr Workflows. I hope that excites you a little bit as we get into the meat of things today. So, we talked about what a workflow is in general, right? It's a process with a series of activities or tasks that we want to execute in a particular order. In a lot of cases those processes may be manual, but Dapr Workflows brings the same idea to the software-defined level: a sequence of software-defined tasks or activities that execute in a particular order to accomplish a particular business goal. Here you'll see the first quick overview, and I really want to emphasize that Dapr workflows are written with the workflow SDKs we provide. Essentially, you're building a microservice, and within that microservice you're defining a workflow in your application code. So, let's dive a little more into some of the features and functions of Dapr Workflows. The first thing I want to touch on is activities, the basic unit of orchestration in a workflow. You write activities as small bits of business logic within your application code, and this is where all of the computation and external calls happen, orchestrated by the parent workflow. Workflows also have a capability called durable timers, which really shows how durable and resilient workflows are. If you've worked with workflows before, you don't want an arbitrary timeout, right? A workflow could take 30 days, even a year, to complete.
So, a good example of how you could use a durable timer: within a workflow you could say, hey, my product has a 30-day subscription period, and I want this workflow to wake up after 30 days and kick off the activities that end that person's trial. It's pretty impressive: the workflow can offload itself from memory for those whole 30 days, storing its state, and then come back alive after the specified duration. Really, really powerful capability. Child workflows are interesting because you see their value when you're running at scale. The way Dapr workflows work is that they store the state of the workflow's progression in an append-only log, and if you're running a workflow that executes tens of thousands of tasks, you can imagine that replaying that history when a workflow reloads itself takes a lot of execution power. So if you're running at scale and executing a lot of tasks, you might want to spawn a child workflow, which has its own instance ID, maintains its own history, and so on. And last but not least, another really cool capability: very likely, somewhere in a workflow you'll have a step that requires manual intervention, or some payload that has to be received, before the next activity can be triggered. You can use external events to do this. Imagine you're in the middle of an HR provisioning workflow for a new hire's onboarding, and you need a manual step where somebody assigns a particular machine to a user or approves an order for a machine. You can have your workflow wait until an event comes back saying that activity has been completed. The reason I put a star here is that external events aren't in the current alpha implementation, but they're already planned and in progress. Awesome. Is this making sense?
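The durable-timer idea, persist the deadline, unload, and decide on replay by comparing the clock to stored state rather than sleeping in memory, can be sketched as a toy in Python. This is not the Dapr engine; the file layout and function names are invented purely to illustrate the mechanism.

```python
import json
import os
import tempfile
import time

# Toy sketch of a durable timer: the workflow persists its wake-up
# deadline and can be unloaded; each later replay checks the clock
# against the stored deadline instead of holding a sleep in memory.

def run_trial_workflow(state_path: str, trial_seconds: float) -> str:
    now = time.time()
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)              # replay: reload persisted state
    else:
        state = {"wake_at": now + trial_seconds}
        with open(state_path, "w") as f:
            json.dump(state, f)               # first run: persist the deadline
    if now < state["wake_at"]:
        return "sleeping"                     # engine would unload us here
    return "trial-ended"                      # the durable timer has fired

path = os.path.join(tempfile.mkdtemp(), "trial.json")
print(run_trial_workflow(path, trial_seconds=30 * 24 * 3600))  # sleeping
print(run_trial_workflow(path, trial_seconds=30 * 24 * 3600))  # still sleeping
```

Because the deadline lives in durable storage rather than in a process, the "timer" survives restarts of both the app and the engine.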
Are we tracking? Still with me? Okay, cool. So, we're going to jump right into a demo. This is the first demo of the day, and we'll have another, but I want to break down this really simplistic demo that shows you what a workflow looks like from a code perspective. Today I'm using the .NET SDK to author my workflow, since that's what's supported today; Python is in progress and coming this month, and then we're working on Java, JavaScript, and extended language support. If we take a look at this workflow: we have an application, essentially a hello-world microservice, that contains a hello-world workflow. We want to kick off that workflow somehow, right? The way we actually instantiate an instance of a workflow is by using the management APIs. Mark showed a great slide earlier about the simplicity of the Dapr APIs, and this is just a further example of that. You can see v1.0-alpha1 in the URL, because it's an alpha API; we're using the built-in Dapr workflow engine, which is specified in the URL; we're telling it which workflow we want to instantiate; and we're passing in a unique ID. What's cool is that each workflow instance has its own instance ID, and because this is part of a business process, it's really easy to use business entities to make those instance IDs meaningful. Think about an order service: you might want to use the order ID as the instance ID to represent that order. It's super easy; you can generate that in your code. In this case, we're going to kick off the workflow with a simple input and a simple output, returning a single object as a result. We pass in a name, and the workflow passes it to a particular activity; in this case, the workflow only has one activity, so it's super simple.
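As a sketch of what that management call looks like, here is a small Python helper that builds the start-workflow route. The `v1.0-alpha1` path shape follows the alpha workflow HTTP API described here, but since the API is alpha it may change between releases; the workflow and instance names are made up for the example.

```python
DAPR_PORT = 3500  # default local sidecar HTTP port

def start_workflow_url(workflow: str, instance_id: str,
                       component: str = "dapr", port: int = DAPR_PORT) -> str:
    """Build the alpha management-API route for starting a workflow.

    `component` names the workflow engine ("dapr" is the built-in one).
    The v1.0-alpha1 prefix marks this as an alpha API whose exact shape
    may change as workflows graduate.
    """
    return (f"http://localhost:{port}/v1.0-alpha1/workflows/"
            f"{component}/{workflow}/{instance_id}/start")

# Using a business identifier (an order ID) as the workflow instance ID:
print(start_workflow_url("CheckoutWorkflow", "order-1001"))
```

POSTing to that URL with the workflow's input as the body is what kicks off a new instance; the instance ID in the path is the same ID you later use to query status.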
And then we're going to return a string as the output value, which is just the name plus a particular greeting that we generate. So, okay, let's dive into the code. Can you see this screen? Okay, let me zoom in a little bit. How's that looking? Okay, a little bigger. So, what I'm going to show you now is a couple of important things you'll have to do to get your code up and running with this workflow authoring SDK. The first thing is that we have to include the Dapr client and the Dapr workflow library. These essentially say: include the authoring SDK capabilities, and include a Dapr client that I can use to call the Dapr APIs and invoke that sidecar Mark talked about. If we look at Program.cs, and once again this is all in .NET, we can see that we're adding the Dapr workflow service, and within that we have to register the workflow itself and any activities that will be kicked off by this program. In this case, we're registering the hello-world workflow, and we're registering a single activity, the create-greeting activity. If we take a look at the workflow itself, it's really basic. It inherits from the workflow base class, with the string input and string output specified. It receives the workflow context, which is used to do things like creating durable timers, kicking off child workflows, and scheduling activities, and then it takes an input, which is the name we post. And then we're awaiting the call-activity method, which sends the name into the create-greeting activity and returns us a string, and then we finish the workflow. Sounds pretty simple, yeah? You want to see it? Okay. I'm going to show you. All right.
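The demo itself is .NET, but the register-then-run shape is language-neutral. Here is a toy Python sketch of the same pattern; the decorators, the `Context` class, and all names are invented for illustration and are not any SDK's actual API.

```python
# Toy sketch of the authoring pattern: register a workflow and its
# activities, then let a (greatly simplified) runtime drive them.

activities = {}
workflows = {}

def activity(fn):
    """Register a small unit of business logic by name."""
    activities[fn.__name__] = fn
    return fn

def workflow(fn):
    """Register an orchestration function by name."""
    workflows[fn.__name__] = fn
    return fn

class Context:
    """Stand-in for the workflow context the real SDK passes in."""
    def call_activity(self, name, payload):
        # The real engine schedules this via the sidecar and records
        # the result in the workflow's history before returning it.
        return activities[name](payload)

@activity
def create_greeting(name):
    return f"Hi {name}"

@workflow
def hello_world_workflow(ctx, name):
    # The workflow only orchestrates; the activity does the work.
    return ctx.call_activity("create_greeting", name)

print(workflows["hello_world_workflow"](Context(), "Kendall"))  # Hi Kendall
```

The key division of labor is the same as in the .NET demo: the workflow function only sequences activities through the context, while activities hold the actual computation.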
So, to run the Dapr application itself, my hello-world app along with its Dapr sidecar, we can do that in a single command. I'm going to do a dapr run, and I'm going to assign an app ID, which is essentially a unique ID that Dapr uses to identify your application workload; in this case, it'll be hello-world. In addition, I'm going to pass in the app port. This basically says: hey Dapr, my application is running on port 5000; and my application knows that the Dapr sidecar is running on port 3500. Then I pass in the command to actually kick off my app, which is dotnet run. So, let's kick this off. I'm going to make this bigger so everyone can see what's going on. The important thing to note is that the blue text here is my application logs, and the text that says info is actually coming from the Dapr sidecar; both are visible to me because I ran them in a single command. The main things I want to call out are that the app has connected to the sidecar, and the sidecar has started the workflow engine. So, now we're going to invoke this hello-world application. And how are we going to do that? We have to instantiate an instance of the workflow, so we're using the workflow management API that I showed on the previous slide, and we're passing in the input Kendall, which is my name. Now I'm going to send the request. Sound good? Y'all ready to see what happens? What greeting am I going to get back? We don't know. So, I send the request, I see I got a 202 Accepted, and it returned the instance ID of the workflow. Then I'm going to use a secondary command, which you see down here, to query the status of that workflow; I want to retrieve, basically poll, the status, which is what I'm doing here.
So, I send in that instance ID, and I can see here that my workflow runtime status is completed, which means the workflow ran successfully. You can see that my input is Kendall and the output is Hi Kendall. Unfortunately, I got the most boring greeting there was. But essentially, I could run this multiple times. I could reuse the instance ID and overwrite the historical data there, I could spin off a new workflow with a new GUID, I could even run these sequentially and wait for each output to return. And while this demo uses a sequential flow, I could also use a multitude of workflow patterns: fan-out/fan-in, sequential, monitor patterns. There are a lot of different patterns you can code that are all supported by Dapr workflows. Okay, cool. So, that was fun, but not that exciting, so why don't we get a little more advanced? Before we do, I want to dive into some of what you just saw and demystify some of the logs that probably looked like a black box to you. So, let's dive into that. We talked a little bit about the Dapr sidecar, and I want to call out here that the Dapr workflow engine runs inside that sidecar. The main thing to note is that the engine manages and schedules your workflows and your activities; it does not execute them. All of the execution takes place within the workflow you've created in your application code. Another thing to consider is that the Dapr engine also stores and maintains state. I've talked about this append-only log, and I'll show you an example of what it looks like, but essentially, imagine a workflow gets scheduled and activities are getting scheduled; the workflow engine needs to know, hey, have these completed successfully?
And the engine also needs to be able to replay the state if and when the workflow ever needs to rerun, or if one of the services goes down; I'll show you what that looks like as well. Another thing to consider if you're running at scale: we talked about capabilities like child workflows, and there's also something called continue-as-new, where you can have a workflow restart itself with a new instance ID and a fresh history. Those kinds of things matter for execution at scale, and if you are executing at scale, running across multiple virtual machines, the engine will actually load-balance the tasks and activities across those machines. So, let's focus on the workflow part. We've talked about the Dapr workflow authoring SDK, but what does the workflow actually do? It's a definition, right? It's telling the workflow engine: these are the activities I want to run, and this is the order I want to run them in. What's important, too, is that the workflow itself doesn't execute any computation; it's not making any external service calls to other microservices. It delegates all of that to the activities. That's really important to keep in mind. It also has to behave deterministically: the same input must always result in the same set of actions being executed. Here's why that's important. Imagine I kick off a long-running workflow, and then I make a modification to the code and push it. That can be very challenging, because it might invalidate the historical log whenever an existing workflow instance replays itself. So you really want to version your workflows, v1, v2, when you make changes to activities or anything else that would change the execution history on replay. What's really neat here is the line you'll see going between these components.
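The replay-from-history mechanism, and why determinism matters, can be shown in a few lines of Python. This is a toy model, not the engine's actual code: on re-execution the workflow runs from the top, but any activity already in the append-only log is answered from the log instead of being re-executed, which only works if the code asks for the same activities in the same order.

```python
# Toy model of replay from an append-only history log.

def replay_call(history, cursor, name, execute):
    """Return an activity result, replaying from history when possible."""
    if cursor < len(history):                  # already in the log: replay
        recorded_name, result = history[cursor]
        # If the code now asks for a different activity at this step,
        # the history no longer matches: the workflow was non-deterministic
        # (or was changed without versioning).
        assert recorded_name == name, "non-deterministic workflow code"
        return result, cursor + 1
    result = execute()                         # first time: run and record
    history.append((name, result))
    return result, cursor + 1

history = []

# First execution actually runs the activity and appends to the log...
r1, _ = replay_call(history, 0, "check_inventory", lambda: {"ok": True})

# ...a later replay of the same workflow reads it back without re-running.
def must_not_run():
    raise RuntimeError("activity re-executed during replay!")

r2, _ = replay_call(history, 0, "check_inventory", must_not_run)
assert r1 == r2 == {"ok": True}
print("replayed from history:", r2)
```

This is also why pushing changed workflow code against live instances is risky: the replay assertion above is exactly what breaks when the recorded history stops matching the code.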
So, when you start the runtime of your application, or create a runtime instance, it will actually initiate a gRPC stream to the Dapr sidecar. The Dapr sidecar isn't calling into your app, or hitting your app on a particular endpoint; instead, it uses this gRPC stream that's initiated from your application by the authoring SDK. Let's talk a little about what goes on over that gRPC stream to actually keep the workflow running. We've got the gRPC stream initiated from your application code, and over it the app first receives a series of commands, command-execution steps, from the engine. The engine says things like: schedule this workflow, schedule this activity on behalf of this particular workflow instance. And in order for the engine to track the state of completion, your application code reports back the results. So the engine says what to do, and the application code executes and returns the results. Does that make sense? I'm going to wait just a second because people are taking some pictures. Okay, awesome. Do you want to see a more advanced example? Okay, that's what we're going to do. We're going to put together all the concepts we just talked about. We have our Dapr workflow engine in our Dapr sidecar, we know there's going to be that gRPC stream initiated, and I'll show you this in the logs as well. And this time, I want you to think about it from a retail perspective. Imagine this is a single microservice called the checkout microservice, and within that microservice we have a particular workflow that handles receiving, think of it as a basket, or a checkout, right?
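The command/result exchange over that stream can be modeled with two in-memory queues. Real Dapr uses a gRPC stream initiated by the authoring SDK; the queues, field names, and the inventory check below are all stand-ins chosen just to show the direction of traffic.

```python
from queue import Queue

# Toy model of the work-item stream: the app opens the channel, the
# engine pushes commands down it, and the app pushes results back.

engine_to_app, app_to_engine = Queue(), Queue()

# Engine side: schedule an activity on behalf of a workflow instance.
engine_to_app.put({"cmd": "run_activity", "instance": "order-1001",
                   "activity": "check_inventory", "input": 20})

# App side: execute the activity locally and report the outcome so the
# engine can record it in the instance's history.
cmd = engine_to_app.get()
reply = {"instance": cmd["instance"], "activity": cmd["activity"],
         "result": cmd["input"] <= 100}   # e.g. 100 tickets in stock
app_to_engine.put(reply)

print(app_to_engine.get())
```

The important point the model preserves: scheduling and bookkeeping live on the engine side, while the actual execution happens in the application process.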
So, let's imagine a user has already gone through the process of curating what they want to order, and we're receiving that order payload and executing the checkout process. Within that, we no longer have one activity like the hello-world demo; we have five. Not all of these activities will execute every time, but they are deterministic, in that a particular input will always follow the same path, and that's where compensating transactions come in. We'll talk a little bit about when a particular activity fires and when it doesn't. We're going to kick this workflow off the same way we did the previous one, except we're now changing the name from the hello-world workflow to the checkout workflow, and we'll obviously pass in a new unique instance ID for the workflow itself. The input I'm passing in is going to be more like an order payload: I take in a name, an email, and a particular order item. In this case, I just made it a conference ticket, so imagine you wanted to buy 50 tickets to KubeCon or 20 tickets to the OSS Summit. Make sense? Business context all there? Okay, cool. I'm going to switch over, and let's say a quick prayer to the demo gods and hope everything works out, because this is a little more complex than the previous demo. Okay. Let me make sure this code is all good, and I'm going to stop the earlier app just so we don't have multiple things happening. Okay. Right now, we're dropped into a retail application which has two services. In reality this would be many services, we might have a shipping service, a fulfillment service, the list goes on, but in this case we're going to focus specifically on checkout and payment. In this checkout service, I've done everything I showed on the previous slide: I've registered my workflows, I've registered my activities, and now I'm authoring a workflow. And is this zoomed in enough?
Can you see the content? Okay, cool. So, the big thing you'll notice here is that I'm no longer taking in a string and sending out a string; I've created entities, classes, that I expect to receive and output. In this case, we take in a customer order, which is the payload in the POST, and we return a checkout result. The checkout result is very simple: a processed flag that's either true or false, it was either processed or not. You can see here that I'm using the instance ID I received from the management API to set an order ID, which is really nice because, once again, it creates business context: this particular workflow is tied to a particular order ID. First, though, can we just make some noise for Mark, who's still standing here? You're welcome to sit down, Mark, if you want. The first thing we're going to do is use an activity that just notifies the client: hey, we received an order, and we're kicking off the process. Now, you'll see here that I actually set some custom status. This is a feature of Dapr workflows, on the context object, so that when I'm polling the status I get a clearer picture of where this particular workflow is in the process. For example, I might want to set a custom status if I'm waiting 30 days for a trial to complete, because every time I poll it, the runtime status is just going to say running; without a custom status I might not actually know which state the workflow is in. I'm not going to go through this entire thing end to end, but let's talk through the business context. I'm calling my first activity, which checks inventory: I go out and hit a state store using the Dapr state API and ask, is there enough inventory to fulfill the order that was requested?
It returns an inventory result, which is basically a boolean: is there inventory to fulfill this order or not? What's interesting is that if there's not enough inventory available, this shows you one of the paths the workflow could take. Because there's not enough inventory, there's no reason to execute any more activities; instead, I set the custom status and then return a result. When I first started working with this, I kind of expected that the workflow would be terminated, right? Like it didn't finish. But it did finish, because I returned a checkout result; I just returned a result the customer might not be as happy with, because they really wanted that order. If the inventory check does pass, then I create a payment request. Think of this as me saying: what's the total amount I need to charge the customer for this particular order? And I call the payment activity, process payment. Now, what's interesting about this, and I'll show you this activity as an example of what goes on inside one, is that it makes a service-to-service call to another application called the payment service. We talked about Dapr's APIs, right? I can use service invocation, I can use pub/sub, I can use state. This is a good example of orchestrating across multiple services: I don't want to create a big monolithic application that runs and executes every single piece of business logic my workflow needs. If we take a look at my process-payment activity, it receives a payment request, which is essentially: here's the order ID, here's how much to charge, here's the name of the user. And it returns a payment response, which tells us whether the payment was successful or unsuccessful.
So, if we take a look here, the thing I really want to highlight is this call, a post-as-JSON on the invocation client, which uses the Dapr client to call the service with app ID payment at its stripe-payment method. I'm using service-to-service invocation, and as mentioned, we went through this yesterday in a session, so if you want to learn more about service invocation specifically, check that out. But what I get with this is automatic retry policies, I can set a custom circuit-breaker pattern, and I get mTLS to the other service; all of that's handled on behalf of my application by Dapr. If the other service returns a success code, I report that the payment was successful, and if not, I return a 500. Now, how does the workflow know whether this failed? Well, I want to wrap this in a try-catch, because if an error happens in a particular activity, I want to surface it up to the workflow's orchestration so it can mitigate, right? Okay, cool. The last thing I want to call out here is the decrement of inventory. How am I doing on time? Okay, I'm good on time. So, let's say the payment processes and is successful. I'm using the Stripe API, by the way; they have a really cool test API you can use to emulate payments, so if you ever need to mock a payment service, it's great, and I just wanted to call that out. But let's say I go through: the payment processed, the inventory was all there. Well, unfortunately, they released the KubeCon tickets and everybody rushed to buy them, right? This just happened to me with ACL, or think Taylor Swift tickets: maybe the ticket was there before you finished checkout.
Then the workflow goes back and tries to decrement the inventory and finds, oh, your ticket actually got sold and purchased by somebody else, right? It's eventually consistent: the inventory might not be there by the time you actually finish executing the workflow. Sorry, I'm on the wrong part here. Where is it? Oh, here we go. If I go to update the inventory and that stock's no longer there, well, I've already taken payment, right? So, I have to do a compensating transaction. In my workflow, I can actually do that. In my update-inventory activity, which we can take a quick look at, I get the state from the state store that Dapr provides to me, I didn't have to put any Redis code in here, but I'm calling a Redis state store, and I can see, hey, there's actually no more quantity available to fulfill this order. So, unfortunately, once again very sad for the user, but at least we want to refund their payment; we don't want to keep their money. We do a compensating transaction by calling another activity called refund payment, which takes the compensating action, refunds the payment, and essentially tells the user: I'm sorry, your payment was refunded, and I'm cancelling the inventory reservation for the order that you made. Ultimately, we've returned a checkout result every time, but in those cases it never processed. In the happy case, when I get to the very end and I've updated the inventory and everything's good to go, I return it as a success. So, I'm going to walk through four little examples of different ways through this workflow and what the results look like. Does that sound good? Okay, awesome. Give me just one second here, and we'll kick off the application.
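The compensation path just described, charge first, then refund if the decrement fails, can be condensed into a toy Python sketch. The real demo does this with .NET activities and a Redis state store; every name and value here is invented for illustration, and the "race" is simulated by letting the initial inventory check see more stock than actually remains at decrement time.

```python
# Toy sketch of a checkout with a compensating transaction.

def checkout(order_qty, read_stock, charge, refund, decrement):
    if read_stock() < order_qty:
        return {"processed": False, "reason": "insufficient stock"}
    charge(order_qty)                      # payment is taken first
    if not decrement(order_qty):           # stock vanished meanwhile
        refund(order_qty)                  # compensating transaction
        return {"processed": False, "reason": "payment refunded"}
    return {"processed": True}

ledger = []
initial_reads = iter([25])                 # the inventory check sees 25...
stock_level = [10]                         # ...but only 10 remain later

def read_stock(): return next(initial_reads)
def charge(q): ledger.append(("charge", q))
def refund(q): ledger.append(("refund", q))
def decrement(q):
    if stock_level[0] < q:
        return False
    stock_level[0] -= q
    return True

print(checkout(20, read_stock, charge, refund, decrement))
print(ledger)  # the charge is followed by its compensating refund
```

Note that, as in the demo, the workflow still completes and returns a checkout result on this path; it is the business outcome that reports the order as not processed.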
We kick it off the exact same way we kicked off the previous application. One thing to keep in mind: I already have the payment service running. So, just imagine that in the background this microservice was already started with Dapr, it has a Dapr sidecar, and it's running. That's out of scope for this session, but I just wanted to let you know it is making that service-to-service call to the payment service. Another thing I'll show you really quickly is the inventory database. Let me make it bigger. Here you can see a local Redis instance that Dapr creates for me, because I used the Dapr CLI. Essentially, when you use Dapr locally, you get some built-in components you can use for state and pub/sub. So, when I'm checking inventory, it is literally calling the state API and hitting this local Redis instance to say, hey, how many KubeCon tickets are available? There's 100. If it needs to decrement that, it will. And if there's not enough inventory, this is what's driving that. So, let's kick off the checkout service. It's the last piece in this puzzle, so I'm going to do a dapr run. I'm giving it the app ID of the checkout service, telling my Dapr sidecar that the app is running on port 5000, and telling my application that Dapr's HTTP endpoint is on port 3500. Okay. Everything's good, right? We're halfway there. Okay, so does it make a little bit more sense now when you see the app log that the workflow stream to the sidecar has been established? That's that gRPC stream. And a lot of these Dapr sidecar logs make more sense now, because the Dapr workflow engine has started, and we know that's really what's driving the scheduling and execution of the activities within our workflow. So, I'm going to kick off my workflow now. And how am I going to do that?
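A `dapr run` invocation along these lines would start the checkout service with its sidecar. The app ID, port values, and app entrypoint shown here are illustrative, matching the values mentioned in the talk:

```shell
# Start the checkout app with a Dapr sidecar.
# --app-id names the service for service invocation,
# --app-port tells the sidecar where the app listens,
# --dapr-http-port tells the app where to reach the Dapr API.
dapr run \
  --app-id checkout \
  --app-port 5000 \
  --dapr-http-port 3500 \
  -- python app.py
```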
I'm going to POST to the checkout workflow with an instance ID and a payload. In this case, think of the fail-checkout flag as just an easy way for my demo to pass in either a valid or an invalid credit card to charge. But the main thing is: my name is Kendall, this is my email, and I'm ordering 20 KubeCon tickets. Now, hopefully, if I run this and everything's successful, it'll go all the way through, and we'll take a look at what the status looks like as it progresses. So, y'all ready to kick it off? Can I get a whoo? Okay. Thank you. Okay, so I'm kicking it off. We get the instance ID back. Now what I want us to do is start polling. We see that it's running, and you can see it's checking product inventory. The custom status will continue to get updated as we make progress through the workflow. So, now it's processing payment. That's great. I actually put some sleeps in here just so I could simulate that and you could see the statuses. So, it should process for about 10 seconds, and... the checkout's completed. Okay, awesome. So, throughout that entire workflow run, I knew exactly which step was being executed, and I knew when the checkout was completed. And I can see my output, right? So, yeah, it's pretty nice. Now, I'm not going to go into the details here, but I do want to show you the state that's being stored by the Dapr workflow engine in order to replay this. It's not pretty, and we're not going to dive into it, but I'll show you that it's there. So, let's take a look. You can see here that we have the hello-world workflow and the checkout one. And you can see we're getting activities, we're getting workflows being kicked off, and the historical state is being saved, right? So, if for whatever reason, and I'll show you an example of this, my workflow got killed while it was processing payment, God forbid...
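The start-then-poll interaction can be sketched like this. The status strings mirror the demo's custom statuses, but the functions standing in for the HTTP calls are hypothetical, simulated locally rather than hitting a real sidecar:

```python
import itertools

# Simulated workflow engine: each poll returns the next custom status,
# standing in for GET requests against the workflow's status endpoint.
_statuses = itertools.chain(
    ["Checking product inventory", "Processing payment"],
    itertools.repeat("Checkout completed"),
)

def start_workflow(payload):
    # Stand-in for POSTing to the checkout workflow; returns an instance ID.
    return "order-" + payload["name"].lower()

def get_status(instance_id):
    return next(_statuses)

instance_id = start_workflow({"name": "Kendall", "quantity": 20})
status = get_status(instance_id)
seen = [status]
while status != "Checkout completed":
    status = get_status(instance_id)
    seen.append(status)
print(seen)  # every custom status observed while polling
```

The point is the shape of the interaction: kick off once, then poll the same instance ID and watch the custom status advance until the workflow reports completion.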
We will be able to essentially restart it, and I'm not going to have to re-execute everything, right? It's going to pick up where it was: it replays all the activities that already happened and continues through to complete the process. Okay, awesome. So, now let's take a different path through the workflow. Instead of a successful payment, we're going to make the payment fail. What we should see is that it gets to processing payment, and then the status of the workflow will still be completed, but with, you know, "payment failed, your order wasn't processed." So, let's take a look at what that looks like. Let's get the status here. Ooh, I've done that literally 800 times. So, we're going to send the request. It's processing payment. Let's see how far we get before we hit a failure. Payment failed, right? So, my custom status is "payment failed," and we can see that the workflow itself still completed, but at least we know why it failed: the payment couldn't successfully be executed. The card was declined. And you'll see here we have a lot of logs coming out of both my application and the Dapr sidecar. So, it's basically saying, hey, I received this order for 20 KubeCon tickets, I found 80 of them in inventory, great, let's move forward. And then, essentially, oh no, an error occurred: my card was actually declined. So, I get that error back and tell the user that payment processing failed. Now, something I personally think is really cool, and it leads into the next demo I want to call out: do you see all of these reminders? It says "reminder" in a lot of places, and if I do a Ctrl-F, you might be able to see it highlighted. Essentially, if you think about my workflow engine, it's calling out to my application saying, hey, run this workflow, run this activity.
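The replay-from-history behavior mentioned above can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not Dapr's actual engine code: each completed activity's result is persisted, so when the workflow function runs again, activities that already have a stored result are not re-executed.

```python
# Illustrative sketch of replay-from-history (hypothetical names):
# completed activity results are persisted, so on restart the workflow
# re-runs deterministically, reading prior results from history instead
# of re-executing those activities.

history = {}      # persisted activity results, keyed by step name
executions = []   # tracks which activities actually ran

def call_activity(name, fn):
    if name in history:
        return history[name]   # replay: reuse the stored result
    result = fn()              # first time through: actually execute
    executions.append(name)
    history[name] = result
    return result

def checkout_workflow():
    stock = call_activity("check_inventory", lambda: 80)
    paid = call_activity("process_payment", lambda: True)
    return {"stock": stock, "paid": paid}

checkout_workflow()            # first run executes both activities
first = list(executions)
checkout_workflow()            # "restart": replayed entirely from history
print(first, executions)       # executions did not grow on replay
```

This is why the workflow code has to be deterministic: on replay it must make the same `call_activity` calls in the same order so the stored results line up.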
Well, what happens if that activity goes down, right? How does the workflow engine know it was unsuccessful? Because this reminder is running. If the reminder is not canceled by the engine, it means the engine never received the response it expected. So, this is how you get that durability: essentially, hey, this reminder still exists, so when the workflow engine comes back up, or when your app comes back up, that reminder is going to fire, letting it know it needs to complete the workflow. That's actually what's happening there. Okay. So, let's take a look at what happens if I just burn the world down and kill the app and the sidecar in the middle of the workflow. Everything's been good so far, so if this one doesn't work out, we'll just take it as an L and move forward. So, give me just one second here. I'm going to clear the screen so you can see what's going on. Actually, no, I'm not going to clear it, because I'm going to kill this. Okay. So, first, I'm going to kick off a new workflow, and I'm going to change the payload just to keep things interesting. We'll change the name to Alice. I don't know an Alice, but it seems like a good name. So, I'm going to kick this off and go back to the workflow. Does that sound good? Okay. So, let's send the request, and then I'm just going to Ctrl-C this, okay? Okay. So, we can see it did find 80 KubeCon tickets in inventory, and then I killed the Dapr engine and I killed my app. So, obviously, this workflow cannot functionally be running, right? It'd be impossible. If I try to go get the state, it's obviously going to return an error, because, once again, I've done that so many times, if I try to retrieve the status, the app isn't running. There's nothing for me to POST to or GET from. Okay. Do I want to kick it off again and see if we can hit the status and get a completed workflow?
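The reminder mechanism described above can be sketched like this. It's an illustrative Python sketch with hypothetical names: a reminder is scheduled before an activity is dispatched and canceled only when a response comes back, so a crash leaves the reminder in place to fire on restart.

```python
# Illustrative sketch of reminder-based durability (hypothetical names):
# a reminder is scheduled before dispatching an activity and canceled
# only when a response arrives. If the process dies first, the reminder
# survives (it is persisted) and fires on restart to resume the work.

pending_reminders = set()   # stands in for durably persisted reminders
completed = []

def dispatch_activity(name, crash=False):
    pending_reminders.add(name)      # schedule reminder before the call
    if crash:
        return                       # process died: no response, no cancel
    completed.append(name)
    pending_reminders.discard(name)  # response received: cancel reminder

def on_restart():
    # Any reminder still pending means its activity never responded,
    # so the engine fires it and the activity is dispatched again.
    for name in sorted(pending_reminders):
        dispatch_activity(name)

dispatch_activity("check_inventory")              # completes normally
dispatch_activity("process_payment", crash=True)  # app killed mid-call
on_restart()                                      # reminder fires; resumes
print(completed, pending_reminders)
```

The key property is that the reminder lives in durable storage, not in the process that crashed, which is what lets the engine notice unfinished work after a restart.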
That would be kind of cool, I think. So, let's do that. Let's rerun this. Okay. I'm not going to touch anything. What's happening right now is that the replayed state and that reminder are essentially telling the workflow to continue running. Did you see that? I didn't re-trigger anything, and now the state is being replayed from the store, and all of a sudden I can go here and hit that status endpoint. And once again, like I said, I didn't re-trigger it, right? I didn't say, hey, rerun this. It went down. And boom, right? Payment failed, which we expected, but the workflow completed, and that's really what's important. So, the last thing I want to show you: we've talked about workflows, and it's like, why Dapr, right? Why Dapr Workflows? There are other technologies out there that do this. There are, and the answer is looking at Dapr as a bigger picture. I hit a service endpoint. I used state. I didn't codify anything about Redis in my application. Very streamlined code. And in addition, I get distributed tracing. I didn't configure anything, right? I didn't deploy Zipkin or any kind of external tracing engine myself; Dapr has OpenTelemetry instrumentation built in as part of this workflow. So I can actually move over to Zipkin, and I'll do that here. And once again, I didn't set anything up; nothing extra was done to make this work. So let's refresh this. I'll click here, and all of a sudden, oh wow, we can actually see all three of the apps we're running. And when I zoom in... it's actually getting smaller... there we go. You can actually see a couple, I don't know if you saw them, but you see lines moving where it's making a request. That's replaying the requests from the past 15 minutes. So I see my services here. And what's really cool is I can actually look at the traces.
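For reference, pointing Dapr's tracing at a Zipkin endpoint is done through a Dapr Configuration resource along these lines; the local `dapr init` setup provides an equivalent default, which is why nothing had to be configured by hand in the demo:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: tracing
spec:
  tracing:
    samplingRate: "1"   # sample every trace
    zipkin:
      endpointAddress: "http://localhost:9411/api/v2/spans"
```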
So let's run a query. And we can see the different workflow instances and the activities going through the process. When it completes successfully, I go through six different activities, so I have six spans, and we can see a couple of calls where I obviously didn't make it all the way through the workflow. So I can take a look here, click show, and without doing anything, I can see a complete trace of my entire workflow, end to end. I can see when it started; I can see if there's latency in a particular activity. And like I said, what's really nice is that it's all done for me. So Dapr provides a lot out of the box that makes building these applications a lot simpler than if you were to try to do it without any help, without this runtime, essentially. So I think we have just enough time to close it out, right? 4:05 plus 40 minutes. Yeah, I think we have a minute. Yeah, go ahead. Yes. Yeah, let's address this question. So, the way Dapr Workflows is written is really developer-focused; it's code-first. I mean, workflow has a long, long history. It's a 40-year-old technology in many, many ways, and you can trace it back to the 90s and earlier. There have been many, many workflow engines. But a lot of the more recent workflow engines have been very developer-focused, a developer-first paradigm for how you write code, and that's really the approach we're taking with Dapr. Eventually, will there be a declarative format on top of this? For certain, yes. There just isn't one today. A lot of workflow engines started from a declarative-only format, and that was okay if you wanted to create visual tools, but it wasn't that developer-friendly in some ways, because it wasn't a native programming language.
Whereas, you know, what we see now is that you really want to satisfy both worlds: the graphical, business-oriented, easy-to-talk-about side, and the developer-oriented side. So the answer to your question is, yes, there will be a declarative format; we will certainly get there. In fact, there's a standards body inside the CNCF that's driving exactly that, and we're working very closely with them. But where we've started today is with a developer-focused approach, and that makes it really good for developers to think about the logic and how they write their code. Yeah, I mean, you could have that kind of declarative model put on top of this. There's nothing wrong with taking an existing BPEL-style model and mapping it down to this code. There are many declarative formats around; it's just not there today. Could it be there? Totally. But we chose to start with the developer-focused approach. And if you look at a lot of the other more recent workflow engines out there, they follow a similar model, where they have a developer focus and not just a purely declarative one. Do you mind if I jump in? So I think your thinking is right, but there are different personas. We wouldn't go to a business administrator and say, hey, use Dapr to write your workflows, right? We're talking about people who are already building distributed applications, who are using pub/sub and state management, and who need to coordinate communication across these things. So we're very much focused on large-scale distributed application developers who need to coordinate logic across a series of services. But 100%, there are other great tools out there that serve citizen developers, and we may eventually get there.
But I think our audience is a little bit different from a target perspective. And just building on that, the premise of the Dapr project all up is for developers to build applications. What happens today is you get very isolated viewpoints: you get people who just build a workflow engine, people who just build state management, people who just build a messaging service. The Dapr project is actually very encompassing: it covers messaging and event-driven architectures, it covers state management, it covers secrets management, and many other things. When you combine those together, you actually end up with a very complete platform for building these applications, and I think that's one of the powerful things about it. Whereas sometimes, when you just use another workflow engine, you've still got to go off and figure out how to combine it with, say, Kafka for messaging, and bring that all together. So I'm looking at it from that perspective as well. And Dapr itself is also designed to be very inclusive of existing code. It's very incremental in its adoption: you can use just a little bit of it. It's designed for building into brownfield applications and expanding from there; it's not designed for throwing everything away and starting again. So that inclusive nature, where developers come with existing code and build on top of it, is very much in its paradigm as well, and we think of it in that scope, too. Yeah, so I want to be conscientious of the next session, so we'll wrap up now. We're happy to stick around and answer any questions. Thank you all so much for coming. Definitely check out the Dapr project; we'd love to see engagement. We have a really active Discord channel with over 3,000 members.
But also, if you're interested in running Dapr at scale in Kubernetes, definitely reach out to us at Diagrid; we'd love to have a conversation. So thank you so much. If there are any questions people want to ask... Yeah, I think we're out. Yeah, we're five minutes over. Yes, but we'll stick around, 100%. Yeah, we have over 100 components for the different building blocks. So yeah, this is probably one of the areas of context you don't get when the session is a bit more focused. But essentially, think about state and pub/sub: all of these have different backing brokers and stores, and you can swap them in and out using a component manifest. So think about it: you're using this one API, you're calling just the state endpoint, and you could hit Redis, you could hit Azure Storage, you could hit... MySQL? Yeah, MySQL, literally. There's... I don't even know, 20? No, there's over 30 state stores; 36, actually. No code changes required. Actually, just to be clear, there's a certain subset right now that are transactional state stores. Oh, yeah. Yeah, so for workflows you want one that's transactional, or one that can be used as an actor state store. But if you're just using the state API, you can use pretty much anything. Yeah. Yeah.
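For reference, swapping a state store is a matter of editing the component manifest. A Redis state store component that can also back workflows (via the actor state store flag mentioned above) looks roughly like this:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.redis
  version: v1
  metadata:
  - name: redisHost
    value: localhost:6379
  - name: redisPassword
    value: ""
  - name: actorStateStore   # required for workflows/actors
    value: "true"
```

Pointing the same app at a different backend means changing `spec.type` and the connection metadata; the application code, which only calls the state API, stays the same.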