Thanks, everyone, for coming, and welcome to our presentation about Cadence. I'm Ender Demirkaya. I used to be one of the tech leads of the project and recently switched to management, and Emrah is here, he's also one of the tech leads. Emrah will kick off the presentation and then I will take over after his section.

Thank you very much, Ender. Hi again, this is Emrah, and as Ender said, I am one of the tech leads for the Cadence project. So today we will be talking about Cadence as a workflow orchestration engine, but the first question is: what is a workflow? Everybody in this room knows what a workflow is, I guess, but the problem is that everybody defines a workflow in a different way. I'm not going to say in every company; even within companies, every organization defines workflows differently. So I think it is a good start to tell what a workflow is. In the most generic terms, it is an orchestrated and repeatable pattern of activities. This could apply to all aspects of life, to engineering, to software engineering, and to the problems that we solve every day. In software engineering, we first started with big data pipelines as workflows, where there is a repeatable action that you run over a lot of data, and people realized that a lot of things needed to be reused. So they came up with data pipelines as the most common workflow engines. Then people realized that we need a workflow engine for state machines, and later on for scheduled operations, also known as cron jobs. The workflow term kept evolving, and it started applying to microservice orchestration, batch processing, and event-driven processing. It got more and more complicated, and people started using workflow orchestration for execution dependencies, relationships between operations, APIs and external interactions, signals, timers, and async processing.
And it kept getting more sophisticated and complicated, and it started applying to more and more use cases. Basically, in pretty much any program, wherever you come up with multiple steps in a business operation, you can tell there is a workflow in there. Workflows may include systems, they may include operations, they may involve events or people. Pretty much any program that you are dealing with includes a workflow. And the tooling that we have been using throughout this evolution has also evolved, because as the requirements kept becoming more and more sophisticated, the tools also had to. The initial workflow engines or orchestrators were config-driven: you have an implementation and you have some configuration that is tightly coupled to that implementation, and you get this config-driven orchestration. Then, as people started getting more sophisticated use cases, they wanted to abstract away more complexity, and perhaps make things less dependent on the implementation and more focused on the contract definition. They came up with DSLs, Domain Specific Languages; basically YAML, JSON, and XML files and the like that define what the workflow steps are going to be, their sequence, their dependencies, and all that. But as people started using workflows for more and more complicated use cases, even these DSLs had to keep evolving to support more flexibility and more sophistication. And then we ended up in a state where we need a native programming language, with all the capabilities of a language, to define a complex workflow. So this is the evolution of the requirements and the evolution of the solutions. Now, workflows are not only about functional requirements; they also have a lot of non-functional requirements. In any distributed system, whether it involves workflows or not, there are a lot of requirements that are going to apply. So let's start with the API.
For any operation, for any kind of workflow, you need a way to start it. And you need a way to send more input and more signals to it. Or sometimes your workflow does not do what you want, and you need to stop it or rerun it. Or when things happen, you need to respond to them, right? All these interfaces, which I call meta operations, have to be supported by whatever orchestrator or engine runs your workflows. Then at the bottom, we have some icons for the foundation. Wherever you are running your code as a distributed system, you have to make sure that it is up and running all the time, that it has strong SLAs and contracts, that it scales very well, and that it is fault tolerant. It also needs to be monitorable: you can tell whether the system is healthy or not, and you can mitigate and respond to bad things that are happening in the system. On the monitoring side, if you zoom in, you need a good set of metrics coming out of your system so that you can tell if it is healthy. You also need tools to debug things, like logs, and intervention tools, some admin tooling that will help you mitigate problems as they happen. And finally, you need some sort of alerting capability, either in the system or at least an extension point, that will help you monitor your system and be alerted when something bad happens. Of course, these are all requirements that relate to running a distributed system, but that is not enough. We should also love ourselves a little bit and focus on developer experience as well. We want this whole experience to be developer friendly. We need to be able to focus on the business problem rather than worrying about infrastructure. We need a way to develop things locally and then gradually move them to production, and the experience we have in production should not be drastically different from the experience we have on our local machines.
So this developer experience should also be good. These are all the non-functional requirements that apply to every distributed system, and in the scope of workflows, we want solutions that cover those as well. Now, the industry has been moving towards stateless services, and I think it is a great trend. It gives us a lot of capabilities. When you have stateless services, your systems scale very easily: you just keep adding more processing power and they scale horizontally very nicely. The design gets a lot simpler. You don't have to worry about things like state management, sharding, and all that stuff. You just keep adding machines, and as long as they know what they're supposed to do, as a herd of machines, it is much easier to reason about the overall architecture. Systems are also more fault tolerant, because a single machine, or sometimes even an entire cluster or data center, is not really a single point of failure; everything can replace everything else. And things get less complex: there is usually no state or session management in stateless services. But that does not mean that stateless is awesome and everything is going to be stateless. What it means is that you still have to build stateful applications using stateless services. As a small example: we work at Uber, and we try to build a lot of stateless services, but that doesn't mean there is no state management. For every order, for every ride, you have to keep the state somewhere, right? So all the session management, state transitions, scheduled operations, execution dependencies, transactions, and all that needs to be handled, and there needs to be a nice way to handle it. At the end of the day, somebody has to manage the state. So if we are saying that the service owner should not worry too much about it, then somebody else needs to maintain your state.
These are some of the solutions that people came up with. Some people implement client-based orchestration with tokens: the tokens carry the state information, and the client sends the token back to say, hey, this is where I left off earlier and this is the step I want to take next, can you do it for me? That is how they pass the state back and forth. Or sometimes people run multiple services just to keep track of state. Or you need to build ways and tools to ensure your operations, your workflows, actually recover from failures when they happen and are always guaranteed to reach completion, right? The liveness of a workflow and the eventual completion of a workflow are really important. Another way people implement this is to build fully stateless applications that do things as simple as mathematical operations: you just call one of the machines, it does some computation and returns you a response; that is pure stateless. But in the real world you usually have more sophisticated use cases, where your machines keep some state in a cache. So you either need to rebuild the context after a failure, so that you know where you left off or what you are supposed to do when a new request comes in, or you keep your state in a database: you explicitly write to a database saying, this is my state, this is the last known good state. If there's a crash, you go read from the database to reconstruct your state so that you can serve requests again. So, depending on the needs, there are more and more sophisticated and complicated solutions for building stateful applications on stateless architectures. So we have talked about all of these functional and non-functional requirements, and how stateless makes our lives easier and systems more reliable and scalable.
What developers want, basically, is all the benefits of stateless but with simplified state handling for their application development. They want the development experience to be easy, starting from coding all the way until everything is in production, running at scale, and monitored easily. And they want to abstract away the infrastructure. They don't want to worry too much about things like: the network may not be super reliable; what if my database goes down; what if one of the services I'm calling is not available right now? People want to abstract all of these infrastructure-related problems away. If there is an entire region failure, it should be as easy as hitting a button to fail over to a different region and keep running your stateful application there without losing much due to the fault. They also want built-in observability. This is something that everybody wants, but not a lot of people want to spend their time on it; adding a bunch of metrics all over the place is useful, but you can always miss something, and you may not get it right on the first attempt. So it would be great to get observability out of the box. And finally, as developers, we want service management to be easy end to end. So at this point I will let Ender continue and talk about how we can do that.

Thanks, Emrah. Sounds like a hard task, right? There are a lot of requirements and a lot of things to achieve here. But what Emrah is really pointing out is that all companies are building products, and eventually you really want to focus on your product, not your infra, and have more and more engineers doing that. If 30% of your company is focusing on your infrastructure, that's an overhead for the company. The lower that number is, the better, because more people can focus on the product. So let's go over a concrete example.
I'm hoping you are all familiar with food delivery systems, so let's start with a concrete example and write one. This is a fake food delivery workflow, and as we mentioned, pretty much any program is a workflow. Let's look at what this program is doing. It's handling an order. The order comes in with a cart, a restaurant, and the customer information. At the beginning of the order, we start a notification thread so the user will know what's happening with their order. Then the cart needs to be sent to the restaurant, and the restaurant will accept or reject it. Once we get the acceptance, hopefully they won't reject, they will prepare the food and then we will dispatch the driver. The driver will pick it up and deliver the order. Hopefully within an hour you will be done with your food, and then we will ask: how do you rate your food, right? Normally, writing a program like this should be as easy as this. You should really be able to define your black boxes this simply and write the program this simply. But while I say it is easy, it's actually not that easy. What we do at Cadence is try to identify the building blocks of any program and automate them. If you look closer: at the beginning of the order, we start a separate thread that will notify the customer. Then we have functions, think of them like Lambda functions that we need to scale, that send the cart to the restaurant, or deliver the order, or send a feedback request to the user. Sometimes you need callbacks: a user outside of your program needs to hit a button, do something, and your program should block until it hears from that customer or user. And sometimes you have delays like this, where you need to wait for an hour. In a food delivery system, the whole thing can take two hours, maybe even three. So how are we going to do that?
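The "write it as easy as this" version of the order flow might look like the sketch below. All function names are made up stand-ins for the black boxes in the slide, the sleep is shortened so the example runs quickly, and in a real engine that sleep would be a durable timer rather than a busy thread:

```go
package main

import (
	"fmt"
	"time"
)

// Stand-ins for the workflow's black boxes; in Cadence each would be an activity.
func SendCartToRestaurant(cart []string) bool { return len(cart) > 0 } // accept or reject
func DispatchDriver(orderID string)           {}
func DeliverOrder(orderID string)             {}
func RequestFeedback(orderID string)          {}

// HandleOrder expresses the business logic as plain sequential code.
func HandleOrder(orderID string, cart []string) string {
	if !SendCartToRestaurant(cart) {
		return "rejected"
	}
	DispatchDriver(orderID)
	DeliverOrder(orderID)
	// Stands in for "wait an hour before asking for a rating";
	// a workflow engine would make this a durable timer.
	time.Sleep(10 * time.Millisecond)
	RequestFeedback(orderID)
	return "completed"
}

func main() {
	fmt.Println(HandleOrder("o-42", []string{"noodles", "tea"}))
}
```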
There are two options: either you keep a busy thread, which is going to kill your whole infrastructure, or you orchestrate it. If you orchestrate it manually, then you need to think about all the problems here. It needs to scale. It needs to be reliable; it needs to do what you are telling it to do. It needs to be available: you would be very upset if you placed your order and all of a sudden the system is not available. It needs to be durable: you are starting a workflow that's going to run for two hours; let's say an engineer decides to make a deployment in the middle, what's going to happen? It still needs to continue running your workflow. It needs to be fault tolerant, as Emrah mentioned; it needs to support region failovers for disasters and such. And it should be very easy to develop. You shouldn't need to learn yet another config format or yet another DSL; you should just program in the language that you are familiar with. And again, you should get default metrics. You can already see some metrics here; you can see, say, how many carts were created, for example. These are your BI metrics; we will dive into them in a second. Whenever something goes wrong, it should be debuggable. And for long-term purposes, mostly legal, you need archival support. And one more thing: let's say you're a fancy company doing food delivery, and you decide to streamline even more, and now you are combining multiple orders. Now the whole state machine is going to change. It's a big mess; how are you going to handle that, right? These are big problems. With Cadence, all you do is write your code in the programming language you are familiar with and import the Cadence library. The Cadence library will intercept your code, take all the non-functional requirements, and handle them for you in the background. We will talk about how that magic works.
In Cadence terminology, this is how we see things from our perspective. The whole function is called a workflow. We can start a child workflow from any workflow, and it will behave like a separate thread. The functions, we call activities. The callbacks, we call signals: somebody needs to signal your workflow. And the thread-sleep calls are called timers. The only thing you need to do when you are using Cadence (I use pseudo-code here because it changes from language to language): in Java, it is as easy as annotating your functions as activities and annotating your classes as workflows. In Golang, instead of calling send cart to restaurant directly, you say execute activity, passing send cart to restaurant, and all the parameters follow after it; all the rest is the same. So it will be very close to the code you see here. That's the eventual goal: you shouldn't see non-functional requirements in your code. That also makes it much easier to understand. So how does it work? We said that we wanted to use stateless because it is getting popular and it has its benefits; it's also often misunderstood, but maybe we can fix some of that. What we want is this: you deploy your code to workers, and the Cadence server gives your workers one task at a time. Do only this one thing, and give me the result of it, and I will orchestrate the whole thing for you. Semantically, we have two types of workers: workflow workers and activity workers. The workflow worker does the state transitions; every step here is called a decision, and the workflow worker handles those decisions. The activity worker is more like Lambda functions, or any cloud function you would know: it takes a function with parameters and gives you the results. That's all we need from the Cadence side. And again, you don't need to know about this.
All you need to do is write your code and say: I want to handle my decisions by polling this task list, and I want to handle my functions by polling that task list. Those task lists are completely up to the user; they can create one just by using any string. The Cadence server will hold your workflow state, and task lists are kind of like the queues for your tasks. It will persist the schedules and signals for you, and at the appropriate time it will deliver those tasks to your workers, and your workers will execute them. Let's explain some of the key concepts to understand Cadence a little better. Everything starts with a domain. A domain is mostly about allocating your resources; you define your umbrella capacity there. Workflow types you can think of as services within your domain, or classes within your domain, depending on whether you think in distributed systems or object-oriented design. Then instances are each execution of every workflow type. The workflow ID is given by the user, so it's the business ID. The run ID is generated by us; it's unique in case workflow IDs repeat. To explain a little more: the domain is our isolation unit. It is the tenancy in the cluster. It has a set of workflow types, and you can define your whole capacity, and it will not go over that capacity. Workflows and activities we just saw in the example: workflows are more like services and classes, and activities are more like functions. And the task list is a very easy way to manage your capacity and traffic. If there are some functions you want to put on a highway, executed as soon as possible whenever there is a job to do, you just create a different task list and allocate different resources, and it will be done much faster. Or if you want to put everything into one task list because you care about simplicity, you can do it that way too. So then, what is the user experience like? Say I created my workflow.
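The task-list idea can be modeled as a named queue that workers poll. This in-memory sketch, with all names invented, only illustrates the decoupling between task producers and polling workers; real Cadence persists the tasks server-side and delivers them durably:

```go
package main

import "fmt"

// TaskList is a toy model of a Cadence task list: a named queue of tasks.
type TaskList struct {
	name  string
	tasks chan string
}

func NewTaskList(name string) *TaskList {
	return &TaskList{name: name, tasks: make(chan string, 16)}
}

// Add enqueues a task, as the server would when a decision or activity is due.
func (t *TaskList) Add(task string) { t.tasks <- task }

// Poll hands out one task at a time, like a worker polling the server.
// It returns false when there is currently nothing to do.
func (t *TaskList) Poll() (string, bool) {
	select {
	case task := <-t.tasks:
		return task, true
	default:
		return "", false
	}
}

func main() {
	decisions := NewTaskList("order-decisions")
	decisions.Add("decide: dispatch driver")
	for {
		task, ok := decisions.Poll()
		if !ok {
			break
		}
		fmt.Println("worker executed:", task)
	}
}
```

Splitting traffic is then just a matter of creating a second TaskList with its own name and pointing dedicated workers at it, which is the "highway" idea from the talk.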
And now I want to see what is happening. The first thing is our web UI. This is what you will see when a workflow starts. You placed an order; you can see what is in the cart, the IDs of the items, and when it was placed. And these are your workflow options: if something goes wrong, how am I going to retry? You define it only once in your code; you don't need to do it over and over again. You just say: if things go wrong, do you want to slow down the retries, or do you want to retry as soon as possible? All of those can be configured, and if you don't configure anything, that's also fine; we have defaults based on what people have been using so far. In our UI, you will see the list of executions and the activities that are currently running. You can query or filter workflows based on different criteria. We saw that workflows can have child workflows; you can see the hierarchy and what the dependencies look like. You can even see the current stack trace of your workflow. And we provide you default metrics; you don't need to write any of these metrics, they come out of the box. There are core counters and latency metrics per request, per workflow, per activity, and for the polls and decisions we talked about. It will also show where you are active and whether you are doing a failover. This customer has one domain in staging for their testing and one in prod, and it looks like they are doing some batch processing; it just jumped here. You can see what is happening without building any metrics. You will see which task lists are busy and where you need more resources, or you can see if your business is growing. You can see which activity is taking longer, so that you can optimize your workflow and shrink the time it takes. And you can see if your workers are healthy. Each worker has some cache; after you learn more about Cadence, you will see those things. You can see if you need to adjust your cache or scale out your service.
You can also see if your workers are healthy and whether you have any errors or panics. So, we talked about all this; it looks magical, it looks awesome, but does it work? These are the numbers currently at Uber. In our public Slack workspace, we have over 100 companies; some very major companies like DoorDash, Coinbase, and HashiCorp are all using Cadence. We only know the numbers for Uber: we have 12 billion workflows a month powering over 1,000 services, and in total we get over 100,000 workflow updates a second, which maps to almost 300 billion updates a month. And we still guarantee three nines of availability. If I categorize some of these workloads, we have long-running processes, synchronous interactions, microservice orchestration, batch processing, distributed cron, and singletons in distributed systems. It is also an exciting time for Cadence: after over six years of development, we finally released V1. This is our last slide, and I wanted to mention what that means for us and for the community. Obviously it has been tested a lot, and it became a major product over the years. Now we feel that it is reliable, scalable, and operable; even though usage is growing 100% year over year, we are able to scale with a similar-sized team. So we think it's a good time, and we released our V1. V1 is going to continue, and we also started V2. What that means is that in V1 everything will continue as it used to be; it's going to be backward compatible. We want to make Cadence as intuitive as possible, so we will focus on observability, ease of use, and intuitiveness. You might have heard that cost savings are getting popular in the industry, so we are focusing on that too.
We will introduce some new concepts and different workflow types: some of them will work, for example, in memory only, and some of them will be more reliable; global consistency, for example, is a popular topic coming up too. So we will focus on different concepts to support Cadence in different modes. Another thing people ask is: can you create me a running template? It is easier to edit something running than to write something from scratch, so we will focus on that. Code sharing is another popular request. Every company has a deployment manager, every company has a data pipeline manager, and the list goes on. People say: why am I writing the same thing over and over again? Why don't we have some sort of code-sharing mechanism? That's also something we are working on. Over time, things should only get easier with Cadence, and you could just download a package and start using it as, say, your deployment manager. And what does V2 mean? There are some popular requests that cannot be implemented in a backward-compatible way, and there is real pressure; customers really want those features. So we will start implementing those: we will redesign our API, we will redesign our server core, and we will also provide a guide to migrate from V1 to V2 seamlessly. So while we are shipping V2, we will support the migration as well. And these are the references. Feel free to join our public Slack workspace; that's where we are most active. Among the other channels, we still monitor Stack Overflow, and our documentation is on our web page. Feel free to follow us on LinkedIn and Twitter, and we are also putting some educational videos on our YouTube channel. So this concludes my presentation, and it looks like we still have some time, right? So we can take some questions, if there are any. Oh, I need to give you a mic, sorry.

Hi, what infrastructure does it require? You know, I see a message queue there.
But do you need persistent databases?

Right now we depend on three things, and you can change each of them. You need a storage engine; at Uber we are using Cassandra and something similar to CRDB, but we also support MySQL and Postgres, and we have, I think, DynamoDB and MongoDB solutions that we are testing. You can pick any of those that are suitable for you, or you can write a new plugin yourself; some of our customers did, for example. You also need a visibility engine; again, you can pick Cassandra, MySQL, or Elasticsearch, OpenSearch, or any similar technology. And we use Kafka for replication. I think these are pretty much the only major technologies you would need as dependencies.

All right, thanks. Any other questions?

Thank you. Regarding workers polling a central server for activities to get around blocking on single threads: how does this differ from something like RabbitMQ?

Great question. Basically, I think using things like RabbitMQ or Kafka is the trivial way to implement workflows, right? For example, you may have a workflow that is not written down anywhere, but semantically it consists of three steps. One service may poll a queue, another service may poll another queue, and when actor one is done, it populates a message for actor two in the other queue. Then the second actor polls from that queue, processes it, and generates a message for the third actor to act on what happened, right? This is a very intuitive way of implementing a workflow, but the problem is that, since it is so spread out, it is very difficult to tell where exactly your workflow is. Is your workflow stuck in one of these steps? Is it currently waiting for one of the actors, but the queue is backlogged, and that is why your workflow is slow, and so on.
Building your own workflow orchestration out of a lot of independent actors is what created the need for a service like Cadence. People stepped back and said: look, if this is the pattern we are repeating all over the place, why don't we generalize? Why don't we come up with a solution that gives people an easy way to define their actors and orchestrates the whole thing as part of a single brain, basically? All of your stateless services, your workers, are polling the same central message broker; in this case, it is Cadence. The broker is generating messages for the right queues. So one way to define Cadence is: it is a message broker where the messages are generated by Cadence, not by the user. The user just writes the code for the intents, and then Cadence intercepts your calls, understands what you are trying to do, and creates the messages for you. The task polling and execution part is very similar to a RabbitMQ or Kafka consumer kind of pattern; it is just that Cadence is in charge of generating those tasks and keeping track of them, rather than you having to do it piecemeal.

To follow up, would you say that Cadence would be better for implementing, say, a distributed finite state machine, rather than using a distributed message queue?

Yeah, exactly. I'm not going to say that Cadence is going to replace every message processing system out there. There may be cases where it is just simpler to use a single consumer that polls a massively scalable single queue. But as your use case gets more complex, as two or three more steps get involved in your workflow, I would say all of these are implementable on Cadence. In an ideal world, you would use Kafka or RabbitMQ and the like for simple point-to-point communications rather than these kinds of multi-point, multi-actor workflows, and Cadence should be able to replace them there. I can maybe add one more thing.
When you are using technologies like that, they get into your code. Now the developer needs to know about those technologies, learn what they are doing, and know all the nuances about them. But when you are using Cadence, really, the code, and this is not just for presentation purposes, your code will literally look like this. So you don't care whether it is RabbitMQ or Kafka or something else underneath. Any other questions?

By the way, fantastic presentation, loved the slides. I would like to understand the server part of it; in one of the diagrams you showed the Cadence server, and I imagine you can horizontally scale that part. Is there a minimal requirement if I want to get started playing with Cadence? I would love to know more about that.

I can take it and you can add. The Cadence server itself, it is hard to believe, but it's a single binary. When you start that binary up, you give it some parameters saying what its role in the cluster is. If you are starting really small, you can just start that binary somewhere: there are four roles in a Cadence backend, and you can tell that binary to serve all four roles at once in a single setup. This could be the exact same setup that you have locally on your machine, or it could be your production setup as well, as long as your storage is also accessible. You can begin with that one binary, and then as your needs increase, you can separate those roles into their own fleets. As long as they are all able to talk to each other over the network, they will do their own sharding within themselves to scale horizontally. And at Uber, the billions of workflows we mentioned come out of a handful of Cadence clusters, each of which could start as a single-binary deployment and grow all the way to a massive cluster.
Thank you. And how do I monitor the health of these servers? Does it show up in the same kind of Grafana dashboard you were showing, with those metrics flowing there?

Yeah, we have templates. We recently invested a lot in observability, and we are planning to update them as well. We have Grafana templates, and we recommend setting alerts for certain things. Our documentation also explains: when you see this, you need to tune that, and this is what it means, helping people operate it. And again, our Slack workspace is very active. We have teams in the US and in Denmark too, so wherever you are in the world, you will get support for those questions as well.

Very nice, thank you so much.

A small thing to add: the metrics are emitted both from the Cadence server side and from the client side, and the integration is through a plug-in model. So if you are using a certain technology within your company, or if you have your own custom metric collection system, for example, you can bring a class that implements the interface that we have and bring your own stack for observability. Cadence will use your stack to emit the metrics from both the server side and your client side, and they will all be collected in a central place.

Very nice, thank you so much. So what sort of message delivery guarantees does Cadence provide? Everybody wants exactly-once processing, I guess.

So, exactly-once is a very hard problem, as you already pointed out. You define the guarantees in your retry policies. If you want to fail on the first try, you can configure it that way, or if you want to retry multiple times, you can do that. One of those is more like at-most-once and the other is at-least-once, but exactly-once is a tricky one. That's a very large topic and I'm only touching bits of it; I'm also not sure which part you are most interested in.
There's also the idempotency part of it. We tell our users to write their workflows to be as idempotent as possible, so that when things go wrong and you need to retry, it's not going to add two orders for one customer. You need to track those things as well, but as far as exactly-once goes, it's a hard thing to achieve.

Yes, just like Endur said, it is very easy to support at-least-once or at-most-once, because for all the activities in this code, for example the activity line over there, you can specify retry policies. If you say you don't want to retry, that is effectively at-most-once. If you really want the activity to run at least once, you specify an aggressive retry policy that maybe retries thousands of times. So at-least-once and at-most-once are easily supported with Cadence. Exactly-once, as Endur mentioned, is a hard problem, and it boils down to what capabilities you get from your storage. What we do is use commodity hardware and inexpensive databases to give you eventual consistency, which solves most issues, but there will be cases where something like Google Spanner, as a globally consistent database, will just work better, and it comes at a cost. Everything is a trade-off; there is no perfect solution. Our claim is that if your use case can be handled with eventual consistency, and you can work with at-least-once or at-most-once, then Cadence supports pretty much everything you need.

The example here assumes that you want the workflow to complete. What if you were to stop the work? How does that interact with an actively running workflow?

Let me make sure I understand correctly: you have a workflow running and you want to stop it?

Yes, stop it midway.

We have a CLI and a web UI. In the web UI it's just a button click; in the CLI you can say terminate. You can stop it either way.
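The mapping from retry policies to delivery semantics discussed above can be sketched in a few lines. This is a conceptual Python illustration, not Cadence's retry-policy API: a single attempt behaves like at-most-once, aggressive retries behave like at-least-once, and the flaky activity shows why idempotency matters when retries kick in.

```python
# Hypothetical sketch: retry policy as delivery semantics.
# max_attempts=1  -> at-most-once (fail on first error)
# large max_attempts -> at-least-once (keep trying until success)

def execute_with_retry(activity, max_attempts):
    attempts = 0
    while attempts < max_attempts:
        attempts += 1
        try:
            return attempts, activity()
        except RuntimeError:
            if attempts == max_attempts:
                raise  # retries exhausted: surfaces as a failure

calls = {"n": 0}

def flaky_activity():
    # Fails twice, then succeeds. Because it may run three times,
    # it must be idempotent (no "two orders for one customer").
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

attempts, result = execute_with_retry(flaky_activity, max_attempts=5)
```

Exactly-once does not fall out of a retry policy alone, which is why the discussion above pushes it down to the storage layer's consistency guarantees.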
There is also an even more interesting problem worth mentioning, which we call sagas. Let's say you are doing a multi-database transaction, which is a very hard thing to do. Say you are making payments: you need to make one payment to one bank, then another, then another, and if the last one fails you want to revert everything. You can do that with Cadence. You define how you want to revert things if they fail, and if you exhaust all the retries, you will see it as a failure; at least it will be in your records. You will see exactly where it went wrong, and when things recover you can still fix it. You can even reset and recover from the same workflow execution; you don't need to write a script, for example. That's another use case worth mentioning.

Say I have a chain of A, B, C, D, E, my order is at stage D, and I cancel. Does the cancellation have to start at A and catch up to D, or do I have to intercept at E or something?

It stops immediately, so the workflow will not continue. The current activity, if one is running, might complete.

Okay.

This is one of the discussions that comes up a lot with Cadence users. Sometimes they say, hey, I have this workflow, but in this state I need to stop it. If this is business as usual, I would claim it is part of your workflow. For example, if you receive a call from the customer saying cancel my order, of course you can go brittle and just kill the Cadence workflow and say you killed it. It will not proceed, but it may not be a graceful termination; you may need to clean something up in the database, for example. So if killing or stopping workflows is business as usual, the solution is to create a signal, wait for that cancellation signal, and build it as part of your workflow.
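The saga idea described above, revert everything that already succeeded when a later step fails, can be sketched as a list of action/compensation pairs. This is a minimal Python illustration of the pattern, not Cadence's API; the bank names and helpers are invented.

```python
# Hypothetical sketch of the saga pattern: each step carries a
# compensation, and on failure the completed steps are reverted
# in reverse order.

def run_saga(steps):
    """steps: list of (action, compensation) pairs."""
    completed = []
    log = []
    try:
        for action, compensation in steps:
            log.append(action())
            completed.append(compensation)
    except RuntimeError:
        # Revert everything that succeeded, most recent first.
        for compensation in reversed(completed):
            log.append(compensation())
    return log

def pay_bank(name, fail=False):
    # Builds one saga step: a payment and its refund.
    def action():
        if fail:
            raise RuntimeError(f"{name} payment failed")
        return f"paid {name}"
    def compensation():
        return f"refunded {name}"
    return action, compensation

# Third payment fails, so the first two are refunded in reverse.
log = run_saga([pay_bank("A"), pay_bank("B"), pay_bank("C", fail=True)])
```

In a real workflow engine, the same structure benefits from durable execution: the compensations run even if the process crashes mid-saga.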
Your workflow is not single-threaded. It can wait for multiple things at once, and whichever awaited event happens, it will react to it. So you can listen for a cancellation signal; in most cases you won't receive one, but when a cancellation request comes in from a customer, you send a signal to your workflow saying the signal has arrived. The workflow will then have a return somewhere in there for when the signal is received, similar to the example shown here. It will say, well, the workflow was requested to be cancelled, so maybe after steps A and B it just returns because the signal arrived; the workflow doesn't need to do C and D anymore. Or you can run your cleanup steps after the cancellation signal as well. There is a lot you can do, and Cadence gives you all of it. The CLI tooling we have is for admin usage: you started a workflow but it was a bad idea, maybe it was buggy, whatever, so you just want to kill it. The thing I want to highlight is that if stopping or cancelling workflows is business as usual, then it is part of your workflow, and it is better to code it there, so that everybody knows what is supposed to happen when a workflow needs to stop early.

Okay, yeah, I think I get it. There's like a checkpoint in your workflow. It may be a race condition between a successful workflow and a stop order, whichever comes first, right?

Yeah.

Thanks. I have a question on this.

I was just going to do a time check; I don't know how much time we have. If the room is not taken by another group, we can continue taking questions. Okay, go ahead.

Okay, thank you. You mentioned that when you're writing to multiple databases there's a way to do a rollback. How does that work? Do you hold on to those transactions without committing?

The customer needs to define it.

I see.

Yes, they need to define how they are going to revert.
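The signal-based cancellation described above, check for a cancellation signal between steps and return early after cleanup instead of running the rest, can be sketched as follows. This is a simplified Python illustration of the control flow only; real Cadence workflows use signal channels and selectors, and the step names and signal scheme here are invented.

```python
# Hypothetical sketch of signal-based graceful cancellation.
# signal_per_step simulates which signal (if any) has arrived
# before each step; a real workflow would wait on a signal channel.

def workflow(signal_per_step):
    done = []
    for step, signal in zip(["A", "B", "C", "D"], signal_per_step):
        if signal == "cancel":
            # Graceful early return: run cleanup, skip the rest.
            done.append("cleanup")
            return done
        done.append(step)
    return done

# No signal arrives: all steps run.
full_run = workflow([None, None, None, None])

# Cancellation arrives after steps A and B: C and D are skipped.
cancelled = workflow([None, None, "cancel", None])
```

This is the contrast with an admin-side terminate: the terminate kills the execution outright, while the in-workflow signal lets the code decide what "stop early" means, including any cleanup.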
Oh, got it, got it. Thank you. Any other questions? All right, thanks everyone for coming.