Hi everyone. Welcome to our poster presentation today on durable functions. This is our answer to serverless needs. Today we'll talk about what serverless computing is, the limitations of serverless's initial offerings, and how durable functions give users more freedom to do what they require. We'll also go over a few scenarios where durable functions make something much easier to do, or just plain possible.

So why serverless? Let's get one thing clear: serverless doesn't mean there aren't any servers. It just means the user no longer has to worry about the nitty-gritty of setting up and managing VMs or provisioning services. By abstracting all of that infrastructure away, you can focus primarily on your code and really dig deep on a microservice architecture. With a serverless architecture, you focus purely on the individual functions in your application code.

Also, while traditional server or cloud applications might carry bulkier upfront costs, the benefit of serverless is that you pay as you use your service, because you're only charged when your function is called. With a traditional serverless function, the cost is calculated from the invocations of that function plus the time required to run the function to completion.

Lastly, serverless helps you scale. Because the cloud provider manages all of the services behind the scenes, it can scale you up or down depending on the traffic coming into your function. Not only that, it's easy to deploy: you can write simplified back-end code without worrying about dealing with other frameworks, and developers can add and modify code on a piecemeal basis. It's a little bit faster and a little easier to deal with. So ultimately, why serverless? Because it lets you deal purely with your code by breaking it up into individual functions that can be invoked and scaled as such.
So what's a serverless function? Serverless functions are cloud-based solutions for getting serverless out there. Different providers use different names (for example, AWS has Lambda and Azure has Functions), but they all try to solve the problem of serverless architecture. The individual functions that get broken out from your application are stateless and short-lived: once they're triggered, the code runs and doesn't maintain any state before or after the event. They're asynchronous, meaning each function runs purely based off its trigger and is unaware of what happens in other functions. They're flexible, because you can use your own dependencies, you have a choice of languages, and you can use different deployment and development tools to get to production. And, as stated before, they can be broken up into piecemeal functions that people can work on independently of one another.

Lastly, one of the things Microsoft offers in Azure is event triggers with bindings. Most serverless functions deal with event triggers, such as timers, HTTP requests, or storage events, but Microsoft also offers something called bindings, which lets you bypass having to use the SDK so that you can trigger functions based off of when something is uploaded to a storage database, a GraphQL database, or a Postgres database.

A couple of different reasons people use functions: one is data processing, something that extracts, transforms, and loads information elsewhere. For example, if you wanted to take all of Stack Overflow's questions and filter them into different tags, you could have a function that runs every morning at 8 a.m., a timer function that takes in information from a data source. That would be the data-collecting function.
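As a rough illustration of the "transform" step such a timer-triggered function might perform, here's a minimal, framework-free Python sketch. The function name, input shape, and sample data are all hypothetical; this is not a real Stack Overflow payload or part of any Azure SDK.

```python
from collections import defaultdict

def bucket_questions_by_tag(questions):
    """The 'transform' step: group a batch of questions by tag.
    The input shape (dicts with 'title' and 'tags') is a made-up
    example, not a real Stack Overflow API payload."""
    buckets = defaultdict(list)
    for question in questions:
        for tag in question["tags"]:
            buckets[tag].append(question["title"])
    return dict(buckets)

# Hypothetical sample standing in for the batch pulled at 8 a.m.
sample = [
    {"title": "How do I reverse a list?", "tags": ["python"]},
    {"title": "What is a closure?", "tags": ["python", "javascript"]},
]
print(bucket_questions_by_tag(sample))
```

In a real deployment this logic would sit inside the timer-triggered function, with input and output bindings handling the data source and destination.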
Then you can do some data processing on that once it's uploaded to storage, via a storage-binding-based function, and then have it uploaded to another database, via another HTTP function or a database-bound function. You can use functions for automation, and also for simple backend APIs.

So, downsides of serverless: what's the catch? Functions seem great, but there are a few limitations on what they can do. They have a max timeout of 10 minutes; this defaults to five for Azure Functions, but you can raise it to 10, and a hard cap like this exists across serverless functions regardless of cloud provider. They have a max request size of five megabytes, so anything larger than that needs to be broken up. And they're stateless, so if you want to build more complex architecture around serverless, you'd have to store state somewhere else and then pull it back out with another function. With these downsides, some of the tasks you want to do can be really difficult, complex, or even impossible.

So Microsoft has this thing called durable functions, which just released Python support recently. They have no limit on execution time, requests can be broken apart, and through stateful orchestrator functions you can maintain and manage state. An orchestrator function describes a workflow that orchestrates other functions. An activity function is called by the orchestrator function, performs work, and optionally returns a value. A client function is a regular Azure function that starts an orchestrator function; it can be triggered like any other regular Azure function.

I just wanted to go into a couple of examples. The first one is called function chaining. If you scroll in or zoom in to the code here, you can see that there is an orchestrator function.
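The chaining orchestrator on the poster can be sketched in plain Python. In the real Durable Functions Python model, the context comes from `azure.durable_functions` and activities are scheduled by the runtime; here a toy context and driver stand in for that runtime, and the activities F1/F2/F3 are hypothetical placeholders.

```python
class ToyContext:
    """Stand-in for azure.durable_functions' orchestration context;
    only call_activity is modeled, and it runs activities eagerly."""
    def __init__(self, activities):
        self.activities = activities

    def call_activity(self, name, payload):
        # The real call_activity returns a Task the runtime resolves;
        # here we just run the activity and hand back its result.
        return self.activities[name](payload)

def run(orchestrator, context):
    """Minimal driver: step the generator, feeding each activity's
    result back in as the value of the corresponding yield."""
    gen = orchestrator(context)
    result = None
    try:
        while True:
            result = gen.send(result)
    except StopIteration as stop:
        return stop.value

# Hypothetical activities chained in order: F1 -> F2 -> F3.
activities = {
    "F1": lambda _: 1,
    "F2": lambda x: x + 10,
    "F3": lambda x: x * 2,
}

def orchestrator(context):
    x = yield context.call_activity("F1", None)    # x = 1
    y = yield context.call_activity("F2", x)       # y = 11
    return (yield context.call_activity("F3", y))  # returns 22

print(run(orchestrator, ToyContext(activities)))  # prints 22
```

The generator-based shape is the key idea: each `yield` hands control back to the runtime, which (in the real service) checkpoints progress and resumes the orchestrator by replay.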
And this function runs like any other function, but it's a little bit magical. It takes function one, calls it, and yields on it. Once that's completed, it calls function two with the output from function one. Lastly, function three is invoked after function two completes, with the output from function two, to get the final result. Function chaining in general is a pattern where you execute a sequence of functions in a particular order; the output of each function may be needed as the input of the next. It's much simpler to do with durable functions, because with regular functions, as you recall, the lack of state management would mean storing each result in a database or some sort of blob storage and pulling it back out again at every step, which can introduce a lot of sources of error. And because this is all done in one orchestrator function, it's more robust, there's better error handling, and it doesn't rely on triggers and bindings. If there is a point of failure, you can have retry logic to help you get through it without having to restart every single thing from the beginning again.

Secondly, we have a pattern called fan out/fan in: executing multiple functions concurrently and then performing some aggregation on the results. For example, if you wanted to back up all of an app's site content to Azure storage using regular functions, you would need one function that handles everything, or multiple functions that call one another as increments of the site are uploaded. This has scalability issues: if you use one function to do all of it, the throughput is limited by the single VM it runs on. Also, if it fails midway, or if the process takes longer than the function time limit, the entire job would need to be restarted, or it would simply never finish.
Using durable functions is a much more robust approach, and in our case it requires writing two regular functions: one that enumerates the files and adds the file names to a queue, and another that reads from that queue and uploads each file to blob storage. In this scenario, an orchestrator function is called where F1 gets the work batch, then all of the parallel F2 tasks run to write everything out, and lastly F3 finishes the entire process by aggregating the results. This is more scalable and more reliable, and everything can work concurrently within the F2 functions.

Lastly, we have the human interaction pattern. A lot of automated processes today involve some sort of human interaction. There may be any number of serverless functions or other automated steps that need to happen, but at a certain point a human might be required. The specific interaction in our example is getting manager approval for an expense report: if the manager doesn't approve within 72 hours, an escalation process kicks in; otherwise, the approval is simply processed. Ordinary serverless functions are stateless, so these types of interactions involve using a database or storage to maintain state, and the interaction must be broken up into multiple functions coordinated with a timeout to get everything working. The complexity is reduced greatly when you can use durable functions: the orchestrator function can manage the state of the interaction easily, without involving any external data stores. And because orchestrator functions are durable, these interactive flows are also highly reliable.

Lastly, because these orchestrations run for long periods of time, people might think they cost more. For this instance, we're assuming a human interaction is required within 72 hours.
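The F1/F2/F3 fan-out/fan-in flow just described can be sketched the same way. Again, the toy context and driver below stand in for the `azure.durable_functions` runtime, and the activities are made-up placeholders for "enumerate files", "upload one file", and "aggregate".

```python
class ToyContext:
    """Stand-in for azure.durable_functions' orchestration context;
    activities run eagerly instead of being scheduled by the runtime."""
    def __init__(self, activities):
        self.activities = activities

    def call_activity(self, name, payload):
        return self.activities[name](payload)

    def task_all(self, tasks):
        # The real task_all waits on scheduled tasks; these already ran.
        return list(tasks)

def run(orchestrator, context):
    """Minimal driver standing in for the Durable Functions runtime."""
    gen = orchestrator(context)
    result = None
    try:
        while True:
            result = gen.send(result)
    except StopIteration as stop:
        return stop.value

# Hypothetical activities for the site-backup scenario in the talk.
activities = {
    "F1": lambda _: ["a.txt", "b.txt", "c.txt"],  # enumerate the files
    "F2": lambda name: len(name),                 # "upload" one file, report bytes
    "F3": lambda sizes: sum(sizes),               # aggregate the totals
}

def orchestrator(context):
    files = yield context.call_activity("F1", None)
    # Fan out: start one F2 task per file...
    uploads = [context.call_activity("F2", f) for f in files]
    # ...then wait for all of them before fanning back in with F3.
    sizes = yield context.task_all(uploads)
    return (yield context.call_activity("F3", sizes))

print(run(orchestrator, ToyContext(activities)))  # prints 15
```

The `task_all` call is what makes the fan-in explicit: the orchestrator resumes only after every parallel F2 task has completed.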
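The 72-hour approval flow can be sketched as well. In the real Python SDK the orchestrator would race an external approval event against a durable timer; the toy context below only simulates that race, with whichever events were "delivered" deciding the winner. All the names, the deadline string, and the delivery model are illustrative.

```python
class ToyContext:
    """Stand-in for azure.durable_functions' orchestration context.
    Tasks are modeled as plain tuples and activities run eagerly."""
    def __init__(self, activities, delivered_events):
        self.activities = activities
        self.delivered = delivered_events  # events that arrived in time

    def call_activity(self, name, payload):
        return self.activities[name](payload)

    def wait_for_external_event(self, name):
        return ("event", name)

    def create_timer(self, deadline):
        return ("timer", deadline)

    def task_any(self, tasks):
        # The first task to complete wins. Here, an event task "wins"
        # only if its event was delivered; otherwise the timer fires.
        for task in tasks:
            if task[0] == "event" and task[1] in self.delivered:
                return task
        return next(task for task in tasks if task[0] == "timer")

def run(orchestrator, context):
    """Minimal driver standing in for the Durable Functions runtime."""
    gen = orchestrator(context)
    result = None
    try:
        while True:
            result = gen.send(result)
    except StopIteration as stop:
        return stop.value

# Hypothetical activities for the expense-report scenario.
activities = {
    "ProcessApproval": lambda _: "approved",
    "Escalate": lambda _: "escalated",
}

def orchestrator(context):
    approval = context.wait_for_external_event("ApprovalEvent")
    timeout = context.create_timer("now + 72h")  # deadline is illustrative
    winner = yield context.task_any([approval, timeout])
    if winner == approval:
        return (yield context.call_activity("ProcessApproval", None))
    return (yield context.call_activity("Escalate", None))

print(run(orchestrator, ToyContext(activities, {"ApprovalEvent"})))  # approved
print(run(orchestrator, ToyContext(activities, set())))              # escalated
```

The branch on `winner` is the whole pattern: approve if the manager responds in time, escalate if the timer wins, all without any external data store holding the intermediate state.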
With a regular function, you might use a timeout, or some sort of timer-triggered function. In this case, the orchestrator runs, and, with some magic on our side, you aren't charged for the 72 hours it spends waiting; you're only charged for the exact instances where functions are invoked and for the duration of those invocations. So the pricing of durable functions is basically the same as what you would see with a regular serverless function.

That kind of concludes my presentation on durable functions. I'm not saying durable functions are the answer to all of your serverless needs, but they do answer a couple of questions where limitations are hit: when you need something to run for longer than 10 minutes, or when you need to take in batches bigger than five megabytes. In those situations and scenarios, durable functions might be the answer for you.

Thanks for your time. If you have any questions, please ask me or Anthony in this chat, or reach out to us on Twitter or GitHub. Microsoft also has a Discord server for Python and Azure offerings; we're very, very communicative there and try to do more for our community. That's it, thanks for your time. Does anyone have any questions? Also, if no one has any questions, there is a breakout room. I don't know if I can link to that or not, but if you scroll down to posters, More Durable Azure Functions is one of the breakouts, and if you have any questions, feel free to ask them there.