Hey, everyone. Welcome back to .NET Conf. I'm very excited to be here today. My name is Jeff Hollan, and I'm going to be continuing on from the last talk, if you just saw it, where I talked about Visual Studio and some Azure services. We're going to be going deeper into a specific service, which is Azure Functions. We're going to talk about a new framework that Azure Functions provides called Azure Durable Functions. Now, these are functions that allow you to do things that are stateful and long running, even though you're running inside of a serverless environment. So to get started, a little bit about myself. I am on the Azure Functions engineering team. I'm one of the senior program managers on that team. I was one of the original members of the Azure serverless team, and that's a phrase that you've been hearing a little bit about already today, that I'll spend some time explaining and going deeper into as well. So I've been working with serverless technology for quite some time, and I'll even tell you some of my experiences, and some of the reasons that I love serverless. Now, I personally knew that .NET was my favorite language and my true love around 2013, when I decided I wanted to write a mobile app, and so I worked with Windows Phone and Visual Studio at the time, and it was a breath of fresh air. So you can quote me on this, but don't tell others, because I do write functions in JavaScript and Python and Java as well: .NET, that's where it's at, and that's why we're all here. So that's what we're going to be focusing on today. If you're interested in learning more about some of the projects that I've been working on with Azure Functions, you're more than welcome to follow me on Twitter, or I've got a blog where I post some of the things that we're working on.
Now to get started, let's just spend a few minutes understanding what serverless means, because for Azure Durable Functions and these stateful orchestrations that you can use for any .NET process, it's important to understand what the environment looks like. So what is serverless? The first part is that there's an abstraction of servers, right? It's kind of in the name, and it's a terrible name. I'm very aware of the fact that there are servers behind the scenes that are powering serverless. It is my team that is often woken up in the middle of the night to make sure that those servers are running. So yes, it's a gimmicky term, but there is an abstraction of servers, so that you as a developer aren't having to burden yourself with keeping things up to date, making sure that you have the latest security fixes, making sure that things are available, making sure that things are secured. All of that is managed automatically for you. So those are abstracted away, allowing you as a developer to just write the code that you care about. Now for serverless, another interesting aspect of it is that your VM, your computer, your machine isn't running 24/7. In fact, with serverless, what actually happens is we spin up resources for you on demand. So if you think about a really simple function, and this is one we'll actually show in a little bit, I might want to do some processing whenever somebody places an order. Like I need to charge their credit card, maybe I'm gonna need to send them a receipt. I don't need to have that function running all of the time. In fact, what I actually probably want is to not run this function until somebody places the order. Now that means that there needs to be some event. There needs to be some signal to the cloud to let us know that you want your code to run.
So one of the pillars of serverless is that it is event-driven, and we use those events to spin up your compute for you automatically, run your code, and then spin it back down. Now this is awesome because it comes with an incredible cost savings, because one of the cool parts about serverless is you're only paying when your code is actually running. One of the projects that I worked on recently was actually some Azure Functions that run whenever somebody comes to my house. I've got an IoT doorbell. I only get like two or three visitors a day, like somebody will come drop off some snacks. That's an invitation to anyone who knows where I live. Mail, whatever. But I'm not having to pay for some VM or some server that's running all of the time just for those three visits a day. In fact, what happens is my function just wakes up when someone comes to my house, it does its job, and then goes back to sleep. So those are some of the pillars of serverless, some of the characteristics that will help us understand the power of durable functions, as well as some of the reasons that it's important. So it's a little bit apparent from the definition, but why is this exciting for a developer? With serverless, you're not managing infrastructure. You're not having to deal with patches and updates and scaling out and scaling in. All of that infrastructure is managed automatically for you, and there are no wasted resources. You're only paying for what you're using. So there's a reason, if you haven't already been following serverless, that there's so much excitement around this space, because these are all benefits that hopefully we're all getting excited about. In many ways serverless takes care of all the pieces that aren't really the fun parts of building an app and just lets us focus on building the fun parts.
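To make that event-driven model concrete, here's a minimal sketch of what an order-processing Azure Function could look like in C#. This isn't code from the talk: the "orders" queue name, the `Order` type, and the `ChargeCardAsync` helper are hypothetical placeholders, shown only to illustrate a function that sleeps (and costs nothing) until an event arrives.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrder
{
    // Hypothetical payload type for an order event.
    public class Order { public string Id { get; set; } }

    // Stand-in for a real payment call; here just completes immediately.
    static Task ChargeCardAsync(Order o) => Task.CompletedTask;

    // This function does nothing until a message lands on the "orders" queue.
    // The platform spins up compute, runs it, and spins it back down.
    [FunctionName("ProcessOrder")]
    public static async Task Run(
        [QueueTrigger("orders")] Order order,
        ILogger log)
    {
        log.LogInformation($"Charging card for order {order.Id}");
        await ChargeCardAsync(order);
    }
}
```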
So at the center of serverless, there's a few different flavors and ways that serverless applications can run, but really at the center is what is called FaaS, or functions as a service. These are little bite-sized pieces of code that spin up in the cloud and spin back down. Now functions have a few characteristics that can sometimes be burdensome. So the first is that usually a function only does one job. You want it to be reusable, just like other best practices in programming. So if I need to do 10 tasks, I very likely will need 10 separate individual functions. In addition, function executions should be very short-lived. Part of breaking them up into single responsibilities is that I don't want one function that charges the credit card, then sends the email receipt, then updates the database, and has 20 tasks that it has to do. Each of those tasks should be very short-lived and separate. The other part with functions is it is just running code. There's no actual state involved. These are completely stateless. So in some ways, when you have a problem that requires state, that can be tricky as well. And then I've already gone into a little bit about how these are event-driven and scalable. So the reason I want to highlight some of these characteristics is because oftentimes we are faced with problems where we need multiple responsibilities to happen. We can't necessarily have it be short-lived. There's a lot of data that has to be processed. There's a lot of work that has to be done, and I need some state hanging around. So specifically, durable functions, which is a new framework released from Azure last year, helps solve a lot of these problems. Now, just one more thing about why I love serverless. I talked about how some of my first apps were on Windows Phone. This is actually my first cloud app that I'm sharing here.
In some ways, it was interesting because this was both written for Windows Phone and it was written as an app specifically to interface with the Microsoft Band. So unfortunately, I don't have a lot of users now because there hasn't been a new version of the Band for a while, but that's okay. I still got the great experience of writing this. And if I think back on that experience in 2015 when I was looking to write my first app, you can see here in the picture what it was: a way for your Microsoft Band to give you score updates, because I'm into sports, and at the time the NBA playoffs were happening and I wanted to know the score of the current game. So I wrote this app, which would sync the scores to that watch. Now this app took me about a month to build, and even though at the time I didn't have any users, I was still spending $80 a month to host the backend services for my application, because I needed to have enough compute, enough virtual machines, so that if all of a sudden I did get a lot of traffic, my application could run. So even with zero users, I was still spending a bit of my MSDN credits at the time to host this thing. And the screenshot I'm actually showing here was from an article written in Windows Central, which you might be familiar with. And when this article got posted, I went from a couple dozen downloads to thousands of downloads overnight, because a lot of people became aware of this app and they had a Microsoft Band and a Windows Phone. Well, now I had to figure out what I was gonna do about it, because my app wasn't really equipped to deal with those spikes in traffic. So that was really painful for me as a developer. Like I'd spent this time writing this app, and now I was trying to figure out, well, what do I do with all these users who are coming in and need their sports scores synced to their Band? And then the only other item is that initially I only had sports scores for basketball and football.
And when I say football, I mean American football in this case. And one of the first requests I had was, hey, we want soccer scores. Well, I had to go in and kind of peel apart my monolith of an application and try to figure out how it works. Now I only bring this up because if I was to build this application again today in .NET with serverless, I wouldn't have been paying any money per month when I had no users. When I got those spikes in traffic, my application would automatically scale to be able to handle that extra peak, that extra spike in traffic, without me having to get woken up or to even know that it was happening necessarily. And then finally, if I needed to add new elements, if I needed to add new features, hopefully it's as easy as just popping some new functions into my subscription; I don't have to peel apart a monolith anymore. So this is one of the reasons I love serverless: the different ways and the benefits that you can get in building applications, and then, moving forward into durable functions, some of those powerful patterns that you can get. So let's bring it back now and introduce durable functions. We've spent some time, and hopefully we have a good feel for what serverless means and the meat behind the hype. So the question is, what if my task isn't short-lived, simple and stateless? What if it doesn't fit neatly into one of these things that we say are characteristics of serverless? And that's where durable functions comes in. So here's a few patterns that are considered difficult to solve in a serverless world, or at least are until you bring in durable; spoiler alert, we're gonna be able to solve all of these. So, I have some sequencing that needs to happen. This is the store example that I've kind of been alluding to. My first function might be charging the customer's credit card. The second function might be sending them an email letting them know that the charge was successful.
The third function might be creating a shipping notification, whatever it might be. Well, now I need to chain all of those together. And if I'm not working with the framework, it becomes kind of difficult to do that. I have to go and have queues or some storage, and I've got to make sure that everything is connected together. Pretty soon I have all these individual functions scattered around. I don't really know what's listening to which queue and how they fit sequentially. So this can be painful in a serverless world, especially when I care about things like error handling. What do I do if step three fails? What does that mean about steps one and two? Here's another interesting problem, which is fanning out and fanning in. So if you think about an example where you have a bunch of data that needs to be processed, it might be more efficient for me to parallelize all of that work, to go and break it into individual pieces and process them all at the same time. We're gonna go through an example later in the presentation about doing this with order history. Like I wanna know all of the orders that happened in the last seven days and go do some work for each individual order. Well, fanning out in serverless isn't terrible. I could have something like a queue and just create a thousand queue items and say, hey, I need each of these items processed. What sometimes gets trickier is how do I know when all of those are done? How am I managing and maintaining that, so I know, hey, all 1,000 tasks have been completed, now you can send the summary email, now you can aggregate the results and make some conclusions? That becomes really tricky. It's very hard to pull parallelized compute back into a single thing when you're distributed across all of these different nodes. External events are another one, right? What if I'm waiting for a human interaction? What if I'm waiting for some event to let me know something can continue on?
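As a sketch of how the fan-out/fan-in problem looks once durable functions is in the picture: the orchestrator below schedules one activity per order, then uses `Task.WhenAll` to resume only when every activity has reported back. This follows the v1 API shown in this era of durable functions, but the "GetOrderHistory" and "ProcessOrder" activity names are hypothetical placeholders, not code from the talk.

```csharp
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class FanOutFanIn
{
    [FunctionName("ProcessRecentOrders")]
    public static async Task<int> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Get the last seven days of orders from a (hypothetical) activity.
        string[] orders =
            await context.CallActivityAsync<string[]>("GetOrderHistory", 7);

        // Fan out: schedule one activity per order; these all run in parallel,
        // potentially on many different nodes.
        var tasks = orders.Select(o =>
            context.CallActivityAsync<int>("ProcessOrder", o));

        // Fan in: the orchestrator only resumes once every activity is done,
        // so aggregating the results is just ordinary code.
        int[] results = await Task.WhenAll(tasks);
        return results.Sum();
    }
}
```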
Long running processes: there's a pattern that we often talk about called the watcher pattern. One of the ones I've used this for personally, and I've written about it in my blog, is I wanted to watch the price of a specific currency, in this case cryptocurrency. So I was like, hey, let me know if this hits above or below a certain threshold within the last 24 hours. Well, I can't have a function running for 24 hours, because it's gotta be short-lived. So what do I do when I need something that's constantly watching, constantly monitoring something that might go over the course of a day or over a week? Long running HTTP requests, this is an interesting one too. So I might have a function that runs for five minutes or 10 minutes, or a process that runs over the course of an hour. There's some customers I've worked with who have to go process some document, and it might take an hour to complete. Well, let's pretend you have a webpage, and that webpage is going to kick off this job to go process. I can't keep that single web request open for an hour, wait for the response to come back, and then send the response back to my website. The internet just doesn't work that way, right? Usually you need to break things up: you need to send a request, immediately get back a response, and then check back every now and then. Usually you'll see something like "check back in a few moments" or "we're processing your order" or whatever it might be. We can't just keep an HTTP connection open for that long. So in a serverless world, what do I do if I have a task that has to happen for that long? How do I know when it's completed? How do I keep track of that state? Then the final one, which I kind of mentioned a little bit with the external events: what if I have some human steps in the middle? What if, you know, the last step before my purchase process is completed, I need somebody to actually package up the product and put it in the mailbox?
That means that my application needs to be smart enough to know when that human goes and takes their step, and then continue on with the rest of the flow. How do you correlate those events in a serverless world, or even just in a distributed computing world? Like, these problems are especially magnified because of what serverless is, in that you don't have any long-running compute, but really these are difficult problems to solve regardless. These are problems we've been working to solve and make easier in different ways for a long time, and they become more difficult once we start talking about the cloud and distributed computing. So now let's talk about durable functions. I've teased it so much, and you already know all the problems it's solving, so let's define it a little bit. Durable functions is a free, open-source framework for Azure Functions that allows you to write long-running orchestrations as a single function and maintain state for all of the calls that need to happen. So you can write a single orchestration that might last the duration of an entire week or a month, that might call multiple functions, that might be very long running, that might have to wait for human interaction; I can write all of that as a durable function and it just works and still runs in a serverless way. This is awesome for simplifying these complex transactions and coordinations so that you can very easily map and understand the whole end-to-end system that's happening. Now, durable functions is completely written in code. In fact, you'll see in a second, we're actually gonna write this in .NET code. So I'm not dealing with JSON schemas or workflow definitions or some of the other ways that orchestration is often solved. I'm doing this all right within .NET.
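For instance, the human-in-the-middle pattern from a moment ago can be written as a single durable orchestration. The sketch below follows the documented human-interaction pattern (`WaitForExternalEvent` plus a durable timer); the event name, the 72-hour window, and all the activity names are hypothetical, not from the talk.

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class HumanInteraction
{
    [FunctionName("ShipOrderOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Ask a human to package up the product (hypothetical activity).
        await context.CallActivityAsync("NotifyPacker", null);

        using (var cts = new CancellationTokenSource())
        {
            // Wait durably, with no compute running and nothing billed,
            // for a human to raise the event; escalate after 72 hours.
            Task packed = context.WaitForExternalEvent("OrderPacked");
            Task timeout = context.CreateTimer(
                context.CurrentUtcDateTime.AddHours(72), cts.Token);

            if (await Task.WhenAny(packed, timeout) == packed)
            {
                cts.Cancel(); // clean up the still-pending durable timer
                await context.CallActivityAsync("SendShippedEmail", null);
            }
            else
            {
                await context.CallActivityAsync("EscalateOrder", null);
            }
        }
    }
}
```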
In fact, you'll see in just a little bit you can do things like a try-catch block, you can do things like an await; you can use these concepts that you're likely familiar with, but now you automatically get this power and this long-running behavior that's taken care of by the framework. And durable functions is generally available. We announced general availability in May of this year at the Microsoft Build conference, and it does support multiple languages. So C# is the one that I'll be showing today. We have some community members who've done some work to write durable orchestrations in F#, and we have JavaScript, which is actually in preview today, so if you're interested in writing durable orchestrations in JavaScript, that's an option as well. So I'm gonna talk about some of the components of a durable function, and then we'll show what our first durable function looks like. So oftentimes we talk about three different components that make up an overall durable function. Now the first piece is the starter function. The starter function is whatever's going to trigger off this long-running process. Very often this starter function is something like an HTTP endpoint. You want to be able to start off some report, you want to be able to go place that order when somebody makes a web request, or when somebody drops an item in a queue, or when some event happens; you just have a function that's going to start off this orchestration. Now once that starter function gets triggered and it knows it needs to start its orchestration, it's gonna go ahead and send a message through the framework to the orchestrator. Now the orchestrator is where all the state is managed, where all of the calls to different components are taken care of. The orchestrator is really the heart of a durable function. That's where you're gonna put your orchestration logic, and we'll see that in a few moments.
And the last piece is you likely have different activities that need to happen. Again, going back to the example we've been following: I might need to charge the credit card, send the email. These are all individual activities. They could be written as separate Azure functions. So the orchestrator is actually going to call all of those activities for you, and it's gonna enable you to decide: do I wanna do these activities in parallel? Do these activities need to be done sequentially? Or maybe I just need at least one of these activities to complete before I continue on. All of that's possible with durable functions. So let's solidify this a little bit and see what the code looks like now that we've seen the components. This is an example right here on my slide of an orchestration function. So this is the orchestrator function, the heart. Some function is gonna start this orchestration, and you'll see here I have some other functions that I have written that I'm going to be calling. So the first step, I'm going to call function F1. Once that's completed, I'm going to be calling function F2, and I'm even going to pass it some data: I'm gonna pass it the result of my first function and chain that to my second function. And then finally I'm gonna call function F3 once that function's done, and I'm gonna pass it the result of that. So I'm starting with F1 and I'm passing through sequentially the data from all of my different calls, until finally I'm going to return the result. So here, in a very logical way, in .NET, I've described this chaining and passing data between different functions. And what's nice here, I mentioned this before, I have nice familiar concepts like a try-catch block, so that I can catch exceptions if they happen and decide what I wanna do, where the exception occurred, and what the type of exception was, to have some resiliency in this as well.
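The chaining orchestrator described on the slide looks roughly like this; it's reconstructed from the description above (F1, F2, F3 as named on the slide), and the "Cleanup" activity in the catch block is a hypothetical addition just to show where compensation logic would go.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class Chaining
{
    [FunctionName("ChainingOrchestrator")]
    public static async Task<object> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        try
        {
            // Each await durably checkpoints before the next activity runs,
            // so the chain survives restarts and long gaps between steps.
            var x = await context.CallActivityAsync<object>("F1", null);
            var y = await context.CallActivityAsync<object>("F2", x);
            return await context.CallActivityAsync<object>("F3", y);
        }
        catch (Exception)
        {
            // A failed activity surfaces here as an exception; compensate,
            // then rethrow (or swallow, depending on your error policy).
            await context.CallActivityAsync("Cleanup", null); // hypothetical
            throw;
        }
    }
}
```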
So the other pieces, again kind of mapping it back to the earlier diagram: this trigger here is saying, hey, this is an orchestration trigger, and the DurableOrchestrationContext type will have the context of the orchestration. I'm gonna be calling my three activity functions and finally returning those results back. Now I'm actually gonna go out of order here, because I wanna do this one first. So before I do a quick demo, I do wanna show you one important aspect, which is how durable functions actually behaves. So if you look at this example again, this is a very simple function, and the next one we're gonna look at is very similar to it: call function one, then function two, then function three. It's very possible that calling all three of these functions might take a very long time. So we have to do some work in the framework to make sure that this is durable, that you have at-least-once guaranteed execution, and that everything executes in the order that you expect it to. Now in order to do that, we're using a thing called event sourcing, and I'm gonna show you through an example what's happening behind the scenes. So before I do, I have a very simple orchestration here just to illustrate the point, because this is very critical to understand. Honestly, if there's anything you gain from this talk, if you understand this slide, you will be much more successful in writing durable functions. So I have here an orchestration, some sample code up here. I'm going to create a new list, it's a list of strings, and all I'm gonna do is add to that list the result of a call to a function, okay? So I'm gonna call a function, and the function's name is SayHello, and I'm gonna pass it the value ".NET Conf", okay? That's all I'm doing. I'm calling a function, SayHello. I'm passing it the value ".NET Conf". Whatever it returns back, which is gonna be a string, I'm gonna add that to my list, okay?
So very simple: all I wanna do is call this function and stick the output, which is a string, into my list. Now how do we actually do that from the framework perspective? So what happens is we have this other component here that I haven't mentioned. This is our execution history. This is the state. This piece is how we store and know how far in the process your execution has gone. So the execution history here, and I'll show you through an example in a second, is Azure Storage. And that execution history is gonna be very important to how this works. So the first thing that happens: someone calls the orchestration. Our starter function gets the request and it says, hey, somebody wants this orchestration to run. So now what's going to happen is our orchestrator function's going to wake up and it's going to start at the very top of the code. And it's gonna say, hey, I need to create a new list. So it's gonna create a new list. And then it's gonna come down to this next line and it's gonna see that there's an await here. And it's gonna see the await is to call another activity. Now what happens here is instead of just automatically calling the activity function, the orchestrator function's actually gonna go to the execution history. It's gonna look, and I've visualized it here with this log, it's gonna look in the logs of the state of this function. And it's gonna say, hey, have I already done this? Did I already tell SayHello to say hello to .NET Conf? Did I already do this piece? And in this case, the execution history is gonna be like, no, you haven't done this yet. And so the orchestrator is gonna be like, cool, that means I need to do it. So it's gonna queue some work. It's gonna say, hey, SayHello function, whenever you get a second, you need to say hello to .NET Conf. And then the orchestrator function completes. It just goes away. It's done with its job for now. This is how it can work in a serverless way.
That orchestrator function, after it schedules the work, is now scaled to zero in this case. So you're actually not getting charged for that orchestration piece anymore. It's gone away. And instead what's happened is our activity function wakes up. And it's like, oh, hey, I have some work to do. And it looks at its work and it's like, hey, I need to say hello to .NET Conf. So it says hello: Hello .NET Conf. And it goes ahead and it updates the execution history. So it's like, hey, I did my job. So now what happens, and this is the most important part: the orchestrator knew it needed to get some work done, the activity function did the work, and it let the orchestrator know that it's finished with its work. Now, what happens when the orchestrator wakes back up is extremely important to understand when you're writing and debugging these. The orchestrator function is not going to pick up from where it left off. In fact, what's going to happen is it's going to start from the very top of the orchestrator again when it wakes back up. So it starts at the top. It does that first line of code: hey, I need to have a new list of strings. It goes to the second thing, it sees that await keyword, and it sees that it's supposed to await an activity. And this time it's gonna go to the execution history and say, hey, have I already done this thing? And in this case, the execution history is like, yeah, the SayHello function already said hello to .NET Conf. You've done this part. So now the orchestrator is like, cool, let me go on to the next step. And it's gonna go ahead and continue on with its execution, which in this case is just returning that list of one. But you can imagine here, if I had other calls that needed to happen, at that point our orchestrator would continue on to the next call, schedule the work, go back to sleep, wake up, continue on to the next call, schedule some work, go back to sleep.
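The schedule-sleep-replay loop described above can be simulated without any framework at all. The toy program below stands in for the real durable functions machinery: the "execution history" is just a dictionary, and each time the orchestrator "wakes up" it re-runs from the top, where each awaited step either finds a recorded result (replay) or schedules work and stops. This is my own illustrative sketch, not durable functions code.

```csharp
using System;
using System.Collections.Generic;

class ReplaySimulation
{
    // Stand-in for the execution history table in Azure Storage.
    static readonly Dictionary<string, string> History =
        new Dictionary<string, string>();

    // One "wake-up" of the orchestrator. Returns true if it ran to completion.
    static bool RunOrchestratorOnce(List<string> outputs)
    {
        outputs.Clear(); // the orchestrator always restarts from the very top

        if (!History.TryGetValue("SayHello:.NET Conf", out var result))
        {
            // Not in history yet: "schedule" the activity, then go back to
            // sleep. (Here the activity runs inline for simplicity.)
            History["SayHello:.NET Conf"] = "Hello .NET Conf!";
            return false;
        }

        outputs.Add(result); // replay: reuse the recorded result, don't re-run
        return true;
    }

    static void Main()
    {
        var outputs = new List<string>();
        int wakeUps = 0;
        while (!RunOrchestratorOnce(outputs)) wakeUps++;
        Console.WriteLine($"Done after {wakeUps + 1} wake-ups: {outputs[0]}");
    }
}
```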
And this can happen with some really complex patterns too. Like when we do this fan-out, fan-in pattern, that might schedule thousands of work items that need to complete before the orchestrator can wake back up. So the main reason I wanted to show this illustration, which is awesome, by the way (this is my favorite slide of all the slides for functions, written by one of our developers, Katie), is that it's important to understand that this is using replay through event sourcing to be able to manage these long-running and stateful tasks that might be happening in a completely distributed way. It's very possible that the activity function happened on a completely different virtual machine, a completely different node, than your orchestrator function, but it doesn't really matter, because you're just scheduling this work across them. All right, so let's see what this looks like from a development experience. We've talked about the concepts and how this is powered, but I did want to give you a feel for what it's like to write one of these. So I'm here in Visual Studio, 2017 in this case, and Andrew showed some of this to you before, about how I can go ahead here and create a brand new Azure Functions project, okay? So I'm just gonna create this and we'll say hello, durable functions, just a simple solution. So using the cloud workload, I'm gonna say, all right, I want a new Azure Functions project. There's a few quick-start templates up here. In fact, we should add one for durable to this top list, but I'm gonna go ahead and select just an empty project for now. Now, this is an important piece to note. It's asking me where I want to keep my state while I develop locally, and I'm just gonna use the Azure Storage Emulator. But if I wanted to, I could connect this to an actual storage account in the cloud; I just don't need to right now, okay? So I want a new Azure Function, and I want the state to be the storage emulator.
And now let's go ahead and add a new function to this project, okay? So this is pretty empty now. I've got my local settings, which is telling me that I'm using my development storage. There's not a lot here. So let's go ahead and add, there it is, new Azure Function. And we'll just leave this called function one, that's fine. And here, when I add a new function, there's actually this really handy template for getting started, which is, hey, I want to use the template for creating a durable functions orchestration, okay? So let's go ahead and choose that template. Now this is going to add to this function app my durable functions orchestration. And this is just a really nice hello durable function sample, okay? So let me break down the three parts. I'm actually gonna start at the bottom here. So the first part, this is our starter function. In this case, I'm exposing an HTTP trigger. I'm gonna get a request, and it's either gonna be an HTTP GET or an HTTP POST; both are acceptable. And all the code is really saying is, hey, right away, go start a new instance of, in this case it's called function one, okay? So go start a new instance of the function one orchestrator. It's gonna create a log and it's gonna return back some status endpoints, okay? So my starter function gets an HTTP request, says go start an orchestration called function one, and returns back its response. Now that brings us here to our orchestration. Here's our orchestrator, called RunOrchestrator; it's actually known as function one to the functions runtime, okay? And this is very similar to the example we saw. It's going to create a new list of strings for the outputs. It's gonna call three, well, it's gonna call the same function three different times. It's going to call function one hello and pass in Tokyo, function one hello and pass in Seattle, function one hello and pass in London.
And it's gonna add each of those outputs to the list, and then it's eventually going to return that whole list. And you can see here from the comment, we expect when it's done, it's gonna have hello Tokyo, hello Seattle, hello London. Now the last piece, what is function one hello? Well, that's our third Azure function right here, okay? So this is function one hello. It's just gonna create a log and then return back hello name using some beautiful string interpolation, my favorite feature of C#. All right, so we'll return those outputs. Hopefully that's simple enough. Those are all the components: my starter function gets an HTTP request and starts the orchestration, which is gonna call the hello function three times. Now, one of the awesome things about Azure Functions, but specifically durable functions, is I can actually run and debug this all locally. So let's go ahead right here and set a breakpoint on this first call to function one. Actually, I'm gonna do it on my second call; I think the data will be more interesting. Now, before I run this, I wanna open up the Azure Storage Explorer. Andrew showed this off a little bit in the previous session, if you saw it. But this is just a really nice way for me to look at my storage account. And in this case, I'm connected to my storage emulator, or a misspelled version of it according to my typo. And you can see here, I have a totally empty storage account. So there's nothing that's stored currently. All right, so let's check this out. Let's click here and run this function. It's going to spin up the Azure Functions runtime here inside of Visual Studio. This is the exact same runtime that will be running once I publish my function to the cloud. It's gonna spin up and it's gonna see that I've written these few different pieces. And you'll see here in just a second, it should actually give us this endpoint, okay? So it logged a few things. It's like, yep, I found all your functions, it looks good.
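The three parts being walked through are roughly the code the Visual Studio durable functions template generates; this sketch is paraphrased from memory of that template (v1 API), so the exact generated names and usings may differ slightly.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class Function1
{
    // The orchestrator: calls the same activity three times, sequentially.
    [FunctionName("Function1")]
    public static async Task<List<string>> RunOrchestrator(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var outputs = new List<string>();
        outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "Tokyo"));
        outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "Seattle"));
        outputs.Add(await context.CallActivityAsync<string>("Function1_Hello", "London"));
        // Expected: ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
        return outputs;
    }

    // The activity: logs and returns a greeting via string interpolation.
    [FunctionName("Function1_Hello")]
    public static string SayHello([ActivityTrigger] string name, ILogger log)
    {
        log.LogInformation($"Saying hello to {name}.");
        return $"Hello {name}!";
    }

    // The starter: an HTTP endpoint (GET or POST) that kicks off the
    // orchestration and returns status-check endpoints to the caller.
    [FunctionName("Function1_HttpStart")]
    public static async Task<HttpResponseMessage> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient starter,
        ILogger log)
    {
        string instanceId = await starter.StartNewAsync("Function1", null);
        log.LogInformation($"Started orchestration with ID = '{instanceId}'.");
        return starter.CreateCheckStatusResponse(req, instanceId);
    }
}
```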
And it's actually gonna give us this nice endpoint. This is our starter function. So here, all locally, let's go ahead and call this starter function. So let me copy that URL. We'll open up Postman, which is a really convenient HTTP client for debugging. Thought I had it open before, sorry about that. So here we come in. Let's go ahead and make a request to that URL, okay? So that kicked off our starter function. Our starter function at this point should wake up the orchestrator. The orchestrator called our first function. And you'll see right here, I actually hit a breakpoint. And I hit this breakpoint right before it called the step Hello Seattle. Now I wanna keep it on this breakpoint because I wanna show you something. Let's come back over here to our storage account and refresh it, because we should see that a few things got created. The first thing that got created is in our table storage. I'll zoom in here a little bit so you can see a little bit better. The Durable Functions framework actually created some tables to store things. One of them is a table that keeps track of all of the instances and their status. And the other one is that Execution History, so that the durable function knows what it's doing and when. So let's go ahead and open up that Execution History, and this should look very familiar from that last slide. And in fact, I can see here — and usually when you're running in production I don't really recommend that you dig in here to your tables to try to debug it, it can get very messy — but for the purpose of understanding you can see here all of the different pieces that are happening. So here the execution went ahead and started. It scheduled some tasks to be completed. You can see orchestration completed. In fact, if I step through a little bit more and let the storage update, oops, not that one. Let's go ahead and continue this call across and let it say hello to everything. Oh, that's right, that was one of the replays. 
Let's come back here and refresh this, and you'll see here that it actually scheduled one of our tasks, and the task Function1 completed, and all of these things are being updated, and Function1's result, way over here — you can actually see the result was hello Tokyo. So all of this state is being stored in the Execution History. And you'll notice I actually hit my breakpoint again because it's replaying itself and regenerating the state, and it's replaying again and regenerating the state, and it's replaying again and regenerating the state, until finally it's replayed all of the times and it's called all of the tasks it needs to, at which point it can return back to our function that called it, okay? So I just wanted to show you the development experience, but also show you there is no magic here. This is the same stuff that I showed in my other slide. We have this table store that's being used for all of our state. You'll see there are actually a number of queues that are created so that it can coordinate the work that needs to be performed. The orchestrator has its queue, the activity functions have their queues — I'm not gonna go a whole lot into that today — and there are even blobs used to hold leases. So all of this is locked and managed and stateful, but again, I didn't have to write code for all of this awesome state management. The Durable Functions framework just gave it to me all for free, okay? So hopefully that is a good hello functions demo. The only other thing, and you saw this in the last slide: if I wanted to make this a little bit more error resilient, I could add things like a try/catch block and just put try/catch right over here. This is totally valid, right? Just my regular old C#, it's all good, okay? I could do things like an await-all, and you'll see that in just a second. All right, so that's our first hello durable function demo. 
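Beyond a plain try/catch, the framework also has a built-in retry helper for activities. A quick sketch, assuming the 1.x API surface — the activity name `ChargeCreditCard` and the `order` payload here are made up for illustration:

```csharp
// Instead of context.CallActivityAsync(...), ask Durable Functions to retry
// the activity automatically with back-off before surfacing the failure
// to the orchestrator's try/catch.
var retryOptions = new RetryOptions(
    firstRetryInterval: TimeSpan.FromSeconds(5),
    maxNumberOfAttempts: 3);

await context.CallActivityWithRetryAsync("ChargeCreditCard", retryOptions, order);
```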
The last thing I'll show you: once this is working how I want it to locally, I can just simply come in here and publish it to the cloud, and once I publish this as an Azure function, now I could call this orchestration at any time and this will be able to run and scale completely serverlessly, either in a new or an existing app. All right, let's come back here now that we've seen that and talk about some of these constraints. Most of these constraints stem from the fact that this is using event sourcing with replays to work. So let's check out some of these constraints. Orchestrator code must be deterministic. And when you understand how durable functions works, this makes a lot of sense. If in my orchestration logic I say get the current time and generate a random number, and based on what that random number is and what the current time is, do this or that — the problem with that approach is every time that orchestration replays, the result might be different, which means I might start replaying down different code paths, things are gonna get corrupt, and they're gonna blow up into a massive fire eruption. Don't do it, okay? So just make sure your code is deterministic. So a few things: don't use random numbers, don't get the current date time, don't generate new GUIDs. Also don't do I/O directly in an orchestrator. Don't go read files or pull in files directly from the orchestrator. And I love this one — this is my favorite bullet point of the slide. Don't write infinite loops. So let me spoil it with this one: writing infinite loops is really never a great idea. Okay, almost never. All right, so the good news is we have workarounds for all of these constraints, because there might be scenarios where I care about what the current time is. There might be scenarios where I need to read in file data, and there might be scenarios where I actually want this thing to run indefinitely. 
So in some ways I do want it to loop indefinitely. So we have some recommended workarounds. The first one, for random numbers and date time: on that orchestration context that's passed into the orchestrator, we actually provide some helpers, like getting the current time. Now what's cool about this is that we'll get the current time, but we'll do it in a way so that when we replay, we make sure that we re-get the same time that we got, if that makes sense. So you won't get a different time every time it replays. You'll get the same time from the first time it ran. So we get some nice helpers there. Do your I/O in activity functions. You can do whatever you want in an activity function. Activity functions have no constraints on what they can do or how they can be run, random numbers or not. So if you need to move things into your activity function, that's totally cool. And then if you do need something to loop indefinitely, excuse me, we have this concept called ContinueAsNew. Now the main reason looping indefinitely can get tricky in Durable is because, if you think about that execution history, if that execution history has been repeating and re-looping for days, your history is gonna be massive. And every time it tries to replay, it's gonna have to pull that apart and redo all the loop iterations. So there's this concept called ContinueAsNew which will take a snapshot and start the instance over from the beginning, but it's not gonna have to replay every single iteration of the infinite loop. So we've got some good workarounds here for you, but it's important to know as well some of those constraints. All right, so let's show one more demo. Now this one is a little bit different. What I have here, and I've teased this one already — this is a website. It's actually powered by serverless functions. This was created by one of the cloud developer advocates, Sarah Drasner, who is a rock star. I will say, I'll take every excuse I can to say she's a rock star. 
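Those three workarounds — replay-safe time, pushing non-deterministic work into activities, and ContinueAsNew — fit together into the "eternal orchestration" pattern. A hedged sketch (1.x-style types; the `DoCleanup` activity name is made up):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class EternalCleanup
{
    [FunctionName("EternalCleanup")]
    public static async Task Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // Replay-safe time: returns the same value at this point in the
        // history on every replay, instead of DateTime.UtcNow
        DateTime now = context.CurrentUtcDateTime;

        // Anything non-deterministic (I/O, Random, Guid.NewGuid) lives
        // in an activity, where there are no constraints
        await context.CallActivityAsync("DoCleanup", null);

        // A durable timer instead of Thread.Sleep — no compute runs
        // while the orchestration is waiting
        await context.CreateTimer(now.AddHours(1), CancellationToken.None);

        // Restart with a fresh, truncated history instead of an infinite
        // loop that would grow the execution history forever
        context.ContinueAsNew(null);
    }
}
```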
So I could do something here like add a few items to a shopping cart, go ahead and check out of the shopping cart here too. Let me just type in my very real credit card number being broadcast for the world. And in this case it's actually calling an Azure function to process that payment request, okay? Yay, I just bought however much I just bought, I didn't even look, probably like $200 worth of stuff. Now the problem that I wanna solve has to do with this store, Shoppity. And that's that the business comes to me and they say, hey Jeff, you're so productive. You're using all this awesome .NET stuff. You're using Visual Studio and serverless. You can do anything, we know you can. Now my problem is I want to have a report. I need to know every seven days what's the current status of all of our orders, okay? And we're a really popular store. We might have thousands of orders that have occurred in the last seven days. So I need you to give me a report so that you can look at all of the orders that have come in in the last seven days and tell me what's the current shipping status. How many of those orders have been delivered? How many of those orders are being processed? How many of those orders are in transit? Now I love serverless and I'm like, I want this to be super efficient and scalable and not worry about infrastructure. But this can be a hard problem to solve in a serverless way. Cause if you look at the diagram here of what I actually need to happen, I have my orders database, which for Shoppity is Cosmos DB. And I'm gonna go say, give me the last seven days of orders. And there might be 10,000 orders there, 1,000 orders, I don't know. It's gonna be indeterminate. It's gonna change every time I run the report. And what I need to do is for each individual order, for each order, I need to go call some shipping API. Maybe it's the FedEx API or the UPS API. I don't know, it could be anything. And that API is not super fast. 
It might take a second or two for every single order. So how do I do this? Like if I just try to write a single C# application, it's very possible that this thing's gonna take two hours to run. Like it's gotta make all of these individual calls. I can't just write that as an Azure function and publish it to the cloud as is. So that's where I'm gonna bring in durable functions to help orchestrate these different calls, parallelize the work and do it all for me. So I'm gonna show you what that solution looks like. This is also on GitHub if you wanna check it out. Let's open, where's my open recent? Recent right here. I'm crazy. Right here. No, my name is not Corey. Everything here just happens to be named Corey for undisclosed reasons. All right, so here is that report generator as a durable function. So I'm actually gonna start here from the bottom again to peel this apart. I have here my starter function. Same as before, I'm gonna get an HTTP request that says, hey, go generate for me the report, okay? That starter function's gonna call the durable generate report orchestrator. So it's gonna say, hey, go start this orchestration. That's all this code right here. I'm gonna create a new list of tasks and I'm gonna create a new list of order totals, because I'm gonna need to calculate some totals at the end of the day. Now the first thing I'm gonna call is this Azure function called durable get transactions, where I'm gonna say, go get me the last set of transactions. So it's a simple Azure function. You could actually see the thing down here. It's gonna call Cosmos DB and it's going to ask it for all of the most recent Stripe charges. It's using the Stripe API to make these charges. Really simple Azure function. Just go get me the last seven days of Stripe charges. Now once I get all those charges, this is where the magic happens. I'm gonna say for each one of those Stripe transactions, go get the current order process. But I'm actually not awaiting it right here. 
I'm just adding that task to my tasks list. And what's cool here is that I can say, after I add all those tasks, wait for all of them to complete. And here's where I put my await. Now what this is going to do, from a durable functions standpoint, is parallelize all of those different calls. It's gonna allow all of these activities to potentially scale out across multiple nodes, wait for all of them to complete, and then when all have completed, I have this simple LINQ statement here which is gonna generate for me a nice summary report. This is really cool, that I'm doing a lot of heavy lifting but I'm writing it completely in Azure Functions. This is all gonna run serverlessly, but it could be doing some heavy lifting over a really long period of time, and this is how it's represented: a beautiful C# app. It's awesome. It's very exciting. It's really cool stuff. So let's see it in action. So let's come back here to our store and I've got here the function that I want to call. I have the HTTP endpoint right here — HttpStart. So we're gonna kick off that function. Now I didn't call this out before when we were debugging it. You'll notice right away I got back a response. I'm not having to wait two hours for this thing to complete. The durable orchestration's kicked off. I'm gonna talk too long, so I'm gonna have to run this thing again because I want you to see it running, but it gives me back some things. So the first one, it gives me the orchestration instance ID, so I can use this to track the orchestration. I get a status endpoint so I can view the current status of the orchestration. I get an endpoint here called sendEventPostUri. Now this is a webhook that I could actually call if I needed to add some data to the orchestration. If I needed to wait for some event, this is the endpoint that I could call to send it that data. So I can raise events and add that to the orchestration while it's waiting for the right event. 
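That fan-out/fan-in shape is worth seeing in code. Here's a hedged sketch of what an orchestrator like this could look like — the activity names (`GetTransactions`, `GetOrderStatus`) and the string payloads are stand-ins, not the actual repo's code:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class GenerateReport
{
    [FunctionName("GenerateReport")]
    public static async Task<Dictionary<string, int>> RunOrchestrator(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        // One activity call to fetch the last seven days of charges
        var charges = await context.CallActivityAsync<List<string>>("GetTransactions", null);

        // Fan out: schedule every status lookup WITHOUT awaiting each one,
        // so they can run in parallel across nodes
        var tasks = new List<Task<string>>();
        foreach (var charge in charges)
        {
            tasks.Add(context.CallActivityAsync<string>("GetOrderStatus", charge));
        }

        // Fan in: a single await for the whole batch
        string[] statuses = await Task.WhenAll(tasks);

        // The "simple LINQ statement" that rolls everything into a summary,
        // e.g. { "shipped": 7, "delivered": 4, "processing": 5, ... }
        return statuses.GroupBy(s => s)
                       .ToDictionary(g => g.Key, g => g.Count());
    }
}
```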
I have a URL here that I could POST to to terminate it, and we have this new URL here which is actually really cool. This will allow me to rewind my orchestration. If something happened and I wanna rewind it, I could call this URL right now. Now I'm actually gonna call this one again because I talked for so long, so I'm gonna get a new instance, and let's follow this status endpoint. No, I don't know, I might need to — I think my cache is not actually calling it again. Okay, that's a different instance. Maybe it is. Maybe this is just too fast. Well, what I wanted to show you before I got confused in Chrome land is, so here's the status endpoint. Now if this was still running, I would get the status "running" and I could keep polling this endpoint until it was completed. But you can see here in this case, it's actually completed, and here is my report out. So I have seven orders that have shipped, four orders that have been delivered, 10 that are other and five that are processing. Now if this was 1,000 orders or 10,000 orders, this would still work. It might just take a little bit longer to parallelize and complete all that work, but the orchestration logic is the same. So really cool that I can do this fanning out and fanning back in. All right, so let's show just a few more things really quickly. The last piece I wanna touch on is, how do you monitor and manage these different durable orchestrations? I'm doing some complex stuff here. What if something goes wrong? How do I make sure that things are behaving how I want them to? So the first one that I would recommend is use Azure Application Insights. We actually prompt you to create this automatically when you create a new function app. And this is gonna be a source that you can use to manage how your functions are behaving and how your instances are working. So I actually wanna show you that here very quickly. This is that store function app that we just called to generate the report. 
I'm gonna go ahead here and open up application insights and I can see here response time statistics and server requests and all these other fun things. Let's come directly into our analytics. Now I'm gonna run a query which will give me the status of one of my instances through Azure application insights. And then I'm gonna show you how I got that query because it's gonna look really complex and then you'll be like, wow, Jeff's such a cheater. He just cheated to get this. So let me zoom in here a little bit. This is the query I'm gonna run. This is called the application insights query language, extremely powerful. I have here my instance ID that I care about. I'm gonna look for all logs within the last two days. And then I have this really long query which pretty much just gets all of the instance stuff for me. It's actually pulled straight from our documentation so no one has to recreate this on their own. But when I run this, you'll actually be able to see here in application insights at the bottom, here is the history of this instance. I can see that I got transactions. It's scheduled that tasks and then completed it. And then it's scheduled a bunch of get the order status tasks. And I can see exactly when those tasks started, when those tasks completed, some of them started and completed in different orders until finally my orchestration itself completed somewhere on the next page of these logs. So really cool here, I get the sequence of exactly what happened, all of this happening in parallel right here in app insights. And again, I know this looks a little overwhelming. If you come over here, this is the Azure Functions documentation. There's a whole section on durable functions. This is incredibly written documentation. It was mostly written by the dev lead for durable functions, Chris Gillum. But right here, I actually have some really helpful queries. So here's the single instance query that I just used. 
I just copied that and pasted it into app insights. There's one that gives you instance summaries. There's more information here on how you can log and add your own logs. All this is right here in docs that I'd encourage you to check out. So that's one really important note. The other one in terms of monitoring and management: there's also an API that your function host will expose to do some instance management. See the current instances, update instances, terminate instances. The last piece, which is important too, and then we can switch to some Q&A: version your durable functions very consciously. You have to remember the whole orchestrator has to be deterministic. It might be replaying. If I made some breaking change to my orchestrator logic and just published it willy-nilly to the cloud, and there was an existing orchestration that was in flight, it's very possible when it starts doing its next replay, it's gonna be pulling in the new version of the code. And as it starts evaluating that code and doing event sourcing, it might be like, oh, whoa, this is totally different, and be in a very corrupt state. Now maybe that's okay, but there are strategies you should be aware of. So there are kind of three big ones our documentation calls out. First one, you can just do nothing. Maybe you're okay with some instances becoming corrupt. Maybe you know that this is very infrequently executed, just deploy as you will. It might not be what you want though. The other one is to wait for your orchestrations to drain. Maybe you have a scenario where you can actually wait and make sure that everything completes before you start the next one. And then you can just publish an update once there are no running instances. But the third one, the recommended way, is actually to do side-by-side deployments. So there's a way in your durable functions project — it's actually from this host.json file that I didn't go into yet — where I can give my Task Hub a unique name. 
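That Task Hub rename is a small host.json tweak. Roughly, it looks like this — note this is the v1 shape of the file, and the hub name here is just an example; on the v2 runtime these settings are nested differently (under an "extensions" section):

```json
{
  "durableTask": {
    "hubName": "GenerateReportV2"
  }
}
```

Bumping the hub name on each breaking deployment gives the new code its own execution histories and queues, so in-flight instances on the old hub keep replaying against the old logic.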
So if I publish a new version of my function, I could actually give it a new name for a new Task Hub, and then it will create new execution histories for new instances, and that Task Hub would be running independent of my other one. As long as the code's still available for the other Task Hub. So you might need to rename some things there too. Again, this is all in docs as well. Okay, so with that, I do wanna make sure I give enough time for Q&A because I know there's been some good questions. So I'm just gonna do one last plug: check out that documentation I previewed to you just a little bit ago. It's really, really good. The other one too, which is cool: durable functions is completely open source. It's actually built on top of another open source technology called the Durable Task Framework. I've got links here to both the Durable Functions extension for Azure Functions and the Durable Task Framework, with our branch for the changes that we've made for Azure Functions. What's awesome about this from an Azure Functions standpoint — I've actually come here to the project — we have a lot of contributors, given how relatively new this is, that are not on the Azure Functions team. There are a lot of people here, and shout-outs to them, who've come in, engaged with us on open issues, opened pull requests, and helped bring this extension and this framework forward to make it accessible for every developer in every single scenario. So I'd encourage you, if you're interested in contributing at all, this is a really good Azure Functions repository to check out. There's a great contributor guide, everything here that you need so that you can jump in if you wanna see some changes. We've got some really cool community additions to this. So with that, I do wanna give some time for questions because I see a few coming in, so we've got a few minutes here. 
So the first one is from our awesome community in Miami: is there any guidance available for testing Azure Functions and the various types of durable functions? Yeah, so the only one I'll say, in addition to the kind of tests I just ran: you can write unit tests for durable functions, and I encourage you to, so that you could use dotnet test and make sure that your unit tests work. It gets a little tricky with durable functions because you might need to mock that orchestration context, but it is possible. There's a doc, I believe in the durable functions docs, that might go into it even a little bit more. So it's something we're aware of, we want to improve, but you can definitely do that today for sure. Okay, so the next question: have there been any thoughts on doing service discovery for serverless functions? So have various function apps, how to know where they are, health checks? It's a good question. Durable functions provides some level of calling other functions in that app, but it's not really at the point where you say like, I want to call the function that resizes images, and that's automatically resolved and directed for you automatically. I guess I would say it is something we've thought about. We do plan to do some work in the future to call functions potentially in different applications. There are some patterns to do that today, but having a central event or a central service repository where you could kind of register services — it's still a little early, but it's definitely something we've talked about before. Okay, so are there going to be options to set more compute or RAM in the consumption plan? So today a consumption instance is about a gig and a half of RAM and one core of CPU. There is some stuff we're chewing on right now that would enable you to run on premium hardware. I guess I would just say stay tuned. There's some big conferences coming up where we'll have some answers to this. So that's about as non-subtly as I can answer that one. 
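For that unit-testing question, here's a hedged sketch of what mocking the orchestration context could look like — it assumes Moq and xUnit, a version of the extension that exposes a mockable base type like `DurableOrchestrationContextBase`, and an orchestrator written to accept that base type; none of those specifics come from the talk:

```csharp
using System.Threading.Tasks;
using Moq;
using Xunit;

public class OrchestratorTests
{
    [Fact]
    public async Task Orchestrator_returns_three_greetings()
    {
        // Mock each activity call the orchestrator is expected to make
        var context = new Mock<DurableOrchestrationContextBase>();
        context.Setup(c => c.CallActivityAsync<string>("Function1_Hello", "Tokyo"))
               .ReturnsAsync("Hello Tokyo!");
        context.Setup(c => c.CallActivityAsync<string>("Function1_Hello", "Seattle"))
               .ReturnsAsync("Hello Seattle!");
        context.Setup(c => c.CallActivityAsync<string>("Function1_Hello", "London"))
               .ReturnsAsync("Hello London!");

        // Run the orchestrator function directly, no Functions host required
        var result = await Function1.RunOrchestrator(context.Object);

        Assert.Equal(3, result.Count);
        Assert.Equal("Hello Tokyo!", result[0]);
    }
}
```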
Oh, this is a really great question from Sam, something I meant to go into. Can we use durable functions in Microsoft Flow? And if so, how? So if you're not familiar, Microsoft Flow is another kind of SaaS orchestration technology. It's actually built on top of Azure Logic Apps, which is a visual orchestration tool within Azure. And the answer is definitely yes. In fact, I have a sample here. It's almost like I knew this question would be asked. So I have here an example of what a Logic App looks like. So this is also doing orchestration, but in this case it's a visual designer that you're working with, and behind the scenes it's actually generating this JSON workflow definition, which some people love and some people are like, just give me the .NET stuff. Well, here in this Flow, or Logic App in this case, I'm calling a function. Now this function could be a durable function, and in fact, durable functions work really great with Logic Apps. That pattern that I showed before, where you call it and it immediately sends back a status endpoint and then you have to check that status endpoint till it's done — Logic Apps and Microsoft Flow will actually follow that pattern for you automatically. So if you have a durable function and you use the HTTP action in Microsoft Flow or the Azure Functions action in Logic Apps, everything actually just works. The only change you actually need to make is you need to make sure you're returning a Retry-After header in your starter function, but it all works great. So definitely try that. That's a really good combination to get some of the best of both worlds, like some of those out-of-the-box connectors from Flow and Logic Apps. Okay, we've got a few more minutes here. So: what causes the orchestration function to wake up again after the activity function completes? What if there were multiple activity functions? That's a really good question. 
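That Retry-After change is a one-liner in the starter function. A hedged sketch (1.x-style types again; the ten-second interval is just an illustrative choice):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class HttpStartForLogicApps
{
    [FunctionName("HttpStartForLogicApps")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestMessage req,
        [OrchestrationClient] DurableOrchestrationClient starter)
    {
        string instanceId = await starter.StartNewAsync("GenerateReport", null);

        // The standard 202 response with the status/terminate/raise-event URLs
        var response = starter.CreateCheckStatusResponse(req, instanceId);

        // Logic Apps / Flow look for a Retry-After header to know how often
        // to poll the status endpoint until the orchestration completes
        response.Headers.RetryAfter = new RetryConditionHeaderValue(TimeSpan.FromSeconds(10));
        return response;
    }
}
```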
I might answer this slightly incorrectly, so I invite Chris Gillum or Katie or anyone — I'll probably poke them after this — to correct me. So the orchestration function, from my understanding, has its own queue, and when activity functions are done, they let it know the activity is done, and it's then the orchestration function's job to go and see if it has any work items and to see if the criteria is met for it to wake back up. I don't know exactly how many times it wakes up, like in the fan-out, fan-in example, but it does work and it's very performant. Like there's very low latency there. So I don't fully know all the magic behind the scenes that's making this happen, but it works, and for the most part it's powered by queues — storage queues that wake things up. Okay: can we assume that if an activity is in progress, the orchestrator will see that when it wakes up every time? Also, on the fan-out, will the orchestrator fire all activities at once, or go to sleep and wake up for each line? Yeah, so I think we've kind of answered this throughout both the other questions. The orchestrator function likely won't wake up if there's only a single activity, and it wouldn't wake up until that activity is completed, at least as far as I understand. And for fanning out, if you saw from my log example of Application Insights, you have the ability to do them all in parallel, so it doesn't have to go line by line. You can parallelize a bunch of work with durable functions too. So hopefully we've cleared that one up through some of the other questions that I answered too. So we've got another one here. We've got time for a question or two more. Could this be used for TCP sockets, not WebSockets, as a scalable socket server or client? That's a good question. I'm not familiar enough with TCP sockets and how they're different than WebSockets. I don't know if that's like an HTTP/2 thing or whatever, but I can at least speak to WebSockets. 
I know one of the challenges with a WebSocket is usually it requires a persistent connection. So if I have something like an ASP.NET app that's talking to a SignalR server, that SignalR hub needs to be persistently available. Now durable functions, even though we're able to run in persistent-appearing ways, most of that compute is still very ephemeral, and the orchestrator will wake up and go back to sleep, and activities will wake up and go back to sleep, so there's not a persistent connection there. So it's often not a great fit for those WebSocket scenarios where you require persistence. The one I've actually seen paired most with serverless, as a matter of fact, is actually the new SignalR Service. We actually have SignalR Service bindings for Azure Functions — also community contributed. Those will give you a persistent connection. So that's something to look at as well. And then the last one: can we pass authentication headers and tokens between functions? I definitely know your starter function can have authentication headers passed in. I don't know if there's a way to authenticate to activity functions, but I don't know if it's really required, because they're almost internal calls anyway. Excuse me — but for Azure Functions all up, you can absolutely put authentication in front of your function and say, hey, I need an Azure Active Directory token, or a GitHub, or I'm sorry, a Google token, or a Microsoft account token to authenticate before I kick off an orchestrator. So great questions, everyone. Thank you so much for participating. Hopefully this was helpful. If you have any more questions, feel free to reach out to me or follow some of those links that we shared earlier. I'll pop them up one last time. And I hope you enjoy the rest of your conference today. So thanks so much.