Okay, so, hi everyone. My name is Nikhil Barthwal, and I'll be talking about serverless application development using Azure Functions. The way I've structured this talk, half of it is going to be about serverless in general, because Azure Functions is not the only implementation, it's just one of them. The remaining half will be specific to the Azure Functions implementation, although most of what's true for Azure Functions holds for the other platforms too.

So I'll start with what serverless computing actually is. Serverless computing, or Function as a Service, which is another common name for it, is essentially a code execution model where your logic is a set of discrete functions that run in a stateless manner, operate on input and output triggers, and communicate with each other asynchronously. They can run in Docker, they can run in a container, there are different ways they can run; the whole underlying infrastructure is fully managed by your cloud provider.

So let's go into the details of the serverless application model. Essentially, your application is a set of independent functions, and each function tends to have an input binding, in the sense that it has certain trigger conditions, and an output binding, in the sense of the action it can take. For example, say I have a function that does something every time you check into a repository: maybe it pulls out the code, does some calculations, whatever it is. Its input binding is going to be that repository. Its output binding could be a call to another function, or maybe a message on a message queue; it's actually pretty flexible. We'll see an example of input and output bindings at the end of this section. And when an event happens, it triggers that function: the function runs, takes some action, and produces some output.

The key characteristic of this communication is that it's an asynchronous mechanism. What do I mean by asynchronous? Let's say function F1 calls function F2. It just makes the call; it doesn't really know what F2 does with it. It might get an acknowledgement saying "I received it," but that's about it. Anything F2 does afterwards with that call, or with the message it was delivered, F1 knows nothing about. It has no traceability. And that obviously has its upsides and downsides. On the upside, you can now have massive parallelization, because you can send off multiple messages. On the downside, you have no traceability, so if something goes wrong, you don't know what happened.

So those are some of the characteristics of a serverless application. The first thing, and this is actually a common misconception, is that serverless is an abstraction of servers. The name "serverless" is kind of misleading, because it sort of suggests there are no servers. Obviously there are servers; you're running on servers. What serverless means is that they are abstracted away for you, so you don't have to manage them. That's what serverless computing is. It's event driven, an asynchronous model: an event happens, you take some action, generate some output. And you only pay for what you use.
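Here's the input and output binding example I promised, as a minimal sketch. I'm assuming the Azure Functions Python programming model (the v2 decorator style) here; the queue names and the `AzureWebJobsStorage` connection setting are placeholders, not anything from a real app.

```python
# Minimal sketch: one function with an input trigger and an output
# binding, assuming the Azure Functions Python (v2) programming model.
# Queue names and the connection setting are placeholders.
import azure.functions as func

app = func.FunctionApp()

@app.queue_trigger(arg_name="msg", queue_name="incoming-items",
                   connection="AzureWebJobsStorage")
@app.queue_output(arg_name="out", queue_name="processed-items",
                  connection="AzureWebJobsStorage")
def process_item(msg: func.QueueMessage, out: func.Out[str]) -> None:
    # Input binding: the platform wakes this function up when a message
    # arrives on "incoming-items"; there is no polling loop in our code.
    body = msg.get_body().decode("utf-8")

    # ... some small, quick piece of work ...
    result = body.upper()

    # Output binding: setting the value enqueues a message on
    # "processed-items". We never learn who consumes it.
    out.set(result)
```

The platform invokes the function when a message arrives on the input queue, and writing to the output binding is fire-and-forget: exactly the asynchronous, no-traceability behavior described above.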
And this is actually one of the major advantages of serverless architecture: it's cost efficient, as opposed to a traditional VM that's running constantly, where you're paying for it even when you're not actually using it. For example, one of the common applications of serverless is IoT. In IoT, you collect some data, you send that data somewhere, it takes some action. Now, if you don't implement that as serverless but as a traditional model, you have a VM running 24/7, and obviously it's costing you money. You might actually use it for twenty seconds a day, but you're going to pay for 24 hours. Whereas with serverless, if you use it for twenty seconds, you pay for twenty seconds. (I'll put some rough numbers on this at the end of this section.)

Obviously, the downside is that the function gets provisioned and deprovisioned, whereas a VM is running all the time, which means the VM's responses are going to be very fast because it's already up. With a function, if the function has been idle and switched off or put into a dormant mode, every time you get a trigger it has to wake up first. So there's a certain lag; it's called the cold start problem. I'm going to discuss more about cold start and how you can avoid it, but that's the downside. The key point is that the whole serverless architecture is an event-driven, pay-as-you-go model, whereas with, say, a traditional microservice implemented in Docker, you're just paying 24/7; there's no concept of paying only for what you use.

So let's talk about these functions. A particular characteristic of these functions is that they are stateless. The reason you want them to be stateless is so that they can be scaled up and scaled down; state can't be scaled up and down like that. You do have persistent storage, like a database, that these functions can read from and write to, but the functions themselves do not carry persistent state. That's the characteristic: functions have to be stateless, and that's what makes them easily scalable.

Like I said, each function has some kind of input and output binding: an event gets triggered, the function acts. And what you'll often find is that there are limits on the execution time of these functions. This is cloud dependent: I think Google puts it at nine minutes, Azure at ten minutes, AWS at five minutes. There are also variations in how you can provision these functions; there are different hosting plans. Some hosting plans don't put a limit but charge you more; some do put a limit but are cheap. So you have to determine what the right plan is for your application. The reason there's a limit at all is that functions are supposed to be small units that work with each other. You should not have what I'd call a monolithic type of application, in the sense of big, bulky functions that do massive data processing. If you want to do that, serverless is not the right model. Serverless requires small functions; you can have many functions that work together, but each function individually should be small and should act quickly.

So what are the advantages of a serverless application? To start with, every function is independent of the others.
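Quick detour before the advantages: here are the rough numbers I promised on that IoT example. This is back-of-envelope math with entirely hypothetical prices, just to show the shape of the trade-off; check your provider's actual rates.

```python
# Always-on VM vs. pay-per-use functions, with hypothetical prices.
vm_price_per_hour = 0.05          # hypothetical always-on VM rate
fn_price_per_second = 0.00001     # hypothetical per-execution-second rate

busy_seconds_per_day = 20         # the IoT example: ~20 s of real work a day

vm_cost = 24 * vm_price_per_hour                      # bills all 86,400 s
fn_cost = busy_seconds_per_day * fn_price_per_second  # bills only 20 s

print(f"VM, per day:        ${vm_cost:.4f}")
print(f"Functions, per day: ${fn_cost:.4f}")
# The VM bills for the full day regardless of load; the function bills
# only for the seconds it actually ran.
```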
And when these functions are independent of each other, they are independently scalable. What do I mean by independently scalable? Look at the flip side: say you have a monolith. The monolith consists of different parts of code; it's one big application. When you scale up this monolith, the entire application gets scaled up, even the parts you're not using heavily, because the whole application is treated as one unit, one unit of scale. Whereas with serverless, each function is an independent unit. So if function F1 calls F2 and F2 calls F3, I could have ten instances of F1, five instances of F2, and three instances of F3. And the system determines this: if F1 is loaded, it will start scaling F1 up. What does that mean for you? More cost-effectiveness, because you're only scaling up the parts of your system that are overloaded, not the parts that aren't; you consume fewer resources than you would scaling up everything. Fewer resources means less money you have to pay the vendor. You also get better scaling, because if one part of the system is overloaded, you scale up just that part without having to scale up everything.

The second big advantage is technology heterogeneity. What is technology heterogeneity? I have these functions, and they communicate with each other through language-agnostic means, the open standards of the internet, HTTP or whatever. So there's no limitation forcing one function to use one specific set of technology and another function the same set. I could have one function in Java call another function in Python, which calls another function in C#. I can do that. This is an advantage in the microservices world too, but with a traditional monolithic application you can't do it: you have to build the whole unit with one particular technology, one particular platform, one particular set of frameworks. Here you have the flexibility.

And it's not just about flexibility of implementation; there's actually a lot more to it, and in a way we all face this problem time and again: what if your technology stack has to be rewritten? What if your technology gets outdated? If you have a big monolithic application and you want to move to a new language, a new technology, a new version, then because it's one unit as a whole, you have to rewrite the whole application. With functions, you can do it part by part, so there's less migration risk. And you see this all the time. One good example was Facebook. When Facebook started, it started as a college platform; it wasn't designed to be the company it is today. And it used PHP. The problem with PHP was that it didn't scale, but that wasn't a concern when it was designed. So they created this monolithic PHP code base, and then the company scaled and scaled and scaled, and the problem was that PHP did not scale. What do you do? You could rewrite the whole system: a lot of time, a lot of money, a lot of risk. Or you could make PHP faster. They took the second approach: PHP as it is does not work, so we'll just make PHP faster. Before I continue with the Facebook story, here's a quick sketch of what such a cross-language call looks like in practice.
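This is a plain-Python sketch using only the standard library. The URL is a hypothetical endpoint, not a real service; the callee could be written in any language, because the caller only ever sees HTTP and JSON.

```python
# Technology heterogeneity in miniature: this (Python) function calls
# another function over plain HTTP + JSON. The callee could be Java,
# C#, anything; the caller only sees the wire format.
import json
import urllib.request

def call_next_function(payload: dict) -> None:
    req = urllib.request.Request(
        "https://example-app.azurewebsites.net/api/f2",  # hypothetical
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Fire-and-forget in spirit: we may get an "accepted" status back,
    # but we learn nothing about what F2 ultimately does with it.
    with urllib.request.urlopen(req) as resp:
        resp.read()
```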
So, continuing: they created this language called Hack, which is kind of a superset of PHP, and then they came up with the HipHop Virtual Machine (HHVM). The whole point is that whatever technology you're working with is going to get outdated sooner or later, and rewriting a whole application from scratch is a really big risk. When you have applications built as Azure Functions, or even as microservices, implemented as a set of modules, an assembly of services rather than one big service, the migration from one set of technology to another becomes really easy, because now you have a lot less risk.

You also have less management overhead. Why? Because you don't have servers to manage. In a way this is an advantage, but in some ways it's a disadvantage: because you're not managing those servers, you can't really optimize them or do anything with them. You have to just accept whatever your cloud vendor is doing. So in some sense there's a disadvantage too, but in most cases I would consider it an advantage, because it frees you of a lot of burden and lets you just focus on your application. And of course, as we discussed, it's cost effective: you only pay for what you use. And this is a major reason why serverless is getting so popular.

When you look at the evolution of all this, and Gregor Hohpe actually talked about this in his keynote this morning, it's that you make people productive by raising the level of abstraction. You have the infrastructure; then you say, fine, I'll give you the platform; then, okay, fine, I'll build you the whole software. Function as a Service is just another layer on top of that. It's a natural evolution; you're going in the direction of increasing abstraction.

So what are the disadvantages? Well, to start with, you have a distributed system, and a distributed system comes with its own problems. You have failures: the more moving parts your system has, the greater the chance of something failing. You also have less visibility. I mean, you have logging and all those things, but in a monolithic world, if function F1 calls function F2 and F2 throws an exception, you get the exception, you can handle it, and you get a nice call stack with that exception showing exactly what happened. But in a distributed, asynchronous world, F1 calls F2 and you don't know what happened; the call just goes into the ether. So that reduces the visibility of the system, and you need additional mechanisms, logging and other tools, to actually monitor everything. In general, a distributed system always increases complexity, and in serverless that's exactly what you're doing: you're distributing your system into a set of functions.

Also, functions have contracts. What I mean by contracts is that they expect data to be in a certain format. If I send you a request with, say, a JSON payload, that JSON payload has to conform to a schema. Now, in a monolithic world, the schema is enforced by your language: if function F1 expects one data structure and function F2 expects another, and F1 calls F2 with a mismatch, you actually get an error at compile time, a type mismatch, at least in statically typed languages. You don't have anything like that here, because F1 and F2 are independent of each other.
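To make that concrete, here's a small hand-rolled sketch, with a hypothetical two-field contract, of the only place the mismatch can surface: at runtime, when the message arrives. In practice you'd more likely use a schema library or generated stubs (more on that near the end of the talk).

```python
# The receiving function checks the payload on arrival, because that is
# the earliest a contract mismatch can possibly surface. The contract
# itself ("order_id", "amount") is hypothetical.
EXPECTED_FIELDS = {"order_id": str, "amount": float}

def validate_payload(payload: dict) -> None:
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in payload:
            raise ValueError(f"contract violation: missing field '{field}'")
        if not isinstance(payload[field], expected_type):
            raise ValueError(
                f"contract violation: '{field}' should be "
                f"{expected_type.__name__}, got {type(payload[field]).__name__}"
            )
    # Failing fast here gives a clear error at the boundary, instead of
    # a confusing failure somewhere further downstream.
```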
So if you have a contract mismatch, it becomes a kind of runtime error; you have no way of finding it at compile time. There are ways you can mitigate it, and again, I'm going to talk about that, but essentially the interface management between these functions gets difficult because of the distributed nature of the system. There's latency: F1 calls F2, you're passing through a network, and the network has its own latency. There's queueing: being asynchronous in nature, the same function can receive multiple requests from multiple callers, and they queue up in the order they get processed. So latency gets introduced into the system. And then finally, it's difficult to debug. Why? I don't have a call stack; I don't have the advantage of a nice call stack, and I don't have the advantage of enforcing those contracts or interfaces at compile time. Everything now becomes a runtime concern. It's a problem.

So when you look at the cloud as a whole, the promise of the cloud, what you're really doing is taking a lot of issues, sizing the servers, monitoring, all those things, and leaving them to the vendor. And what the cloud really does is convert your capital cost into an operational cost. Now, what do I mean by capital cost versus operational cost? Let's say I want to run an application. I have to buy land, build a data center, or at least have some kind of warehouse where I put servers. There's a lot of upfront investment I'm making. When I go to a vendor like Amazon, I don't have to worry about all those things, so my upfront investment gets translated into an operational cost that I incur as I go. A different but common example is buying a house versus renting one. When you buy a house, you put in a big investment; it's cheaper in the long run, over a 20 or 30 year period, because you own the house. But if you don't know how long you're going to stay, you've just moved to a town, you've got a new job, you want to be able to scale up and scale down, then you don't want to make a big investment, so you rent. Renting gives you the flexibility to extend or shorten your stay. In a way, the cloud is doing the same thing: your capital cost, your upfront investment, gets translated into an operational cost.

And so many of those issues, sizing the servers, monitoring, security, you essentially get handled for you if you go to a trusted vendor. It's actually very strange that some people are afraid of moving to the cloud because they don't trust the vendor and are afraid of their data being stolen. The truth is, the typical vendors, Amazon, Microsoft, spend billions of dollars on security. So the chances are their security is going to be better than yours, because it's what they do; it's what they specialize in. If you're, say, an insurance company, or some traditional oil and gas company doing some data processing, security is not your domain. So I don't know why people have this reluctance to move to the cloud.

So, looking at the computing era: we started with mainframes, with dumb terminals connected to them, and slowly we moved to a client-server architecture.
And with the client-server architecture, what you're doing is distributing the processing between client and server. You had two-tier architectures, three-tier architectures, and so on. Then we moved into agile, and finally the cloud. And when you look at this transformation, there are two factors at work. One is that you're increasing the velocity of your development: you're moving faster and faster. That's what agile is all about; it's what this conference is all about. At the same time, you're decreasing the cost and the size of the risk. Why? Well, to start with, when you go to the cloud, you have multi-tenancy, and that decreases the cost because of economies of scale: the infrastructure is shared across multiple customers, so the cost per client goes down. So those are the two driving factors. And when you look at Function as a Service, serverless computing, in a way it just pushes those same two levers a bit further. I get better development velocity, because I don't have the overhead of managing servers and so on. And I have less risk: I have these independent functions that I can change or modify anytime, that I can evolve and transform, without worrying about one big service doing everything at once.

Now, serverless has certain misconceptions around it. The first is that there are no servers. Well, I mean, there are; you just don't have to manage them. And because you're not managing them, you don't have any control over them, which again is a double-edged sword, because you can't optimize them and so on. I mentioned this before: most of the time this is an advantage, because you don't want to deal with all the management headaches, and hardware is generally cheap, so you shouldn't really be worrying about that kind of optimization anyway. And you can scale up and down easily, because, at least in serverless applications, you have these stateless functions. In a typical organization, your expenses are the developers' salaries, not the infrastructure. So in a way you're trading: you save money on developer salaries, even though it might, might, and I'm choosing that word deliberately, cost a little more on the infrastructure side. That's a good trade-off to me.

So there's a natural evolution in cloud resource management. With an on-premise solution, you basically manage everything yourself. Then you move to Infrastructure as a Service, and what you've really said is: I don't care about the hardware; the hardware is managed by somebody else and virtualized. I have these VMs, I provision those VMs, I have operating system images, I choose what operating system it is, I apply the security patches and all those things. That whole operating system on that VM is mine. Then slowly you evolve further and say: you know what, I don't want to manage the operating system either. Why should I? I have the same version of Linux on my local machine; I can just run that same version. Why should I worry about patching and all those things? So you raise the abstraction a little more, so that you don't worry about the operating system or the runtime. And that's what Platform as a Service gives you.
So moving forward, Function as a Service takes one more step and says: the application, the way your different functions fit together, is going to be abstracted for you as well. Just worry about the individual functions and their triggers; that's all. So in a way, it's the next evolution beyond Infrastructure and Platform as a Service.

There are a lot of commercial offerings. I'm just going to talk about three of them, because these are the dominant ones. Azure Functions, which most of my talk is based on. Then AWS Lambda: AWS was kind of the pioneer in serverless; I think it was released in 2015. And a relatively new entrant is Google Cloud Functions. I have a little bit of experience with Google Cloud Functions; unfortunately, it's somewhat limited, in that you can't use anything but Node.js. That was a severe limitation for me, because the organization I was working with were Python and Java people; they couldn't do much with it, and they were more comfortable with AWS. Azure is predominantly .NET, but they are expanding their support to other languages. There are other implementations of serverless too, I think Rackspace has one, I'm not sure, but those are the three dominant ones.

[In response to an audience question about serverless versus microservices:] Okay, this is a little bit of a digression, but I'll answer it; there was actually a full talk about this. In a way, yes. There are certain limitations, in the sense that in the design philosophy, functions tend to be short bursts, with a time limit of, say, five minutes, whereas microservices as a design concept don't have any such limitation; a microservice is just a service. And you could actually implement microservices in the form of functions. There's a full one-hour talk on serverless versus microservices that goes into the details of the pros and cons. But you're right, they're along the same lines. The common philosophy is that your application is an assembly of services, as opposed to a monolith. That's the common binding theme between microservices and serverless.

Where they differ is in how the services talk to each other, and in certain assumptions around that. Typically, and it's not always the case, but typically, microservices, especially event-driven microservices, have a broker: a central broker like Kafka or RabbitMQ that services read from and write to. It's still an asynchronous method of communication, but it goes through a central broker. Functions, on the other hand, have no such concept of a central broker; a function F1 calls F2 directly. The microservices design philosophy is that microservices are decoupled from each other: if I'm programming one microservice, I just have to worry about interacting with the broker; I don't have to worry about the other microservices. They read and write from the broker, and I'm just injecting messages into it. Serverless does not achieve that kind of decoupling. Why? Because in serverless you don't have the broker, so F1 has to call F2, F3, F4 itself. If you change your contract, or change the way your application works, F1 has to update itself. Let's say that F1 was calling F2, and you decide that F1 should now call both F2 and F3. In event-driven microservices, where you have a broker, F1 injects a message into the broker, and F2 is already reading it.
You just have F3 subscribe to the topic and read it too. F1 doesn't need to change for that: you're just adding a consumer, and the producer stays the same. In serverless you can't do that, because you don't have the broker; F1 now has to change, because it needs an additional output binding that calls both F2 and F3. (I'll show a small sketch of this difference at the end of this section.) So I would rephrase it and say that functions have similarities to microservices; I would not say they are the same. There are implementation differences and design philosophy differences. Of course, they're substitutable to a degree, in the sense that you can use functions to implement microservices and so on. Again, the commonality is that the application is an assembly of services, of individual components. In fact, I would say that most of the modern applications you'll see are going to be neither purely serverless nor purely microservices, but a combination of the two.

[To a question about whether the two can interoperate:] Oh yes, they can. Let's say you have an application model, more toward the microservices world, where you have a broker. You could have a function that writes to that broker, and a separate assembly of functions that read from and write to the broker; for that to happen, a function can have an input trigger on that broker to read messages. So both can coexist. Why would both coexist? Well, in an application you have tier-one services that impact your customer; typically tier one follows request-response, the classic synchronous communication pattern. Then you have tier-two services, which semi-impact the customer. And then you have tier-three services, which are not all that critical: you need them, but you want them to be cost-effective, precisely because they're not customer-impacting, and serverless is going to be more cost-effective than a microservice. So you'd have an application with all these different components, microservices, serverless functions, a broker, and possibly other traditional request-response systems, all operating simultaneously, with each component implemented in whatever way best fits the need.

A similar concept, more on the cloud-computing side, is hybrid computing. That's what hybrid computing does: you have your own data center, you have the public cloud, and they can talk to each other seamlessly. What's the advantage? Well, it's more expensive to have your own data center, but you have better control over it, because it's yours. This is typically how banks do it: banks can't put a lot of their data in the public cloud, because they have a lot of regulatory concerns; you're putting your data in somebody else's hands. So banks keep the most important information they have in their private data center, and the parts like customer service go in the public cloud. And in the Microsoft context, say, you can buy the Azure software stack, Windows Server, System Center, all these products, and run your own data center, and this data center is going to be identical to a Microsoft Azure data center. Then you can have an application that is in part private and in part public, talking seamlessly, and because you have the same software stack, if tomorrow you decide you don't want to keep some data in your private data center and want to move it to the cloud, or vice versa, you can do that seamlessly. And that kind of thing you can also do with microservices and Azure Functions.
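Here's the sketch of that broker-versus-direct-calls difference I promised. `send_to` is a stand-in for whatever output binding or HTTP call you'd actually use, and the endpoint names are hypothetical.

```python
# Without a broker, fan-out lives in the producer: adding a new
# consumer F3 means editing F1 itself. With a broker, F1 would keep
# publishing to one topic and F3 would simply subscribe.
def send_to(endpoint: str, message: dict) -> None:
    ...  # stand-in for an output binding / HTTP call to another function

def f1(event: dict) -> None:
    message = {"data": event["data"]}
    send_to("f2", message)
    send_to("f3", message)  # <-- the change F1 had to take on itself
```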
In fact, I'm going to talk about the Azure Functions Runtime: you have a hybrid cloud concept in serverless computing too, because you can run the runtime of Azure Functions locally and have those functions interact with Azure Functions running in the data center, in the same way and for the same reasons as hybrid computing generally. Or for another reason: let's say I'm doing development and I want to test. It's very hard to test on the public cloud, but I can do local testing. I can build an environment on my local machine that's a replica of my setup, test everything there, and when I feel confident, move it up.

[To a question about always-on options:] So, there's a concept of hosting plans. There are different hosting plans you have to choose between; I have a slide on that later, where I'll explain the details. In short, you can have a kind of container-based setup, but then you're paying more, because it's running 24/7; on the other hand, you don't have the startup and shutdown. The other option is the consumption plan, where the function pops up when you need it and shuts down when it's idle, but then you have the cold start problem, because if your function is idle and you get a request, it has to start up first. I actually gave another talk on unikernels, and one of the things I talked about there is that unikernels can be shut down and provisioned very fast; they boot in milliseconds. That's an alternative implementation: tiny unikernels, in which you can implement microservices or functions or whatever you like to call them, with just-in-time provisioning: when you get a request, you provision them. The advantage of a unikernel is that the boot time is extremely fast, because the images are very small. With serverless you have a limitation: you can't really do just-in-time provisioning like that, because these are still containers, and they need time to boot up, a few seconds, sometimes more.

Okay, sorry, I don't have a watch; I can go faster. So, Azure Functions provides a lot of this functionality; I've talked about most of it. It's the serverless offering from Microsoft. There's an open source version, the Azure WebJobs SDK, and you have the ability to run in the cloud and on-premises. That's what I was talking about with the Azure Functions Runtime: it gives you the hybrid cloud story, so I can have part of my system in my private cloud and part in the public cloud, working seamlessly. And that way you also reduce the possibility of vendor lock-in, because you're running it locally and you control the environment.

These are the typical characteristics of Azure Functions; I've already discussed most of them, just here in the Microsoft context. The key thing is that there's a trigger for input and a binding for output, and the execution flow is asynchronous, pretty much. This is going to be true for pretty much any serverless offering; AWS Lambda might have different constraints, but it operates the same way.

These are the supported bindings; this is just a partial list, the full list is longer. There are inputs and outputs. In Azure, you have Blob storage, which is basically your blobs; you have queues, for short messages up to 8 KB.
You have tables to store key-value pairs. And you can have all of these as both input and output bindings, in the sense that if I send a message to a queue, that triggers an event that calls a function; and the same thing the other way, a function can inject a message back into a queue. So you have both input and output. You obviously have REST, input and output. You can have schedules, like "I want this function triggered every ten minutes." There are DB streams and so on; a lot of different notifications. This slide is from the official Microsoft documentation, which gives you all the input and output bindings. The HTTP one doesn't have an input binding; it's actually REST and webhooks. When I prepared this slide it was up to date, but you'll actually want to refer to the reference link, because this information keeps changing.

Okay. One of the problems is infinite loops: because it's all asynchronous messages, you can get into a cycle. F1 calls F2, F2 calls F3, F3 calls F4, and F4 calls back into F1, and now you're going in a circle. You want to avoid this, because otherwise you have very little visibility into the system and you just end up with a big bill. The way you avoid it is to pass the call stack along: you pass the call stack with the message, so if you get into a loop, you have some mechanism to stop processing, or at least stop the infinite loop (there's a sketch of this after this section). This is actually true for any system that allows a cyclic graph. Build systems, for example, are typically directed acyclic graphs for this very reason: to prevent cycles. In functions, in serverless, you don't have anything that says "I want to prevent a cyclic graph," so you can get into an infinite loop.

Functions are stateless, but you have these persistent ways of storing things: files, queues, tables, blobs. Blobs are the big chunks of data, tables are key-value pairs, queues are for short messages, and then you have files. The functions themselves should be stateless.

Startup latency: you want to keep that low. Typically, when the function boots up, you want to keep external dependencies to a minimum, because they add to the startup time. After a period of being idle, a function goes cold; this is what I call cold start: idle functions have to boot up again, so they take time. There is no cold start problem with the App Service plan; we'll talk about what that is.

So these are the hosting plans, and this kind of answers the earlier question. With the App Service plan, you have VMs running your containers 24/7. That way you don't have any cold start problem, and you don't have any limits on execution, but obviously it's going to be more expensive. The consumption plan does the opposite. So based on your application, you'd choose between the App Service plan and the consumption plan, and you can have both operating simultaneously: some of my functions on consumption, some of my functions on App Service. Typically, the user-facing functions go on the App Service plan, because you can't afford the latency there: the user sends a request and gets an acknowledgement immediately. The ones that do backend processing can go on the consumption plan.

In the Azure portal, you can create these functions; if you're learning, that's how you do it. In reality, you actually won't do it that way; I'll talk about that towards the end. I don't have a slide on it, but I'll still talk through it.
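First, here's the loop-guard sketch I mentioned: pass the call chain along inside the message and refuse to continue when a cycle (or an excessive depth) shows up. The field name `call_chain` and the depth limit are arbitrary choices of mine, not anything Azure-specific.

```python
# Each function appends itself to a call chain carried in the message,
# and stops if it sees itself already or the chain gets too long.
MAX_HOPS = 10

def guard_against_loops(message: dict, my_name: str) -> dict:
    chain = message.get("call_chain", [])
    if my_name in chain or len(chain) >= MAX_HOPS:
        # Stop processing instead of feeding the cycle (and the bill).
        raise RuntimeError(f"cycle detected: {' -> '.join(chain + [my_name])}")
    # Pass the extended chain along in the outgoing message.
    return {**message, "call_chain": chain + [my_name]}
```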
Testing a function can be hard, because you don't really have the visibility; it's asynchronous. You can test functions using curl from the command line, test functions from within other functions, write wrapper functions to test a function, test in the browser; you have different mechanisms for testing these functions, and you can run them locally. But yes, testing functions can be tricky. For monitoring, the Azure portal provides monitoring tools, so you can monitor, you have logs, and there are execution logs as well.

Okay, so now I have a little bit of time, so I'll go into some things that aren't in the slides but that I think are important: how do you build a large-scale application? In a large-scale application, each function is a project by itself, and the functions share contracts. So it's usually a good idea to keep the contract files separate. How do you manage the contract files? In C# and F# you have things like Entity Framework and type providers, which can basically generate typed code from a description of your data format. Otherwise, you have Protocol Buffers, Google's protocol buffers, which can generate code for various languages from a schema. So the way you do it is: you have these contracts, your build system generates the stubs for those contracts, and your functions import them. That way you don't have the problem of an impedance mismatch. That's how you'd actually do it at large scale.

And at large scale, obviously, you don't go to the portal; you deploy via the command line. There's the Azure CLI, the Azure command-line interface; pretty much whatever you can do in the portal, you can do from the command line. So in practice: you have a repository, you check out, you build, and you make sure that if function one and function two call each other, they share that common contract; both function one and function two import it. Why? Because if you change that contract and forget to change F2, you get an error at build time. So you have these contracts, each of these functions gets built into binaries, and then you deploy those binaries to your production environment using the CLI.

The tricky part is this: let's say you have one repo with all these functions. If you change one function, it's easy to deploy that function, no problem. The tricky part is when you change a contract, because when you change a contract, you might end up changing several functions that depend on it. So what do you do? When you build, you get different binaries. One way you can solve this is: for the functions running in your environment, you keep some kind of hash code of each binary, so you know which version is running. When you build, you have the newly generated DLLs or EXEs or binaries, and you can compare the hash codes. The underlying problem is that you have atomicity of commit, but you don't have atomicity of deployment: you don't want to deploy the whole thing when you've actually only changed a few functions. So you want to keep track of what is running in your system and what has changed.
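Here's a rough sketch of that idea. The function names are stand-ins, and the actual deployment step, which would go through the Azure CLI, is left out.

```python
# Hash each built artifact, compare against the hashes recorded for
# what is currently running, and redeploy only what actually changed.
import hashlib
from pathlib import Path

def artifact_hash(path: Path) -> str:
    """Content hash of a built binary (DLL, EXE, package, ...)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def plan_deployment(built: dict[str, Path], running: dict[str, str]) -> list[str]:
    """built: function name -> path of the freshly built artifact.
    running: function name -> hash of the version currently deployed.
    Returns the names of the functions whose artifact changed."""
    return [name for name, path in built.items()
            if running.get(name) != artifact_hash(path)]
```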
And then you can use these hash codes to deploy just the parts you actually need, and leave the parts that are running alone. How much time do I have? Almost up. So that's about it. I can take more questions if there are any, or we can have an offline discussion.

[On how long a cold start takes:] See, this is more experimental than theoretical, in the sense that you can't put a hard number on a cold start, because it depends on several factors: how the cloud has provisioned the server, the speed of it, and overall, the fact that you're sharing that server with other people. It's hard to put a number on it; it could be a couple of seconds, it could be more.

[On cost:] It's pretty cheap. I know the numbers for AWS; we're talking a few cents per call or something, so it's pretty damn cheap. When AWS Lambda came out, someone was playing with it and actually got into an infinite loop, and even then the bill wasn't that high. Obviously they detected it at some point and stopped it, but the point is that individual calls and functions are pretty cheap. So I would not worry too much about them unless, I don't know, you're processing millions of calls; then maybe yes, but generally they're pretty cheap. The App Service hosting plan, that could be expensive, because it's running 24/7.

Okay, any other questions? I'm around; you can ask.