I'm just waiting for... yeah, I think we can start. So, Siddharth Goyal is going to talk about going serverless with Django on AWS Lambda. Over to you, Siddharth.

Thank you. Hello everyone. As introduced, I'm Siddharth, and the topic of today's talk is going serverless with Django on AWS Lambda. Here's what you will see in this talk: I'll briefly cover serverless computing, whether it is for you, and its benefits and disadvantages. Then we'll talk about deploying a web application on a serverless computing platform and the things to look out for, with a live code demo. To conclude, I'll talk briefly about API gateways and their role in serverless deployments.

So, let's get started. Serverless computing: what is it? Serverless computing platforms are a type of service offered by major cloud vendors like AWS, Azure, et cetera, wherein the cloud provider manages the machine resources. This is in contrast to a traditional service like, say, an EC2 server, where we pre-provision the resources ourselves.

Why? As we all know, a lot of work in IT has gone into abstracting away the operating system and hardware so that developers can focus on writing business logic. Serverless computing platforms are a manifestation of this. As we move from infrastructure as a service to function as a service, which is what we are talking about right now, more and more components of the stack are delegated to the vendor. The serverless platforms today are mostly function-as-a-service offerings, wherein we, as developers, only have to focus on writing the business logic.

One of the biggest points to note about serverless platforms is that they are event-driven. The function only runs when it is requested: an event is generated, and the serverless computing platform runs your code in response. In theory, it scales down to zero. So there are a lot of differences when we think of deploying a traditional web application, like a web service or an API, on a serverless platform. We will discuss these differences later in the session.

Now, you might be wondering why you should bother with it at all, since you're fine with your Docker containers. One of the biggest advantages of serverless computing is that, since more layers are abstracted away, there is less for you to manage. The only thing we manage here is our business logic. Secondly, if your application has a lot of variance in usage, meaning it needs to scale up and down a lot, serverless computing platforms are very cost-effective, because we only pay for current usage; we do not pre-provision for future usage. But this is not always true. For example, if you have very high, sustained usage without much variance, a traditional computing platform will be much cheaper than a serverless option.

In today's talk I'll mostly be demoing things on AWS Lambda, but the advantages, disadvantages, and things to look out for are common across most offerings from the major vendors, so you should be able to take the learnings from here and work with the vendor of your choice.

Now, what are the differences between a traditional and a serverless deployment? The first difference is that you do not have fixed compute resources, so your code has to ensure that it does not rely on them. For example, memory caching.
A lot of us use an in-memory cache in our web services or APIs, because in a traditional deployment the service runs continuously. But in a serverless computing environment the application can scale down completely, and in that case you lose the in-memory cache. Similarly for the file system: since the lower layers are abstracted by the vendor, we do not have access to a typical file system the way we might in, say, a containerized deployment.

Another difference, which I think is a big disadvantage of serverless deployments, is that the deployment strategy is somewhat vendor-specific. What do I mean by that? Say I have a web application and I have containerized it with Docker. With a Docker container, I can pretty much deploy it to the Kubernetes offering of any major vendor without any issues. But on a serverless platform, because so many components are abstracted by the vendor, each vendor has its own methodology for how it takes that application and runs it, so deployment strategies have to be tweaked for each vendor.

Now, things to look out for when building a web application specifically. As an example I'm using Django, but since most common Python web frameworks follow the WSGI request-response flow, you should be able to apply this to, say, a Flask application as well. The biggest gap, in my view, is that serverless platforms mostly do not provide the WSGI (Web Server Gateway Interface) request-response flow out of the box. As we all know, in a traditional web application, when I make a Django project and run "runserver", the request coming from the browser or client to the web server is converted into a WSGI request and the application returns a WSGI response; the server acts as a bridge between your client and the application. So for a Django application on a serverless platform, you have to add another layer between the client and your WSGI application.

Another thing: since the application can scale down completely, latency is a big issue. You have to ensure your application starts as quickly as possible. This is not an issue in a traditional deployment; even if my web app takes a minute to start, it isn't much of a problem. For this point, too, we have a sample example coming up later in the session, with things you can do to make startup faster.

The third point is linking to external services. Since a serverless deployment can scale up and down rapidly, if you have an external service like, say, a database, the function instances will open a lot of individual connections to it, and that can lead to performance issues. So it's good practice to put a proxy between the application and the database; the proxy can manage connection pooling so your database doesn't get clogged up. (A small settings sketch for these two points follows below.)

Okay, so now we'll start with the code demo. I have a simple web application here. It has a single web page and displays data from NASA's public data APIs, this one; it's free to use. I'll just quickly walk you through the application; anyone who is even a little familiar with Django should feel at home. The source folder has all the code, by the way, and we have two app modules, Sample and Stars.
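Here is that settings sketch, making the two settings-level points above concrete: moving the cache out of process memory and routing database traffic through a pooling proxy. The demo app uses neither a cache nor a database, so the backend names and hosts below (django-redis, an RDS Proxy endpoint) are purely illustrative assumptions, not part of the speaker's project.

```python
# settings.py (illustrative snippet only, not from the demo project)

# Don't rely on LocMemCache on a serverless platform -- the process can be
# recycled at any time, so keep the cache in an external, shared service.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",  # assumes django-redis is installed
        "LOCATION": "redis://my-cache.example.internal:6379/0",  # hypothetical host
    }
}

# Route database connections through a pooling proxy (e.g. RDS Proxy) so that
# rapidly scaling function instances don't exhaust the database's connections.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "appdb",
        "USER": "app",
        "PASSWORD": "change-me",
        "HOST": "my-app.proxy-xxxx.us-east-1.rds.amazonaws.com",  # hypothetical proxy endpoint
        "PORT": "5432",
        "CONN_MAX_AGE": 0,  # let the proxy, not Django, hold connections open
    }
}
```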
The Sample app module contains the standard Django files: asgi.py and wsgi.py, which are the ASGI and WSGI application initializers, and urls.py, which is the base router for the whole application. As you can see, I have routed every route to the Stars app. settings.py, as the Django convention dictates, contains all the configuration for the whole application. This custom_storage.py file is not a standard Django file; we'll look at why it's here later.

Now, coming to the Stars application. In here, again, it's pretty simple: a single URL path, the slash path, pointing to the home page view function. That single view function uses this helper file to fetch data from the NASA APIs and renders the home page HTML template with that data as context. Pretty straightforward. And for demo purposes only, I have a single static file, which is just a logo for the application.

So this is a very simple and straightforward application; I do not use a database or anything like that. If I run it, this is our application. This is the Astronomy Picture of the Day, as provided by NASA, one of their most popular APIs, and this is a wind plot from the Mars InSight station for the last available Sol of data, which is Sol 656, from yesterday. As you can see, it's a simple Django application, and I can run it just like any other Django application.

Okay, now we will go and check the AWS console. First I'll introduce you to a simple Lambda function, then we will connect it to an API gateway, and then we'll see what that event thing we've been talking about from the start actually is. So, let's create a Lambda function called, say, pycon-live-demo. The runtime is the language runtime you want to run it in; let's select Python 3.6. For those of you who are completely unfamiliar with AWS: each service object, like this Lambda function, inherits a role, and a role has permissions attached to it. A permission can be something simple like the ability to access S3. So this function will inherit those permissions.

As you can see, AWS gives us standard boilerplate code: a simple lambda_function.py file, and in it a function that takes two parameters, event and context. Let's log this event and see for ourselves what it is. I'll just save this. And we have successfully saved it.

Now, let's connect it to an API gateway. For people who have not heard of it: an API gateway is, as the name suggests, simply a gateway through which you can link your external environment, say the internet, to your internal services, like a Lambda function, or use it to proxy-pass to another HTTP endpoint, et cetera. I'll again name it pycon-live-demo and create it. Just as you would in Nginx, you specify routes, for example what to do when you get the slash route; you can specify routes, methods, et cetera. So let's create an ANY method, a catch-all, and pick the integration type. As I was saying, the gateway lets you connect the external environment, the internet, to an internal service, so the target can be a Lambda function, an HTTP endpoint, an internal VPC link, et cetera. Let's select the live demo function we just created and save it. And there we have it: we have integrated it with that Lambda function. Thankfully, AWS provides a console internally to test this API, so I'll just make a simple GET request, and we get the response "Hello from Lambda".
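For reference, the function at this point is roughly the default Python template the Lambda console generates, plus the one line added in the demo to log the incoming event. This is a sketch from memory of that boilerplate, not a copy of the exact code on screen.

```python
# lambda_function.py -- approximately the console's default Python template,
# with an extra line to log the raw event for inspection.
import json


def lambda_handler(event, context):
    # Dump the incoming event so we can see exactly what API Gateway sends us.
    print(json.dumps(event))
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
```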
If we look here, the code is indeed returning "Hello from Lambda". Now, let's see what this event we just logged is. These are the logs for this Lambda. We didn't log it, I think... okay, yeah, we have it; I think it just didn't load the first time. I was getting scared. So, here we have the event. Let me copy it and prettify it so that everyone can see it easily. Here we have it. As you can see, it contains a lot of keys, but the things to note: it has a requestContext key, which has the path, and it has the method through which we called it. If you were to make a POST request, it would also have the body. So it carries pretty much every piece of standard information you have in an HTTP request.

So, what we have to do to deploy our simple web application to Lambda is, as I said, convert this event into a WSGI request, then take the WSGI response and return it in a format that API Gateway understands, which would be this kind of format. This is one of the trickiest parts of deploying a web application to Lambda: properly converting this event into the WSGI request. Again, once you can do this, you can also deploy, say, a Flask application, because it uses the same standard.

Now, let's see here. There's a setting called the handler in a Lambda function. It is basically which file you are calling and which function in that file it will call. As we can see here, the file name is lambda_function and the function name is lambda_handler. On a similar note, in my project I have a lambda_handler file, and the function name is also lambda_handler. What I do here is take the WSGI application object, imported from the Django application we have; this is the standard Django application. Then we create our own function which takes this application and the event that we get from Lambda and returns the proper response.

So now let's look at this function. For those who are completely unfamiliar with the WSGI application object, let me go here quickly and show you. What this function does is return a WSGIHandler object, and if we go in and check, this handler loads your whole code, and when called it returns the WSGI response. What it expects as parameters are the environ, which represents the request information, and start_response, which handles the response information.

Taking this knowledge, here we also create our own response object, and we generate the environ from the event. This part does what I described: it converts the event into the WSGI request format, and this part converts the WSGI response into the standard response expected by Lambda and API Gateway. First, let's look at the response. This is very simple: we have a StartResponse class, and in it a response function which takes the output as a parameter. What would this output be? The WSGI response body. It parses it and converts it into a format suitable for API Gateway. And again, this can be reused for, say, a Flask application. Now, for the environ part: here we have a function in which we take the event body and convert it into this format. This is the standard environ format, the request data in layman's terms, that the WSGI application expects.
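Since the screen isn't visible in the transcript, here is a compact sketch of what such a lambda_handler file can look like: build a WSGI environ from the API Gateway event, call the Django WSGI application, and package its output into the response shape API Gateway expects. The import path sample.wsgi and all the helper names are assumptions for illustration; the speaker's actual code may differ in the details.

```python
# lambda_handler.py -- a minimal sketch of the event-to-WSGI bridge described
# above; names and details are illustrative, not the speaker's exact code.
import base64
import io
import sys
from urllib.parse import urlencode

from sample.wsgi import application  # assumes the project's standard WSGI module


def build_environ(event):
    """Translate an API Gateway proxy event into a WSGI environ dict."""
    headers = event.get('headers') or {}
    body = event.get('body') or ''
    if event.get('isBase64Encoded'):
        body = base64.b64decode(body)
    else:
        body = body.encode('utf-8')

    environ = {
        'REQUEST_METHOD': event.get('httpMethod', 'GET'),
        'PATH_INFO': event.get('path', '/'),
        'QUERY_STRING': urlencode(event.get('queryStringParameters') or {}),
        'CONTENT_LENGTH': str(len(body)),
        'CONTENT_TYPE': headers.get('Content-Type', ''),  # simplification: assumes this exact casing
        'SERVER_NAME': headers.get('Host', 'lambda'),
        'SERVER_PORT': '443',
        'SERVER_PROTOCOL': 'HTTP/1.1',
        'wsgi.version': (1, 0),
        'wsgi.url_scheme': 'https',
        'wsgi.input': io.BytesIO(body),
        'wsgi.errors': sys.stderr,
        'wsgi.multithread': False,
        'wsgi.multiprocess': False,
        'wsgi.run_once': False,
    }
    # Remaining HTTP headers go in as HTTP_* keys, per the WSGI spec.
    for key, value in headers.items():
        if key.lower() in ('content-type', 'content-length'):
            continue
        environ['HTTP_' + key.upper().replace('-', '_')] = value
    return environ


def lambda_handler(event, context):
    """Entry point configured as the Lambda handler."""
    environ = build_environ(event)
    response = {}

    def start_response(status, response_headers, exc_info=None):
        # "200 OK" -> 200; collapse headers into a plain dict for API Gateway.
        response['statusCode'] = int(status.split()[0])
        response['headers'] = dict(response_headers)

    # Call the Django WSGI app and join the body chunks it returns.
    # (Binary responses would additionally need base64 + isBase64Encoded.)
    result = application(environ, start_response)
    response['body'] = b''.join(result).decode('utf-8')
    return response
```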
So, in my function, I parse the headers, the path, the body, et cetera. This is the code that basically enables us to run this application on a Lambda function.

And now, deploying it to Lambda. As I said, the deployment is a little tricky and vendor-specific. So how would you deploy? Lambda expects a self-contained code bundle. What I mean by self-contained is that your requirements and every file are all in there together. This is a simple shell script that lets us deploy to Lambda. Again, very simple: we go into the source folder and install the requirements with --target ., which means all the requirements are installed into the source folder itself so that it becomes a self-contained bundle. We will look later at what this other line is for. We make a zip file and upload it to the function, and the function is pycon-demo-test. Suppose I run this.

In the meanwhile, we can check out my repository, where I have added a continuous deployment step using GitHub Actions that does the same thing. This would be the configuration file for it. Here too, if you look, it's quite simple: we check out all the code, set up Python 3.6, then run a simple script which again updates pip, goes into that folder, installs the requirements into that folder for a self-contained package, runs this line we will look into later, and makes a zip. Then there is an action which takes that zip and uploads it to the Lambda function. So, let's see if that is done. Yes, it was successfully updated; you can see it says updated 39 seconds ago. Similarly, I have made an API just like the one we saw right now, which also links to this function, and it is deployed live on the internet with a specific stage. Here it is. So, pretty straightforward again; it's just converting that event into this.

Now, another thing I talked about is reducing latency, and for that I have an example. For people who are familiar with content hashing: we will be doing that in Django. Quickly, what is content hashing? Suppose I open this URL and hard-reload it. It fetches some CSS and JS files, and if we look here, this is the star image that I showed you in the static files, that same logo. But if we look at the path, there is a hash string between the image name and the .png extension. That is content hashing: it is basically a way to invalidate caches of static assets.

Now, Django has the ability to do this by itself. There is a mixin called ManifestFilesMixin, which Django's staticfiles app itself provides, and if we couple it with S3Boto3Storage from the django-storages package, it gives us a way to push our static files to AWS and serve them from there. As we saw here, this file is served from the S3 bucket. So when we run the collectstatic command that we saw here, it takes all the static files, collects them in one place, and hashes the content of each file, so if you change a file the hash will be different and the application will pick up the new file by itself, and it writes out this JSON file, which is the manifest file.
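To make the static-files setup concrete, here is a small sketch of what such a storage class and its settings can look like, including the local-manifest override that is described next. The class and file names (custom_storage.py, StaticStorage) and the bucket settings are illustrative assumptions based on the talk, not necessarily the speaker's exact code, and it assumes django-storages is installed.

```python
# custom_storage.py -- illustrative sketch of hashed static files on S3,
# with the manifest kept inside the local code package.
import json
import os

from django.contrib.staticfiles.storage import ManifestFilesMixin
from storages.backends.s3boto3 import S3Boto3Storage

# Ship the manifest with the code bundle instead of reading it from S3 at
# startup -- this is the latency trick described in the talk.
LOCAL_MANIFEST_PATH = os.path.join(os.path.dirname(__file__), "staticfiles.json")


class StaticStorage(ManifestFilesMixin, S3Boto3Storage):
    """Content-hashed static files on S3, manifest bundled with the code."""

    location = "static"  # prefix inside the bucket (hypothetical)

    def read_manifest(self):
        # Read the manifest from the bundled file rather than from S3.
        try:
            with open(LOCAL_MANIFEST_PATH) as f:
                return f.read()
        except FileNotFoundError:
            return None

    def save_manifest(self):
        # collectstatic runs locally or in CI, so write the manifest next to
        # the code and let the deploy step zip it into the Lambda package.
        payload = {"paths": self.hashed_files, "version": self.manifest_version}
        with open(LOCAL_MANIFEST_PATH, "w") as f:
            json.dump(payload, f)


# settings.py (pre-Django-4.2 style, matching the Python 3.6 demo):
# STATICFILES_STORAGE = "sample.custom_storage.StaticStorage"
# AWS_STORAGE_BUCKET_NAME = "my-static-bucket"   # hypothetical bucket
```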
So Django uses this manifest file to know, for a given path, which hashed file to fetch from the static file storage. Traditionally, this manifest file would also be pushed to the remote storage, S3, along with the static files.

Sorry to interrupt, just a time check: we have about five minutes. Yeah, no problem.

However, what I have done is overload the two functions that read and write the manifest file so that they use a local file instead. This way the manifest file becomes part of the package. In a traditional deployment, loading this file at startup isn't an issue, but making it part of the package itself reduces latency by a lot. These are the small things one can do to reduce latency, and it's very crucial to do them in a serverless deployment.

Next, I'll just conclude with a little bit about API gateways. As we saw in the demo when we created the gateway, API gateways are just the initiators of the event for a serverless function. There are of course multiple gateways available from different vendors; some of them are even open source, or partly open source, like Kong. But the crux remains the same: they all have the ability to call serverless functions, and the differences arise in the events. Unfortunately, I cannot show you a live demo of Kong, but I have an example event of it here somewhere. Yeah, this is the JSON of a Kong event. As you can see, we again have all the information for the request. We just have to convert this one as well into a WSGI request object, and our app will be able to run.

So, that's all from my side today. The code is available publicly on my GitHub, and feel free to connect with me on the Zulip chat or through any of my social contacts. Thank you. Do we have any questions?

Yeah, thanks, Siddharth. That was a great session. Yes, we do have questions. There's one about Lambda: can we get access to the Git repo for this Django AWS Lambda experiment? I'll share it on Zulip for everyone. The repository is open, and the deployment pull request is also open.

Anything else? I guess there are a lot more questions, but you could probably take them on Zulip. Meanwhile, let me just ask you a general question: why did you go with AWS? Why not, you know, GCP or Azure? The basic difference is that AWS was more familiar to me, but I have tried Azure, and Azure in particular has HTTP-triggered Azure Functions, which make converting the event JSON into a request object a lot easier. There are of course pluses and minuses to the different vendors; whatever you are comfortable with, the basics remain the same.

That's nice to know. And Lambda can in general be used for any kind of real-time or near-real-time compute, right? Yeah, although that again is a whole topic in itself, things like warm and cold starts and scaling up and down, but yes, you can.

I see, that's nice. It was nice talking to you, Siddharth, and I'm sure the audience really enjoyed your talk. Thanks a lot. Thank you, everyone.