But it's time now to pass on to our next speaker. Harsh is here and is going to tell us about building serverless Python applications on AWS using this tool, AWS Chalice. So Harsh, whenever you're ready, I leave the floor to you. Thanks, Francisco. I hope my screen is visible right now. Yes. Hello, everyone, and thanks for joining me for this talk on building serverless Python applications using AWS Chalice. First of all, I would like to extend my heartfelt gratitude to the EuroPython organizing committee for providing me with this opportunity to present my talk. So, a quick introduction. My name is Harsh and I'm a senior-year undergraduate from India. I'm currently a Google Summer of Code student developer with MetaCall and an intern with Red Hat. I have been working with Python mainly around building APIs and machine learning models, and more recently on building IPython kernels as part of my Google Summer of Code project and some DevOps utilities using Python. In this talk, we will be discussing a bit about serverless computing, AWS Lambda, and AWS Chalice, which happens to be the Python serverless framework on AWS. It will be a pretty short talk aiming to introduce all of you to AWS Chalice and how you can develop a simple API using Chalice and deploy it over AWS resources. So with that, let's get started. Before we start, let me introduce the agenda for this 30-minute talk. In today's talk, we will cover the basics of serverless and how exactly you can get started with serverless development on AWS. You will also get a head start on developing serverless applications with Chalice, along with some background on AWS Lambda and why it is such a great choice for developing serverless Python apps. Finally, I will be demonstrating a live example of creating an all-new Chalice app, testing it locally, and deploying it on AWS with a very short but sweet demonstration.
If you have worked with AWS before, it will be very easy to follow along. All you need is an AWS account, and you need to configure the AWS credentials on your local machine. If you have not done so, it would be highly preferable to have it done so that we can have a smooth transition into the practical example once we arrive at that point. So with that, let's get started. Before we jump into the intricacies of serverless, let us first discuss the standard ways that we employ when we try to deploy our APIs. I have highlighted some of the standard approaches that we normally follow when trying to deploy our APIs on some of the popular AWS services. For example, if you want to get started the standard way, you can use the AWS Elastic Compute Cloud (EC2) to deploy your APIs. It provides a standard virtual machine instance which you can then configure on your own to run your APIs. Deploying on EC2, or its equivalents on Google Cloud Platform or Azure, is pretty much the standard way that most technical teams follow to deploy their APIs in a very simplistic manner. You just create an instance, you do a secure shell (SSH) login to connect to your instance, and you're ready to configure everything for your deployment, pretty much straightforward. More recently, many developers have been using container technologies like Docker to deploy their APIs as well. We can just create a Docker image that acts as a blueprint for the container deployment, and then push it to the Elastic Container Service (ECS), which acts as the container deployment and management service for us. The container setup file, called a Dockerfile, outlines our image and provides all sorts of build instructions for easy deployment. You can then deploy your image with just a single click over ECS.
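The Dockerfile he mentions can be just a few lines. A minimal sketch for containerizing a Python API (the base image, the `api.py` entry point, and the file layout here are illustrative assumptions, not part of the demo):

```dockerfile
# Hypothetical minimal image for a Python API service.
FROM python:3.8-slim
WORKDIR /app
# Install dependencies first so this layer caches between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code and define the container's entry point.
COPY . .
CMD ["python", "api.py"]
```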
The third way of deploying our APIs is slightly tricky, but it is commonly used today to separate the need for managing infrastructure from the code and keep our focus only on the core function and the business logic that we need to develop and deploy. As you might have guessed, it's the serverless way of development that is being talked about here. When building serverless applications, AWS Lambda is one of the main choices on AWS for running your application code. So let us turn our attention to one of the most pertinent questions we have: what exactly is serverless? I will start by addressing the elephant in the room. Serverless doesn't mean no servers. In plain terms, serverless computing means that your backend code, which acts as your business logic, will just run on some third-party vendor's server infrastructure, like Amazon Web Services, Google Cloud Platform, or Azure, which you just don't need to worry about. It does not mean that there is no server to run your backend logic; rather, you just don't need to maintain it. Going further, serverless abstracts the underlying infrastructure away from the developer. This means that the underlying servers are hidden from us, and we don't have to worry about all those weird server configuration files and the ton of other things that we would otherwise need to handle before our application reaches production level. It's a confusing term, but don't let it fool you. We have got servers where our application is deployed; we just don't have to manage, maintain, or scale them ourselves. This is exactly what serverless in its entirety means. But why should you bother about serverless at all? Let us take an example here. Earlier, if we wanted to set up our application for global use, we needed to have a server up and running. This server could then process the incoming requests and render the data and logic on the client side. Isn't that what happens every time?
So this was the traditional approach that we followed a few years back. Server-based computing was possibly the de facto way of serving applications, and we needed dedicated people and engineers to maintain the servers and the backend code. With the introduction of serverless, that requirement has been completely rooted out. With a serverless platform, a programmer can just write the code and run it directly on the cloud without worrying about the hardware, operating systems, servers, configurations, and what not. They can just write the backend logic as code and run it without needing to be aware of the development and deployment complexities. Applications get high availability through serverless, as well as auto-scalability, without any additional effort from the engineering side. All of this can significantly reduce the development time and the consequent cost, which simply makes serverless a very lucrative choice for getting started with the development and deployment of our application. So I hope most of you have now understood the basic philosophy and the requirements behind serverless computing. Nowadays, it is completely universal, and the various cloud service providers like AWS, GCP, and Azure are offering their own serverless compute services that we can capitalize upon. Lambda is one such service, provided by Amazon Web Services. So what exactly is Lambda, after all? Put simply, Lambda allows you to create self-contained applications in the form of functions that can be deployed through Lambda on AWS. Your function can be stored on the cloud, and with events like, let's say, API calls or database modifications and much more, you can trigger these functions and make use of them. Behind the scenes, Amazon manages the running servers that handle the function execution and the resource allocation so that your function execution can complete successfully.
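To make this concrete, a Lambda function is, at its core, just a plain Python function with a fixed signature: it receives the triggering event and a runtime context object, and returns a result. A minimal sketch (the event shape here is a hypothetical API-style payload chosen for illustration, not a fixed AWS format):

```python
import json

def lambda_handler(event, context):
    """A minimal Lambda-style handler: `event` carries the trigger's
    payload (an API call, a database change, etc.) and `context` carries
    runtime metadata, which this sketch ignores."""
    # Pull a name out of the triggering event, falling back to a default.
    name = event.get("name", "world")
    # Return an API-Gateway-style response: a status code plus a JSON body.
    return {
        "statusCode": 200,
        "body": json.dumps({"hello": name}),
    }
```

When an event fires, AWS invokes this function on a server it manages, then tears the resources down again once execution completes.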
It might sound very easy and fascinating, because AWS Lambda takes care of all the needed resources and continuous scaling as a service to successfully run the functions. This simply removes the headache of scaling the application, and we can also allow or prevent the function from accessing particular resources. You can just deploy your functions in a zip file or as a container image, whichever way you prefer, and Lambda will allocate the resources and auto-scale it. One of the biggest draws of AWS Lambda is the reduced cost of execution. You don't pay anything when your code isn't running. This ensures that you don't pay for the time when you're not actually running the code or your application is not being used. While building a serverless application, we just need to take care of three things: having a Hypertext Transfer Protocol (HTTP) gateway service, having a database service, and having a compute service. Lambda fills in as the compute service that we require out of a serverless stack. If you're looking for a database service, maybe DynamoDB or RDS is the way to go, while we normally use API Gateway for the HTTP service that we most commonly use to expose our APIs. Lambda tightly integrates with all of them and abstracts all of the underlying machinery away from the end user. This makes Lambda a great fit for deploying highly scalable serverless applications in a very simplistic manner. Let us now welcome the protagonist of our talk, which is AWS Chalice. We talked a lot about serverless, Lambdas and all, but where exactly does Chalice fit in? As I described, the advantages that come with AWS Lambda are huge. We do have servers up and running; we just don't need to provision and maintain them. Our duty solely lies in providing the function, that is, the code, to the servers, with the added advantage of paying only for the compute time that is actually used.
Under the hood, AWS Lambda makes use of a whole lot of services in the AWS ecosystem to make this happen, along with flexible scaling and high availability, and it can be the go-to service for your serverless deployment. AWS Chalice is a micro web framework, very similar to Flask, that allows for the development and deployment of our API-based microservices, which is pretty easy compared to anything else: except for the code part, the scaling, packaging, and deployment are all done for us almost instantaneously. And this is what makes Chalice so great. So if you have worked with a micro web framework like Flask, or maybe even FastAPI, you will find Chalice to be quite similar. Chalice is a comparable web framework, with a lot of features and syntax quite similar to that of Flask, albeit one that runs completely on AWS resources and offers single-click deployment. The best part about working with Chalice is that we get all the functionality that we can expect from an API development framework, and it provides integrated functionality with most of the other AWS tooling, like S3 storage, the Simple Queue Service, API Gateway, and more. Chalice is a completely open-source framework; it is currently being used by over 1,000 repositories on GitHub and over two dozen packages right now, and it is an actively maintained project as well. You can check it out on GitHub, under the AWS organization, and you can even contribute to it or use it. So why exactly Chalice? What advantages does it offer that the other API development frameworks don't? Chalice comes with a very handy command-line interface tool that you can download using the standard Python package manager, pip. It can be used to automatically set up a Chalice project, and you can test it locally and deploy it as well using the same integrated CLI.
With Chalice, we can focus only on writing the code that matters while leaving the deployment and the management of the AWS resources for it to take care of. Apart from the standard Python, pip, and AWS credential setup, you won't need anything to deploy your APIs. Everything can be set up from your local machine without having to go to your AWS dashboard or doing a remote SSH into a server. Chalice makes everything easy for you to build, deploy, and manage. Apart from all of this, it also provides a decorator-based API through which it integrates directly with Amazon API Gateway, S3, SNS, SQS, and the other AWS services. So if you are working entirely on the AWS stack, working with Chalice would be a breeze, because you can leverage the same services that you originally used while building your applications. With this, we come to the practical hands-on part, and this is where we are going to do some practical demonstration. So let me stop sharing my slides and start sharing my whole screen right now. Yeah. Before we start setting up our Chalice project, we just need to make sure that you have the AWS credentials properly set up. If you are not familiar with the AWS credentials part, you can just go to your AWS Management Console, go to IAM, which is Identity and Access Management, go to Users, click on the Add users option, and configure a new user so that you can use all of these AWS services. You can just give it programmatic access and keep a username, which in my case I can just keep as harsh-europython. Once you do that, you need to assign the permissions. You can either add the user to a particular group (I have a group called "personal" just for managing my personal configurations), or you can copy the permissions from an existing user, or you can just attach the existing policies directly.
Once you do that, you will get access to a secret access key and an access key ID that you need to keep in your .aws folder. So if I go to my terminal and open a new tab — let me just minimize this a bit — and go to the .aws folder, I can see that I have saved my configuration and my credentials right here. I will not be showcasing my credentials on the screencast, because that would reveal my personal information, but you would have your credentials right here: your AWS access key ID and your secret access key. All of this lives in the .aws folder in your home directory, and you can save it directly there. If you don't want to go about this in a completely manual way, you can just download the AWS CLI, which is a very handy way of configuring your AWS credentials and managing your AWS resources, and enter this particular command, aws configure. You will be given a complete walkthrough where you enter your access key ID and your secret access key, and the AWS CLI will automatically create the profile for you on your local machine. We need to have this particular thing up and running because we don't want to go to the AWS Management Console for deploying our app. These credentials will be used for authenticating your user configuration so that you can easily deploy your API. So now that we have set up our credentials, we can get started with creating a Chalice project. But before we do that, we need to initialize our virtual environment. For this, you can just install virtualenv, if you're familiar with this part. virtualenv basically gives you a virtual environment and isolates all of the packages and dependencies that you're installing from the dependencies that are part of some other project. You can install it using pip3 install virtualenv.
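For reference, the files in the `.aws` folder he shows are plain INI-style text. A sketch of what `~/.aws/credentials` and `~/.aws/config` typically contain (the key values below are obviously placeholders, never real credentials):

```ini
; ~/.aws/credentials  (placeholder values)
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

; ~/.aws/config
[default]
region = us-east-1
```

Running `aws configure` writes both of these files for you, which is why he recommends it over editing them by hand.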
Once you have done that, you can go ahead and create a virtualenv in your folder. We have initialized the virtualenv now; it is on Python 3.8.8. And right from here, we can get started with installing Chalice. So the first thing that we are going to do is install Chalice via pip into our virtual environment. We run pip3 install chalice, and it will automatically collect Chalice and install it in our virtual environment. So yes, Chalice has been installed, and if we just run this command, we will see exactly how we can use it. Chalice gives us a lot of options right here. We can create a new project, which is exactly what we are going to use first. We can delete a project as well: once you have deployed your API and realize that you have messed up somewhere, you can delete your project. You can deploy your project using the standard deploy command, and you don't have to do anything else; your Chalice project will automatically get configured through your AWS credentials and deployed as a Lambda function. You can also test it locally using the standard local command, and there are some other commands for developing and debugging your Chalice app as well. So let us go ahead and try to create a new Chalice project. To do that, we will just run this very simple command, chalice new-project, and we can keep the name for this project as europython2021. As soon as we do that, we will see that a project has been automatically created, which is europython2021. Let us look inside it: if we jump over to VS Code, we can see that Chalice has automatically created a few files for us, basically the boilerplate, so that we can get started. First, we have a .chalice directory which contains all of the configuration that we need for our project.
Apart from this, we have also got an app.py which contains some of the boilerplate code that we need right here while developing our API. There is some boilerplate code here which we can delete for now because we are not going to use it. The rest of it is pretty simple: we are importing Chalice, we are creating a new application along with the app name, and we create a standard decorator signifying that if a user goes to the index route, they will see a hello world message in the form of JSON. We also have a requirements.txt file, but we will see that it is completely blank. We don't even have Chalice in here, because Chalice is not a part of the runtime environment. We need to understand that Chalice is just being used for creating the standard Lambda functions in an easy manner; it is not actually a runtime requirement that we need to ship when deploying our application. So let us jump back to our app.py and see what is happening right here. It has only one route, which assigns a URL of the application to the function. The decorators here are simply wrapping the functions, which makes it easy to write the code logic by breaking it down into separate routes. For now, our application is serving only a very simple JSON message, which is the hello world. Now that we have set up a Chalice project, let us move on and test it locally and deploy it as well. Let us jump back to our terminal and run the command chalice local. This command will automatically create a local development server on port 8000 while managing all the complex stuff with the aid of the decorators. We can just hit enter, and we will see that the dev server has been started and is now serving our application on port 8000. If we open this up, we will see that we have a hello world message right here, exactly the JSON that we were asking for. You can also do a standard curl on this.
So if I just copy this whole thing, open a new tab, and do a GET request, we can see the hello world message right here. So this is how easy it is for you to test your APIs using Chalice. Now let us go ahead and deploy our API as well. We have a simple hello world message right here and it is running successfully, so let us go ahead and invoke a very simple deploy command so that this API will automatically get deployed over the AWS resources. To do that, we just run the command chalice deploy, and it will start creating a deployment package for us. It will create an automated Identity and Access Management role, and it will create a Lambda function for us; as I said before, Chalice is a really handy way of developing your Lambda functions. It will also create a REST API, and finally, it will deploy this with a Lambda Amazon Resource Name along with the REST API URL, so that we can just invoke it and see how our application is working. So this is the REST API URL. Let us just copy it and do a curl on top of it to see how it is working. As you can see, it is just passing back a really simple hello world message right here. If you open this in the browser as well, you will see the same message right there. Chalice just makes it very easy to develop, deploy, and test our application using the handy CLI that it provides. Let us just go ahead and try to add another route, a very simple one because we are just running out of time, so that we can see how to add more routes here. One route that we can add, instead of a simple hello world, is a route for hello name. We will just create a decorator, app.route; we want to say hello, and we will keep the name within curly braces so that we can use it as a parameter.
So this is our app.route, and let us create a function hello_name and just return a hello message. Let us try this out and see if it works properly. So the dev server has been started; let us just go and see. It's missing the authentication token, which is quite weird to see, but let's just go ahead and see what is happening. Okay, then I guess this might be a silly bug which I'm not able to figure out, so let's jump back to the slides and see what other things we have to cover. Apart from all of this, we have arrived at the end of our practical demonstration. If you're interested, you can add more routes to the standard boilerplate code, and you can set up standard GET and POST requests. You can add CORS as well if you're trying to deploy it behind a frontend user interface. As you might have noticed, Chalice provides a lot of advantages when it comes to developing and deploying APIs on AWS. With serverless popularity reaching the moon, services like Lambda and Chalice can simplify the overall development and deployment process, but there are certain disadvantages as well. Chalice right now is not mature enough for developing and deploying large-scale APIs and for creating full-fledged applications. Lambda has a limit of 50 megabytes for a zipped deployment package (250 megabytes unzipped), which makes it disadvantageous for deploying heavy dependencies, especially if you're trying out, let's say, heavy packages like machine learning models or something like that. Chalice also binds you to your AWS resources, and thus it can create a vendor lock-in problem, especially if you want to migrate your deployment to another option in the future. But all that apart, Chalice can really be seen as a handy way of writing Lambdas and deploying simple APIs. With this, we have come to the end of this talk.
Thank you everyone for joining me for this short talk on Chalice, and I would be ready to take up any questions if there are any. Excellent, thank you so much, Harsh. That was a great presentation. I didn't know about this project, Chalice, and I'm sure I will try it out in the future. Also, thank you for being very clear in your presentation about the pros and the cons, and for the example that you gave. We have some questions. I will copy one and show it to you on the screen; let's see if it shows up. Okay: how do you feel Chalice compares against other serverless frameworks? As I said before, Chalice is not mature enough yet to develop and deploy large-scale APIs. My personal use cases have been just in my personal projects, and sometimes just for triggering a few functions. So compared to the other serverless frameworks, I guess I would go with them until I see Chalice become mature enough to handle large-scale executions for me. But if you want something simple that ties in with the other AWS services, and if you want just simple Python syntax, Chalice might be a very good option for you. Awesome, awesome. We still have maybe a couple of minutes, so while people type their questions in the room, I will beat them to it because I'm already here, and I will ask you another question, about latency. Maybe the latency that you get from a serverless application versus a fully managed server application that you write and deploy and configure and scale and all that kind of stuff: are they comparable latency-wise, or maybe with serverless there is a little bit of latency to start the services? I don't know if you have any experience with this. Yes, so this is a standard problem with Lambda, and if you read more about Lambda, you will learn that Lambda can have a very slow initial response time. This is exactly what we call the cold start latency.
So this is a definite engineering problem, and this is why many people have a significant apprehension about going with the serverless frameworks. But there are definite workarounds, which are very well documented, for how we can handle these latency issues with the serverless frameworks. Okay, makes sense, makes sense, very cool. Thank you so much. We are running out of time, and you folks can continue the conversation in the Matrix breakout room. We now have a very short break, a four-minute break; we start again at nine oh five Central European Summer Time with some morning announcements in Optiver. And then we continue with the first keynote for the day, Claudia Comito, and she's talking about lots of cool stuff: data science, data analysis frameworks, you know, a lot of stuff. So I will see you there at nine oh five CEST in Optiver. Thank you again, Harsh. That was a great presentation, very interesting. I'm going to try this technology right now. And thank you folks, and see you in five minutes.