Good afternoon, everybody. My name is Praveen, and I'm a senior solutions architect with Amazon Web Services. For the next five minutes, we'll talk about the different Python workloads you could run on AWS and how AWS can help you with them.

Before we even go there: we provide you an SDK called Boto3. That's the SDK through which you interact with any AWS service; whether you want to use an EC2 instance, our virtual machine, or our storage, you'd be using it. If you're working with Python on AWS, you should have Boto3. I'll show a small example of it in a moment.

So let's talk about some of the Python workloads you could run on AWS. Let's take the web workload first. If you want to run a website or a microservices, API-based architecture, we have something called Elastic Beanstalk, which is our PaaS offering, and it natively supports Django and Flask applications, so you could run them on it directly. Typically, to run a web application yourself, you'd get a virtual machine, install Python on it, and install your application on it. Then you're responsible for autoscaling that application; if the load increases, you need to make sure the capacity is there. That's the work Elastic Beanstalk helps you avoid: it takes care of automatically, horizontally scaling your application, and from it you can connect to any other data store, like DynamoDB. So that's the first compute option you have.

The second option I want to talk about is Lambda. Lambda is our serverless architecture, function as a service if you want to call it that. You could have your API call a Lambda function, or any of about 60-plus different events within AWS could invoke your Lambda function. With Lambda, you upload the function, and you don't really need to worry about how to scale it or patch it; if the load suddenly increases, we make sure we bring up more instances for you. So that's the second option to run your compute.

And the third: let's say you have something like Celery that you want to run as a distributed task queue. Again, we provide you a couple of options. One, you could always bring up an EC2 virtual machine, install your Celery distributed application on it, and run it there. The second option is that we support Docker containers: we provide something called ECS, which is our container orchestration service, and you could use that. But if you're more comfortable with something like Kubernetes, we also provide what we call EKS, the Elastic Kubernetes Service, and you could use that instead. So you could use a virtual machine, you could use Docker containers, or you could run it in a serverless environment. That covers the web kind of application.
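To make the Boto3 point concrete, here's a minimal sketch of calling AWS services from Python. The region is a placeholder, and credentials are assumed to come from the environment or an instance role.

```python
import boto3

# Create a client for S3; Boto3 picks up credentials from the environment,
# ~/.aws/credentials, or an instance role. The region here is an assumption.
s3 = boto3.client("s3", region_name="us-east-1")

# List the S3 buckets in the account.
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# The same client pattern works for EC2 and every other service.
ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```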
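And since Lambda came up: a Python Lambda function is just a handler that receives the triggering event and returns a response. This sketch assumes an API Gateway proxy invocation, where query parameters arrive under `queryStringParameters`; other event sources use different shapes.

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes when the function is triggered.

    For an API Gateway proxy integration, query parameters arrive under
    'queryStringParameters' (which may be absent or None).
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```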
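For the Celery case, one common pattern is to point Celery at Amazon SQS as its broker and run the workers on EC2, ECS, or EKS. A minimal sketch, assuming the `celery[sqs]` extra is installed and AWS credentials come from the environment:

```python
from celery import Celery

# "sqs://" tells Celery to use Amazon SQS as the message broker; AWS
# credentials are read from the environment or the instance/task role.
app = Celery("tasks", broker="sqs://")
app.conf.broker_transport_options = {"region": "us-east-1"}  # assumed region

@app.task
def add(x, y):
    # A trivial task; workers on EC2, ECS, or EKS pick it up from the queue.
    return x + y
```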
Now, let's say you want to run a typical machine learning, artificial intelligence, or deep learning kind of application. We provide various services for that as well, and the one I want to talk about is a service called SageMaker.

What SageMaker does is give you about 17 of the most popular algorithms built in, so you could start using them right away. The next thing it provides, to make your life easier when you want to do distributed training, is what we call one-click training. You just tell us which algorithm you want to use, where the data is, and how many nodes you want for the training. We provision that many machines and run the training for you. Once the training is complete, we bring those machines down, throw them away, and copy the model to a secure location where you can start using it. So that's the second feature SageMaker provides.

And the third thing it provides is help with inference. If you want to deploy your model, it's a one-click deployment: you deploy the model, and we give you a RESTful endpoint that your application can start using. This inference layer can also auto-scale; when your load increases, we scale it for you automatically. So these are some of the benefits of using SageMaker. Apart from that, if you're doing deep learning applications, we also provide GPU capabilities with NVIDIA Tesla GPUs.
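Putting the training and inference story together, here's roughly how that one-click flow looks with the SageMaker Python SDK. This is a sketch assuming SDK v2 parameter names; the IAM role ARN and S3 paths are hypothetical placeholders.

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical role ARN

# Pick one of the built-in algorithms (XGBoost here) by resolving its
# container image for the current region.
container = image_uris.retrieve("xgboost", session.boto_region_name, version="1.5-1")

# "One-click training": say which algorithm, where the data is, and how
# many nodes you want; SageMaker provisions and tears down the cluster.
estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=2,                      # number of training nodes
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",  # hypothetical bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="reg:squarederror", num_round=100)
estimator.fit({"train": "s3://my-bucket/data/train/"})

# "One-click deployment": stand up a managed HTTPS endpoint for inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```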
If you have any questions on what I spoke about just now, we're at the AWS booth over there; we can meet you there and discuss. Thank you.