Good afternoon, thank you for coming. This talk is about how to introduce microservice and serverless architectures in Python projects in general. I will try to explain this with practical examples, so that afterwards you can decide whether microservice or serverless architectures are a good fit for your projects or not. We start with a couple of definitions of what microservices and serverless are, with common examples. These are the main points I am going to cover. First I will talk about microservices in Python: the frameworks and tools that we have in Python for building applications that follow this type of architecture. Then I will introduce serverless: what serverless and Functions as a Service are, and the main points of this type of architecture. Later I will comment on the main Python frameworks for working, for example, with Amazon services from Python, and then I will focus on tools like Zappa and Chalice. Finally I will try to show some demos of deploying AWS Lambda functions from the Amazon console. This is a chart from Google Trends where we can see the evolution of both terms. Microservices are at this moment ahead of serverless, but the term serverless started to become known at the beginning of 2017. Nowadays microservices is a more used term than serverless, but in the next years serverless architectures and serverless topics may grow and even equal the interest in microservices. When we talk about microservices, a microservice architecture puts each functionality into a separate service and scales by distributing those services across servers. A microservice architecture exists when your system is divided into small responsibility blocks and those blocks don't know each other; they only have a common point of communication,
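The idea of responsibility blocks that only share a common point of communication can be sketched in plain Python. This is a minimal illustration, not part of the talk's slides: the in-process queue stands in for a real message broker such as RabbitMQ or Redis, and the service names are invented for the example.

```python
import queue

# The shared communication point: in a real deployment this would be a
# message broker (RabbitMQ, Redis, ...), not an in-process queue.
bus = queue.Queue()

def order_service():
    # Publishes an event; it knows nothing about who consumes it.
    bus.put({"event": "order_created", "order_id": 42})

def billing_service():
    # Consumes events; it knows nothing about who produced them.
    event = bus.get()
    return f"billing order {event['order_id']}"

order_service()
print(billing_service())
```

Each block could be moved to its own process or server without changing the other, as long as the communication point stays the same.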
generally a message queue. In Python, for example, we can find tools like Celery, Redis and RabbitMQ for developing our architecture with microservices. If you are building microservices in Python, we also have these two frameworks, Tornado and Twisted. These frameworks are very useful when the number of concurrent requests grows and it is important that those requests are handled asynchronously: when you have a lot of concurrent requests and you want to handle them in a specific process or thread in an asynchronous way, you can use these frameworks. In Python we also have the classic combination of asyncio and aiohttp. In microservice architectures, asynchronous calls play a fundamental role when a process that used to be performed in a single application now involves several microservices. An asynchronous call can be as simple as a separate thread or process within a microservice-based application. In this example we are using asyncio and aiohttp as the main model for doing these tasks in our application. In Python we can also find frameworks like Flask or Django. These frameworks are basically oriented towards developing REST APIs and communication systems based on HTTP, which fits perfectly for building our microservice architectures. These two frameworks are the best known in the Python ecosystem for web projects. In my opinion, if we have to choose between the two, I think the best option is Flask, maybe because it fits perfectly with the single-responsibility principle of microservice architecture. Flask is also less opinionated, for example if you want to create an API gateway, work with non-relational databases, or use other systems like GraphQL, and later I will comment on this kind of architecture with GraphQL.
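Going back to the asynchronous calls mentioned above, here is a minimal sketch using only asyncio from the standard library; the service names and delays are invented for the example, and in a real microservice each `call_service` would be an aiohttp HTTP request rather than a sleep.

```python
import asyncio

async def call_service(name: str, delay: float) -> str:
    # In a real microservice this would be an aiohttp request;
    # asyncio.sleep stands in for network latency here.
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def main():
    # Fan out to several services concurrently instead of sequentially:
    # total time is ~0.1s, not 0.3s.
    return await asyncio.gather(
        call_service("users", 0.1),
        call_service("orders", 0.1),
        call_service("billing", 0.1),
    )

results = asyncio.run(main())
print(results)
```

The same fan-out pattern is what makes one microservice able to aggregate responses from several others without blocking on each call in turn.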
From the performance point of view, there are some studies that say that Flask turns out to consume about 8% less memory and to have about 6% faster response times than Django. In those studies, in terms of memory, CPU and response time, Flask has better performance than Django. When we are designing our architecture around this kind of tool, we must keep in mind that microservices usually come with a greater consumption of memory and resources, so it is important to study this performance aspect in our application. Another type of architecture we find in microservices is the use of GraphQL. GraphQL brings to a microservice arrangement things such as data ownership separation, granular control over data at execution time, and service caching. Another benefit of adopting GraphQL is the fact that you can assert much greater control over the data-loading process: because data loading goes through its own endpoint, you can control in a granular way how data is transferred, and GraphQL provides a good solution for this kind of application. We can also combine GraphQL with Python. GraphQL is basically designed to replace the REST APIs we can build with Flask or Django REST Framework. What frameworks can we use? For example, Graphene is a Python library for building GraphQL APIs in an easy way, and it can integrate with applications built on Django, Flask or SQLAlchemy, for example. Graphene allows you to define your models and the attributes of those models, and then issue queries to get the data for those particular models. In the same way that we have the Django ORM, for example, we can use Graphene to define our models and schemas and interact with databases, whether relational or non-relational.
Another type of architecture that we can build with microservices is the combination of Django and Celery. Django can emit many tasks, and in such an architecture we can use, for example, Redis or RabbitMQ as the message broker; what Celery basically provides is an asynchronous distributed task queue, through which we can pass messages to the application. Celery is a good solution for real-time operation, but it supports scheduling as well, and as I commented, Celery is very useful for background task processing; it can communicate with Django through a message broker like Redis or RabbitMQ. Another solution that we can use in Python is ZeroMQ. ZeroMQ provides building blocks to develop a scalable distributed system on top of sockets. We can build, for example, a simple publish/subscribe microservice with ZeroMQ and Flask in a very simple way. In this example, this is our server or publisher: the response of a service call is published as a message, and the message is published on the socket that ZeroMQ provides; later our client will read the message from this socket and the application will respond to that message. And this is the client part, where we can see our ZeroMQ client or subscriber. In this example the client has an init method, the constructor, where we initialize the host, the port and the context to subscribe the client; the client is subscribed to the channel where the server is sending messages, and when the server sends a message the client responds, receiving the message and connecting to the subscribed topic. Well, in general, the idea with microservices is to break the monolithic architecture and separate each relevant functionality into a specific component. Microservices also let software start small and improve incrementally in
small, simple steps; as the services are decoupled from each other, it is very easy to upgrade and improve one service without causing the others to go down. Microservices can also be much better delivered via containers, for example with Docker: if we are working with a Docker or Kubernetes solution, containers are a very useful fit for microservices because they provide an effective, complete virtual operating-system environment, process isolation, and so on. Well, now I am going to comment on the serverless part. I will start with the definition; this is the definition that we can find in Wikipedia. Basically, serverless computing is the practice of building and running services and applications without having to worry about provisioning and managing servers. What we are doing is running our code in execution environments that are managed by cloud providers such as Amazon, Google, Microsoft and so on. With serverless you can deploy your code in the cloud without worrying about instance capacity: it is automatically managed by the cloud provider in a way that is completely transparent to the developer. The serverless architecture that I am going to comment on is based on Functions as a Service. Functions as a Service means that every serverless module that you program is a function, and this function is executed on the cloud. The provider gives you features like scalability, provisioning and deployment, all those things that you normally handle on your own machines, and another feature is that you only pay for the time your code is running on the cloud provider's servers. The best feature of serverless computing is that it allows us to focus on building the application without managing infrastructure: the developer focuses on the inputs and outputs of the Lambda function and the logic in between, and the rest of the infrastructure,
servers, memory, CPU, is all managed by the cloud provider, and you only need to focus on the important part of the application: the logic, the inputs and the outputs. In general, these are the use cases for serverless functions. For example, we can build REST APIs in the same way that we can build them with Flask or Django REST Framework; we can also use this kind of architecture for building chatbots. For events it depends on the cloud provider: Amazon, for example, provides S3 events for file processing and data ingestion, so this kind of architecture is very useful for data and S3 processing, IoT projects and event handling, because Amazon provides CloudWatch event logs. There are also other kinds of scheduled tasks, like monitoring; all of these are use cases that we can find in serverless for managing projects. Later I will comment on the drawbacks of serverless, but in general serverless functions are well suited to operations that take a very short time to complete and are not resource intensive. Why? Because a cloud provider normally imposes execution-time and memory limits, which means that if the workload is a long-running process, a conventional server model may be more appropriate: instead of using a serverless solution, we need to use a conventional server because our process, thread or application exceeds the time and memory limits imposed by the cloud provider. This is another example, one that Amazon provides, a sample for processing and serving multimedia with AWS. This was popularized by Amazon; basically it is an image-processing event-handler function. The function is connected to a data store such as Amazon S3 that emits change events, and each time a new image is uploaded to a folder in S3 an event is generated and forwarded to the event-handler function, which generates a
zoomed version of the image that is stored in another folder. In that way, each time a client uploads a multimedia file to S3, the data storage of Amazon, the process is executed: if the multimedia is a video, the Lambda function is responsible for calling, for example, the Elastic Transcoder service, which is the service that Amazon uses internally for converting and processing video; and if we have an image, then we call the other service that Amazon has for this task. Basically, this whole graph is our function, and depending on the event that is produced, it calls one side or the other. The benefits that a serverless solution provides: you don't have to worry about clusters or load balancing, and you lower infrastructure costs. As I commented before, by developing a serverless system the infrastructure cost can be greatly optimized, as there is no need for servers to be running permanently: the server starts whenever the function is triggered or an event is produced, and stops whenever the function finishes executing successfully. We also have some drawbacks, for example drawbacks related to debugging and monitoring. Monitoring the behavior of multiple functions can be more complicated than monitoring a microservice, for example. For debugging in this kind of architecture we have good support in general; most of the tools I will comment on later include visualizing logs in the different development environments. For example, in Amazon we have CloudWatch Logs for getting metrics, monitoring and debugging. There are many tools, but this can get better in the future. Another drawback is the tooling around the development automation of serverless: there are many developments, but many of these tools are still in development, and another
drawback is that we have no control over the containers. What these cloud providers do, when you invoke a Lambda function, is internally create a container for executing the code, and we have no control over this execution environment, over the containers. Well, regarding the cloud providers, the best known is maybe Amazon, because it has a lot of services; the one I am commenting on now is the Lambda service. We also have Google Cloud Platform and Microsoft Azure, and as open-source solutions we have Apache OpenWhisk and Kubeless. Kubeless is a serverless framework for Kubernetes: if we work with Kubernetes, we can use Kubeless as a serverless framework for doing these tasks. And OpenWhisk is an open-source project from IBM that provides native support for many languages and runtimes via Docker containers. These two frameworks are designed to be deployed, for example if we are working with Docker, over Docker or Kubernetes clusters, and they offer an open-source serverless environment where we can execute our code; they support many languages like Java, Python, Node and PHP. Well, at this point I am going to focus on the Lambda service. Lambda is the main Amazon service dedicated to serverless computing. It is integrated with the rest of the Amazon services, like API Gateway, DynamoDB, the notification service, S3 and so on. It is basically a service that allows you to write Python code that gets executed in response to events, like an HTTP request or, for example, when we upload a file to Amazon storage; we can write our functions to respond to events like these. With Amazon we basically have two types of calls, two approaches when we are programming with Lambda: the synchronous one, which implies that we have a REST API and through API Gateway we invoke Lambda functions behind the API, with the classical
request/response pattern; and the other approach for working with Lambdas is the asynchronous way, which implies that there is a Lambda function listening to events, for example when a file is uploaded to storage in Amazon, or when we get a notification in the Amazon notification service, and when this action is produced, the Lambda function is called. Lambda functions are equivalent to Celery workers, if we know that kind of architecture: if we have worked with Celery, a Lambda function is the equivalent. The idea with Lambda functions is that they can be triggered asynchronously via some AWS event. The benefit of running a Lambda function is that you don't have to deploy a Celery microservice that needs to run 24x7 to read messages from a queue. A Lambda function is a handler function that Amazon can invoke when the service executes your code. This is the basic syntax, the real syntax structure when creating a handler function: we have as parameters the event and the context. The event is the parameter for passing data to the handler; this parameter is usually a Python dictionary. The context parameter provides the runtime information to your handler function. The return value of this function can be anything from basic values to Python dictionaries converted to a JSON stream, for example. How can we create this function in Amazon? For example, with the AWS CLI: from this client we can create a Lambda function, providing the region of our account in Amazon, the zip file with our code, our role, the runtime with the Python version that we are executing, and other parameters related to memory and timeout. And what is this equivalent to? If we go to our console in Amazon, this is the web interface that Amazon provides for creating a Lambda function: we can deploy our Lambda function by adding a zip file, and from the configuration point of view the most important thing is obviously to specify the runtime, that is, the Python version.
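A minimal sketch of the handler structure described above; the event keys here are invented for the example, since each real trigger (API Gateway, S3, SNS, ...) defines its own event shape.

```python
import json

def lambda_handler(event, context):
    # 'event' carries the input data, usually as a Python dict;
    # 'context' exposes runtime information (remaining time, request id, ...).
    name = event.get("name", "world")
    # Returning a status code plus a JSON string body is the shape that
    # API Gateway proxy integrations expect from a handler.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can exercise the handler by passing a fake event:
print(lambda_handler({"name": "EuroPython"}, None))
```

This is also why testing Lambda code locally is straightforward: the handler is a plain function you can call with a dictionary.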
What are the main frameworks that we have in Python? Well, to simplify the development process of Lambda functions there are some frameworks and tools for developing and managing your Python serverless applications. I will focus on the two main ones, Zappa and Chalice, but there are many others, like Serverless, lambdify and python-lambda. When we are working with Lambda functions, it is important to know the Amazon API Gateway: this service lets you define an API that is fully managed by Amazon and offers a variety of ways to control access to your APIs. The Serverless framework, for example, is built in JavaScript and requires Node.js; you can install it via npm, and it provides boilerplate templates for creating our projects. Well, I am going to start now with Zappa. Zappa is basically a framework with which, in an easy way, you can deploy your Python applications with AWS Lambda and API Gateway. With Zappa, each request is given its own virtual HTTP server, and it communicates with the Amazon API Gateway: basically it creates a virtual server in the cloud and it creates an API in Amazon API Gateway. Zappa provides a command-line tool for Lambda functions, for developing and deploying, for working for example with Flask and Django; it has a lot of features. In this example we can see that with the zappa init command the tool provides an interactive wizard that helps you to set up a Zappa project quickly when you are first getting started. The command-line tool asks you simple questions: for example, it detects if you are working with a Flask or Django project, it has the capacity to detect this automatically, and the wizard asks you about the configuration environments, whether you are working in the development stage or production, and other variables like your main Lambda function. With a single command you can deploy, update or destroy an
application for a specified stage, that is, development or production. Once you create your project, you manage the configuration directly in the Zappa settings file. This is basically a simple JSON file that stores your project configuration for each stage and includes information about the region, the execution role, the runtime and application settings. When you deploy, Zappa automatically creates the package, uploads the file to Amazon's data storage, creates and manages the necessary policies and roles for the users, and registers all the information in the API Gateway. This is an example where we can see that with Zappa we basically need a virtualenv, and this is the hello world for Zappa: executing a simple command, zappa deploy dev, it automatically packages the application, creates the API Gateway routes and deploys to the API Gateway, and you get the URL where the service is working. Another interesting feature that Zappa offers is the possibility to execute functions in an asynchronous way. In this example, you have a Flask API to make an order, and you can call a function in a completely separate Lambda instance by using the Zappa task decorator; you can set up your Lambda functions in the same way that you would configure your Celery workers. This is the equivalent to configuring a Celery worker, for example. The other solution that we are going to comment on is Chalice. Chalice is a microframework for managing Lambda functions and the API Gateway, in the same way that we have seen with Zappa. Chalice provides a command-line tool for creating, deploying and managing your app; the project is available on PyPI, you can install it with pip install, and it is available on GitHub. Chalice enables single-file HTTP handling, and you can use Python decorators, for example the route decorator, which automatically builds a routing table for the Lambda function to invoke functions based on the request path and HTTP method. In this
example, we declare our function with the route decorator for /weather and pass as a parameter the city for obtaining the weather. We can build our REST API in an easy way with Chalice. For example, here we have a classical REST API for getting, updating or deleting objects, in this case todo objects. The code is very simple: we only need to annotate the function with the route decorator, and in the methods parameter we define whether we want a GET, a POST, a DELETE or a PUT. In this case we are getting the todos from an in-memory database, for example; and for the update we also need the id of the todo, so we can see that we pass the id parameter to modify the todo information. We can see that in an easy way we can develop our REST API and deploy it automatically. Chalice provides many options; these are the options it provides: we can run our application locally on localhost, we can create a new project, and we can generate policies. This is basically what Chalice does when we run chalice deploy: Chalice internally updates the Lambda functions, sends the changes to Lambda and to the API Gateway where the REST API is already defined, and deploys to the environment. Finally, if we want to test this kind of solution on our own machine with Docker, we have a collection of containers for testing the AWS Lambda environment; in particular we have the docker-lambda image. It is a sandboxed local environment that replicates the live AWS Lambda environment almost identically, including the software, libraries, environment variables, context objects and behaviors. And finally, commenting on the cost that using this service supposes, we have the serverless cost calculator: it is basically a cost calculator where we introduce the number of executions, the estimated execution time and the memory. It includes the free tier, because Amazon includes a free tier for testing this kind of service, and depending on the
inputs in terms of memory and so on, it returns the possible cost of using this service, and you can compare which is the cheapest solution, to make a provision for your project. For learning this type of architecture we have more examples: Amazon provides blueprints, which are pre-configured templates for learning this kind of architecture, and in the serverless repositories on GitHub we have many examples for learning this kind of solution. We can find how to program a Telegram bot, for example, on Amazon, or how to develop our REST API with DynamoDB, which is a database from Amazon. And finally, these are the references used for preparing the talk. That's all; these are the conclusions: basically, using serverless we can improve a microservice architecture by eliminating the need to have the servers always active; serverless provides many advantages, and the use of serverless computing is increasing in the industry. And that's all, thank you. We have time for maybe one question. [Audience] Regarding this kind of architecture, what I found with my team is that we had an issue with external dependencies, because usually when you are in Amazon or in Google Cloud and you depend on third-party libraries that sometimes are compiled, they really deny you the ability to install them. Did you have this issue at some point, an issue related to third-party library dependencies? When you get into this architecture, where you are doing pure Python, are you able to import third-party libraries? Because my team has always had trouble with this. [Speaker] I have not had trouble with third-party libraries. [Audience] I mean some PyPI compiled modules or whatever. [Speaker] You can install whatever library you need; if the dependency is on PyPI there will be no problem. [Audience] For us, in Google Cloud, it failed, basically failed, so that's what I mean. And how easy is it to move away from this kind of architecture and rewrite, if you decide it's a
mistake? [Speaker] For moving away, these projects provide command-line tools, for example for getting out of Amazon and moving to another provider, or just moving onto your own server if you need to run it on-premise on your servers. [Host] I'm afraid we don't have time for any more questions; let's give another big round of applause to Jose.