So, here I'm talking about Python for cloud services and infrastructure. Let me tell you something about myself. I'm Bomic, from India. I have around five years of development experience. I'm an open-source enthusiast, I love Python, and I do some DevOps and site reliability work, but my core area is the cloud.

Let me tell you a little about my current product. We have developed a smartphone case that can read your body vitals when you place a finger on it: BP, ECG, heart rate, skin temperature, and SpO2. It shows those readings on your mobile and syncs them with the cloud. I worked on the cloud part, so this talk is all about the cloud and its internal components.

To build any web application or cloud service, we face the same questions: where to start, which tools to use, how to integrate them, and what needs to be taken care of. Hopefully this presentation will give you those answers. I'll walk you through the Python-friendly stack I have been using for three-plus years, which has been very reliable. Any web application or cloud service will have a set of components like this: a web framework, a WSGI server, a task queue, a database, logging, caching, and a few other things. We'll discuss all of these.

So, where to start? The first thing is the web framework. Let's finalize that first: what kind of web framework are we going to use? When it comes to Python, we have two famous frameworks available: Django and Flask. Django is fully armed, with batteries included. Flask is a small framework that is very extensible, so you can attach other things to it. These two are the most famous.
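To make the comparison concrete, here is a sketch of what a bare-bones JSON endpoint looks like with only the standard library — the routing, parsing, and response boilerplate that Django and Flask handle for you. The `/health` route and its payload are made up for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Hand-rolled endpoint; a framework would provide routing,
    request parsing, and response helpers instead of this boilerplate."""

    def do_GET(self):
        if self.path == "/health":  # manual "routing"
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # silence default stderr logging
        pass


# To actually serve (blocks the process):
# HTTPServer(("127.0.0.1", 8000), HealthHandler).serve_forever()
```

In Flask the same endpoint is roughly three lines, and in Django a view plus a URL entry — which is the whole argument for picking a framework rather than writing this yourself.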
I always prefer Django because it has a few things included, like authentication, so you don't need to worry about building those yourself.

Next: how do we run our web application? We need a WSGI server. What does a WSGI server do? It basically takes the request data from the layer above, processes it inside its worker processes, and generates an output — the response — which it sends back to the requester. We have two good WSGI servers available in Python, and these two are the most famous: Gunicorn and uWSGI. I generally prefer uWSGI because it has more configuration power, but there is no strong reason either way; you can choose either of them. Either way, you should have a WSGI server.

The next question is the database. What kind of database do we need — SQL or NoSQL? It depends on the use case. If we have data elements that need to be relational, we need a SQL database; PostgreSQL and MySQL are the ones to go for. As for NoSQL: currently, in that product's cloud, we use MongoDB for about 90% of the transactions. NoSQL is a fit when you have unstructured data.

Next is the web server. No client request communicates directly with the WSGI server; it passes through the web server first. The web server takes care of all incoming requests, handles them, and passes them to the WSGI server, which processes them and returns the response. That is the general architecture to figure out. The two most used web servers are Apache and Nginx — combined, they carry about 50% of internet traffic right now. Nginx is mostly known for speed, and Apache for configuration power. Based on your use case, pick your web server.
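The contract between the web server layer and your application is just a Python callable with a standard signature — that is all Gunicorn or uWSGI invokes per request. A minimal sketch of that callable (the greeting body is made up for illustration):

```python
def application(environ, start_response):
    """Minimal WSGI app: the callable Gunicorn or uWSGI calls per request.
    `environ` carries the request (method, path, headers, body stream);
    `start_response` is how we hand back the status line and headers."""
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from {path}".encode()
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # WSGI response bodies are iterables of bytes


# Run under Gunicorn:   gunicorn mymodule:application
# Or with the stdlib reference server (blocks the process):
# from wsgiref.simple_server import make_server
# make_server("127.0.0.1", 8000, application).serve_forever()
```

Django and Flask both ultimately expose an object with exactly this signature, which is why you can swap Gunicorn for uWSGI without touching application code.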
This one is very interesting: next is the task queue. When you want to perform ad hoc tasks on some request — say a create-account request comes in and you want to send a newsletter or a welcome email — that work can be done in a distributed way, so you should have a task queue. You could write your own, but you shouldn't, because existing task queues already solve problems like queuing and retries. For example, a task comes in to send an email to a customer who just registered on our website; if a failure happens, we should have a mechanism to retry it. Timeouts, scheduling, and task status reporting are handled by existing task queues too. Celery is the most famous; Redis Queue is also there. I always prefer Celery because it has a good dashboard as well, with good monitoring of tasks. So you can go for Celery.

Next is logging. Logging can be event-based, or you can log everything, or only specific things. I always prefer Sentry there, because it has good logging power plus a good dashboard where you can search your event data and monitor all the events. It also provides notifications: you can configure it so that if any error occurs, you get an email. CloudWatch is another option, so you can go for that too.

You also need to take care of other components. As I said, source control is a must — I think everybody uses it. If I want to check out some specific version, source control will do all of that, and it also provides good history, accountability, and remote accessibility. These two are the most famous source control systems.
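To show what the retry mechanism mentioned above buys you, here is a framework-free sketch of retry with exponential backoff for a flaky task. This is a toy stand-in for what Celery provides out of the box (where you would decorate the task and call `self.retry()` instead); `send_welcome_email` and its failure mode are hypothetical.

```python
import time


def run_with_retries(task, max_retries=3, base_delay=0.1):
    """Toy version of a task queue's retry mechanism: re-run a failing
    task with exponential backoff, and give up after max_retries."""
    for attempt in range(max_retries + 1):
        try:
            return task()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the error
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff


# Hypothetical flaky task: fails twice, then succeeds on the third try.
attempts = {"n": 0}

def send_welcome_email():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("SMTP server unavailable")
    return "sent"
```

The point is the same as with Celery: a transient failure gets re-queued rather than silently losing the customer's email.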
One is Git, the other SVN. We use Git.

Next, deployment and configuration. These two are the most important things after development: checking out a repository in the cloud, remotely accessing the server, and executing commands over SSH. A few tools are there for this. We use Fabric, which has good SSH-based remote access, and Ansible, Chef, or Puppet can be used for configuration management. Deployment should be automated, not manual — we should not do it by hand. Jenkins is there for that; it will initiate all the automated deployments.

After everything is deployed, we should keep our eyes on processes, systems, and activities, so we need to monitor the system. How many nodes are there in our cloud? What is each server's health? We need to monitor all of that. For that purpose we use Nagios, which provides a very good monitoring platform for systems as well as processes. When I say processes, that can mean uWSGI processes, or Nginx, or whether the MongoDB service is running well or not — all of that can be monitored through Nagios. Sentry is also a kind of monitoring service: it watches our logs, and if any error or anything unusual happens, we are directly notified by email. It is basically a logging system, but it provides a kind of monitoring. Supervisor is there too; it controls your processes — you can start and stop them from its dashboard and configure it for your use case.

Caching is also one part of our cloud where we need some cache mechanism, depending on the use case. If we are serving static content on a website, we should have a front-end cache.
And if we are serving content that is not static but is reusable or frequently requested, we can implement a caching mechanism with Redis or Memcached. We use Redis there.

Next, messaging. I would like to share one use case where messaging can help. Say you have multiple app servers or multiple DB servers — let's take app servers, for example three of them. If one app server finishes some task and wants to notify the others, "I have done this task, now you can start," messaging is useful at that point. We use RabbitMQ, which provides a pub/sub (publisher and subscriber) mechanism: a subscriber joins and subscribes to a channel, and it is notified whenever something is published on that channel. That's messaging.

Next, virtualenv. I love this feature because it provides isolated Python environments on your system — you can have any number of them. If you are working on different projects, you always have different dependencies: one project needs version 1.0 of a dependency, another needs 1.1. With virtualenv you create one environment per project, install its dependencies there, and they stay isolated from each other. It's a really great feature.

Analytics: you always like to see how your users or clients are using your website or application. Either you can use Google Analytics for static pages, where JavaScript sends data to the cloud and shows it on a dashboard, or you can build internal analytics with your own dashboard. A few tools are available for that, like Graphite and StatsD, which collect all the metrics, and Graphite will arrange them into graphs.
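The Redis/Memcached caching pattern above boils down to a key lookup with a time-to-live. Here is a minimal in-process sketch of that pattern, assuming lazy eviction on read (which is one of the strategies Redis uses for expired keys); in production you would call a Redis client instead.

```python
import time


class TTLCache:
    """In-process stand-in for the Redis/Memcached pattern:
    set a value with an expiry; reads return None once it is stale."""

    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazy eviction of the expired entry
            return None
        return value
```

The usual flow is: try the cache first, and only on a miss hit the database and re-populate the cache with a fresh TTL.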
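The publisher/subscriber pattern described for RabbitMQ can be sketched in-process as a toy broker — the channel names and messages here are made up, and a real broker like RabbitMQ additionally works across machines, persists queues, and handles acknowledgements.

```python
from collections import defaultdict


class Broker:
    """Toy in-process pub/sub: subscribers register a callback on a
    channel; publishing on that channel notifies every subscriber."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        for callback in self._subscribers[channel]:
            callback(message)
```

In the three-app-server example, each server would subscribe to a "task done" channel, and the server that finishes publishes once — the broker fans the notification out to everyone, so the publisher never needs to know who is listening.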
A few other things we need to take care of. Scaling: it depends on your use case and what kind of load you have. You may have database overload, so you can scale your database vertically or horizontally. You can have performance-based scaling, where the number of application servers is not sufficient, so you scale out those applications and servers. You should have periodic backups of your data, which can be done through Jenkins and Fabric jobs — you need to figure out how to set that up. Periodic backups give you data reliability: your data will always be there if you have all of that in place. And load balancing is there: through Nginx or Apache, you can balance your load. Yeah, that's it. Any questions? Thank you. Thank you, Bomic.