Hello, my talk is about making your application AWS-ready: how you can plan for scalability and fault tolerance and make your application ready for disaster recovery. The talk is about deploying an application on AWS, so it is not only applicable to Django; you can map it to any general-purpose web application, and to other languages too, like PHP or Java. So let's start. I believe you have already heard about AWS, because when you talk about cloud, the first name that comes up is AWS, Amazon Web Services. I will give a brief introduction, and then cover the services you need to know, because AWS provides a large number of services for different purposes, but the services you need may be just a few. So: what are the basic services you need to know to deploy your application on AWS? Second, we will look at a scalable AWS architecture. I will go through some steps to help you think about how to add scalability, because this is a topic that can go a long way, so we will build it up step by step. Next we will look at Django specifics, because we will have to use the S3 service, so we will see how to integrate it. And then the deployment methods; there are a couple of deployment methods I will talk about. Starting from AWS services: here you can see the AWS console, and AWS provides a number of services in different categories. For compute it provides EC2, for databases it provides RDS, and there are also networking, deployment, and mobile-specific services. There are 45-plus services already launched, and every year they launch at least two or three more. But you don't need to know all of those, just the basic ones.
Then, as your application requires, you can learn and implement more. Let's see the basic services you need to know. The first is EC2. EC2 is a virtual machine in the cloud where you run an operating system and install your application, so this is the basic service you must use to host your application on AWS. Second is RDS. First, note that AWS is fundamentally infrastructure as a service, which means in general you manage everything yourself; it is not primarily a platform as a service. But some AWS services are managed, and RDS is one of them: a managed database service where you can host your database. It supports four or five database engines, like MySQL, Microsoft SQL Server, and Oracle, so you can just start an RDS instance, deploy your database, and not worry about backups or patch installation; it does that itself. So this is a managed service on top of the AWS IaaS infrastructure. Next, for storage, S3 is the popular service for storing user content and the static content of your site, like JavaScript, CSS, and images. Another service is Route 53, a scalable Domain Name System service. It's very easy to use: you point your domain registrar's name servers to the Route 53 name servers, and then you can map your domain to a particular EC2 or load balancer URL, so your users can reach your AWS-hosted application. CloudFront sits on top of S3: it is a content delivery network you can use if you want your content to download really fast on the user's side. If you are hosting media like images and video, CloudFront delivers them from the edge location nearest to the user's physical location. The next service you might be interested in is CloudWatch, the monitoring service that monitors your AWS resources.
Mostly we monitor EC2 and RDS, where you can watch parameters like CPU utilization, disk reads and writes, and network traffic, and based on those you can take decisions: if you are running out of resources you can increase them, or you can set rules. For example, if CPU utilization is more than 80% for 5 or 10 minutes, it can automatically start another instance, which adds scalability and will never let your application go down. And let's talk about one deployment service: Elastic Beanstalk. As I already said, AWS is mainly infrastructure as a service, but Elastic Beanstalk provides a platform as a service: you set up your application and it automatically starts EC2 and RDS and creates S3 buckets for you. It does the resource provisioning itself; you do not need to start EC2, RDS, S3, and CloudWatch separately. It provides a combination of those services in a single console, so it's an important deployment service, and I will talk about it later. Now to my main point: how to think about scalability when you are moving to the cloud. We will go through some steps. Let's start with the single-box model. A single box means one EC2 instance: your server operating system, like Linux or Windows, your web server like Apache, your database like MySQL, and your user content, images, videos, everything in a single box. Shared hosting or a dedicated server works like this. This is also how it maps onto the AWS architecture, where I mentioned regions and availability zones. Currently AWS has 9 or 10 regions, Singapore being one of them, and each region contains data centers.
Inside one region there are two or more physical availability zones; in Singapore we have two. So in one availability zone I am running a single EC2 instance and the user hits our server directly. This can work, but it is not scalable: everything is in a single box, so if it fails, everything fails. So let's use the different services AWS provides and make it scalable. The first service that comes to mind is RDS, for the database: you take your MySQL database out and host it in RDS. That's a managed service, so you don't need to worry about backups and such, and if your EC2 instance goes down, your RDS instance, and therefore your database, is still safe. That is one point where you can start adding resilience. Next, add more services, like an S3 bucket: you move your user content to S3, so after deployment, whatever users upload goes into the S3 bucket. Now your EC2 instance runs only the pure application; the database and the user content are out, and the load on the server is lower, because S3 can serve the static content directly to the browser. EC2 is no longer busy serving static content; it is hit only for application logic. If you want to make it more scalable still, you can use Elastic Load Balancing, where you start at least one EC2 instance, your application server, and depending on load you can add more EC2 instances to handle the extra traffic.
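The kind of scale-out and scale-in rules you attach to these auto scaling policies can be sketched as plain logic, independent of any AWS API (a simplified illustration; the AutoScaler class and the thresholds are assumptions for this sketch, not real AWS code):

```python
# Simplified model of auto-scaling rules: scale out when average CPU
# utilization stays above 70% over the evaluation window, scale in
# when it stays below 30%. Thresholds and class are illustrative only.

class AutoScaler:
    def __init__(self, min_instances=1, max_instances=4):
        self.min_instances = min_instances
        self.max_instances = max_instances
        self.instances = min_instances

    def evaluate(self, cpu_samples):
        """cpu_samples: CPU utilization percentages over the window."""
        avg = sum(cpu_samples) / len(cpu_samples)
        if avg > 70 and self.instances < self.max_instances:
            self.instances += 1   # scale out under sustained load
        elif avg < 30 and self.instances > self.min_instances:
            self.instances -= 1   # scale in to save cost
        return self.instances
```

In real AWS, CloudWatch collects the samples and the auto scaling group applies the policy; the min/max bounds are what keep a scale-in rule from ever taking the site completely down.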
Elastic Load Balancing also works with auto scaling groups: you create policies depending on your application's nature, for example, if CPU utilization is more than 70% for 10 or 15 minutes, start another instance. It is very easy to set these rules in the AWS console, and then an instance starts automatically when traffic increases. You can set rules for low load too, like if CPU utilization is less than 30% for more than 10 minutes, shut one instance down. That actually saves cost, which is a return-on-investment argument you can make to your client: in the daytime or evening your traffic is up, at night it is not, so you can manage that. You can even handle things like weekend traffic: if your application gets more traffic on weekends, you can set a rule so that only on weekends one more instance starts and is added to the load balancer. So there is a lot you can do to make your application scalable. Going further, if you want a proper disaster-recovery setup, you can run one instance in one availability zone and another in a second availability zone, one in each initially, and later, depending on load, more instances can start in either zone or both. That is the EC2 application layer. On the database layer you can also add resilience with a master/standby setup using Multi-AZ deployment: when you start RDS there is an option for Multi-AZ, which means RDS keeps a synchronously replicated copy in a second availability zone within the same region. You cannot use the standby database directly, but when the master database fails, RDS automatically fails over to the standby, so you do not need to worry if there is a hardware failure on the AWS side; you are still safe
even if one availability zone goes completely down, say a mass hardware failure on the AWS side takes one AZ out, the second AZ is still there and the load is automatically transferred, because there we also have at least one EC2 instance and the standby database. S3 sits outside of all this, and each EC2 instance runs only the pure application, of which we have multiple copies, so your system stays scalable and disaster-resilient. If you want more, you can use other services. For the domain name you can use Route 53: your DNS control sits there, and you map your domain to the elastic load balancer URL. The user hits the domain, Route 53 resolves it to the load balancer, the load balancer sees which server has less traffic and routes the request to that particular instance, and the response goes back through the load balancer to the browser. For the S3 bucket you can add the CloudFront CDN: if you really want to ensure static content downloads fast in the user's browser, put CloudFront on top. On top of all this you can use the CloudWatch monitoring service to watch your EC2 instances; your network administrator may really need it, because it can immediately send SMS or email alerts when something goes wrong on your EC2 or RDS instances. You can use IAM, the Identity and Access Management service, if you are working in a big team and want to give some users access only to S3 or only to EC2: you create a role, grant permission for only that particular service, and they will be able to use only that service. You can use SES, Simple Email Service: it provides SMTP, so you just plug it in and start sending emails. These are the basic services you can utilize, but there are definitely a lot more, like Lambda, and for
mobile there are also mobile testing and deployment services. This is one way of thinking; it is just a start, and you can add more services as you need. For example, you can use the SQS queue service. Say you want to build a PDF-to-Word conversion service: the user uploads a PDF and you convert it to Word. The user uploads the PDF first, you send a message to SQS, a worker server picks it up and does the conversion, and after converting it automatically sends an email saying your conversion is done and you can download it. With this kind of queued design you can use a small instance, and even when traffic increases you can keep utilizing small instances, which is cost-effective, instead of running one big instance with 2 GB of RAM that cannot handle a sudden traffic spike. So there are a lot of services you can utilize. Now to the Django specifics. I already explained that you have to use at least S3 and RDS, so how do you integrate S3 in Django? It's very easy, because AWS provides an AWS SDK for different languages: PHP, Python, Java, .NET. For Python it provides Boto, the current version being 3, so the boto3 library is the AWS SDK you can utilize: you can store any file to S3, read from S3, create buckets, delete objects; everything is provided in that wrapper. And what do you use S3 for? As I already explained: first, static files like your JavaScript, CSS, and images, which are part of the application; second, user content, the profile photos, videos, images, or documents users upload. Everything can be stored in S3. And how do you set it up? You can use the django-storages application. django-storages is a Django application that lets you plug another storage backend into Django. Normally your application is hosted on one server and uploads are generally saved into a
single folder on that server, but if you want to use Amazon S3, or Azure Blob Storage, or Google Cloud Storage, django-storages provides a number of storage backends that couple into your Django project, so your code does not change: when a user uploads an image, instead of saving it on the server, Django saves it into the configured storage, S3 if you are using S3, Azure Blob Storage if you are using that. Installing it is very easy: pip install django-storages, and in the same way you can pip install boto3. The settings are easy too: in settings.py you add the storages app and a couple of S3 settings. Generally you set the access key, the secret key, and the bucket name, and then you set the static files storage and the default file storage to the boto3 S3 storage backend. After that, everything you upload goes to the S3 bucket, with nothing else to change in the application. I divide this into two parts: first, the static files of your application and how to get them there; then user content. For static files, you can simply copy and paste your files, JavaScript and CSS, to the S3 bucket, and the frontend will automatically start using them: when you reference any image in a Django template, you write the static template tag first, and according to the storage class you are using, it prefixes the S3 bucket path, the HTTP path, before your image name. So the frontend is very easy. On the backend I said you can copy manually, but if you do not want that, you can use
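A minimal settings.py fragment along the lines just described might look like this (a sketch; the bucket name and credentials are placeholders, and the backend paths shown are the ones django-storages uses for its boto3 S3 backend):

```python
# settings.py -- sketch of django-storages + S3 configuration.
# Bucket name and credentials below are placeholders for illustration.

INSTALLED_APPS = [
    # ... your other apps ...
    "storages",  # django-storages
]

AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY"
AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_KEY"
AWS_STORAGE_BUCKET_NAME = "my-app-bucket"

# Route both static files and user uploads to S3.
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
```

In production you would load the keys from environment variables or an IAM instance role rather than hard-coding them.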
the staticfiles app: you just run one command on your project, python manage.py collectstatic, and it will collect all the static files, the JavaScript and CSS of your project, and upload them to the S3 bucket you configured. As I said on the last slide, the static files storage already points at the S3 backend and the bucket name is already set, so collectstatic uses those settings, knows it has to use S3, and copies your static content directly to the bucket; you do not need to copy anything manually. Referencing is just as easy: you add the static tag, and according to the storage class in use it resolves the right URL. And if you want to store user content, you do not need to change anything either. The sample code you generally use just renames the file being uploaded; the upload itself is the usual pattern, say a model with an ImageField. When you upload an image it is saved under the new name, and where does it go? You already set the default storage class to S3, so it automatically goes into the bucket you mentioned. To display it you use the same img src pattern with the field's URL, like profile.image.url, and it automatically produces the correct URL, because in the backend settings you changed where Django stores files. Nothing changes in the general way you write your upload code. So S3 is integrated, and for RDS there is no change: RDS is just your database, you start it, change the credentials, and start using it. So now we have EC2, where our Django application lives, RDS, where our database lives, and S3, where our CSS, images, and user content are stored. Now, this is the custom deployment; the first deployment method is custom deployment
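The file-renaming pattern mentioned above might be sketched like this (the function name and model are assumptions for illustration; with the default file storage set to the S3 backend, the file lands in the bucket automatically):

```python
import os
import uuid

def rename_upload(instance, filename):
    """Give each upload a unique name, keeping the original extension."""
    ext = os.path.splitext(filename)[1]
    return f"uploads/{uuid.uuid4().hex}{ext}"

# In a Django model this would be used as:
#
#   class Profile(models.Model):
#       image = models.ImageField(upload_to=rename_upload)
#
# and referenced in a template as <img src="{{ profile.image.url }}">;
# because DEFAULT_FILE_STORAGE is the S3 backend, the file is saved to
# the bucket and .url resolves to its S3 (or CloudFront) address.
```

Unique names avoid two users' uploads overwriting each other, which matters once a shared bucket replaces a per-server folder.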
where you start your EC2 instance manually, deploy your application, do the S3 integration yourself, manually start RDS, manually create a bucket, and do all this setup; then it's ready to use, and you point your domain to your EC2 instance directly, or to the load balancer if you use one. One thing about custom deployment: EC2 is, as I call it, volatile; if you terminate the instance, everything inside is lost, so you have to save it. In Amazon there is a concept called AMI, Amazon Machine Image. It's very easy to create: right-click a running EC2 instance and choose create image, and it takes a snapshot of your operating system with the data inside, so your application is inside too. Every time you change your application, you have to create a new AMI and give that AMI to your load balancer's auto scaling setup, so whenever a trigger event creates a new instance, it uses that base AMI, which already contains your operating system, Apache, and application code, so it can serve. This is one way of doing deployment. Another way is Elastic Beanstalk. In Elastic Beanstalk, your application code is not baked into the EC2 instance; it lives outside. You create an environment first, then upload your application code separately, and when you upload it, it does not go into EC2 directly: it first goes to an S3 bucket that Elastic Beanstalk automatically creates in your account, and your application is stored in S3. So whenever the auto scaling policy wants to start a new instance, it has the base AMI with the packages it needs but without the application; after starting the EC2 instance, it downloads your application code from the S3 bucket and extracts it there. So now it's more scalable, and you do not need to create an AMI again and again after every code deployment, so it's very easy to
use. With this application deployment service of AWS you can work from the console: you go into the console, first create an environment, then upload your application, and it just runs. It automatically creates a load balancer, it always comes with one, so you point your domain to the load balancer URL and it is already working. Or, if you prefer the command line, you can use the EB CLI, the command-line interface AWS provides; with a few commands you can deploy your application. I will show some commands for deploying with Elastic Beanstalk. First we configure things locally; none of this touches AWS yet. You activate your virtual environment, source venv/bin/activate, and then gather your application's requirements: run pip freeze > requirements.txt, and this requirements.txt will list the dependencies, the additional packages your application needs. Elastic Beanstalk understands requirements.txt, so whenever it starts a server it downloads those dependencies automatically for you. Then in your project folder you create a .ebextensions folder, mkdir .ebextensions, and inside it a django.config file. In this file you write option_settings for the aws:elasticbeanstalk:container:python namespace and the WSGI path. The Python container setting means it will start a Python container, because Beanstalk is a general service: you can start a Python container or a PHP container, or there are also options for selecting Tomcat versions, and depending on your container it picks the AMI and installs the server. So here you are asking for a Python container and you are
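The django.config file just described might look like this (a sketch; the project name is a placeholder, and depending on the platform version the WSGIPath value is a file path, as shown here, or a dotted module path):

```yaml
# .ebextensions/django.config -- tells Elastic Beanstalk's Python
# container where to find the WSGI entry point. "mysite" is a
# placeholder for your actual project name.
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite/wsgi.py
```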
mentioning the WSGI path it will use for your application, so the application can run. Then you deactivate the virtual environment; that's an optional step. So now your application is ready for Elastic Beanstalk, but only locally on your computer; nothing has gone to the service yet. To deploy, you first initialize with eb init, running it with your Python version and your application name, which creates the application. Then you run eb init again, and this time it asks you a couple of questions about creating a key pair, its name and so on; that key pair lets you log in to the EC2 instance, because EC2 is an operating system and to access it you need a private key file. I am not going into detail on that, but if you execute it you can see what it asks and enter the details. Then you run eb create django-env to create, as I said, an Elastic Beanstalk environment; you can create one production environment and one staging environment. It starts the Python container, and the environment takes around 5 minutes to come up; as soon as it is ready, it deploys and installs your application too, and the application starts running. Then you can open the Elastic Beanstalk URL in the browser, or on the command line you can type eb open, which directly opens your operating system's browser at the Elastic Beanstalk URL, and you can see your application already running there. So this is how you can use either the console or the command line, and as I said, the console is very easy: you go into the console, create your environment, upload your application zip, and it starts
using it. The result is the same: it starts a load balancer with your application, and later you can adjust the settings. The command line is harder for configuration, like which image or which instance type to use, so you go to the console and do that setup there: you can set your auto scaling policies and your security groups, for example enabling or disabling some port, directly. And this is the command to redeploy: whenever you change your code and want to redeploy, you just run eb deploy, and it redeploys your application to the environment and your application gets updated. So we have now seen two types of deployment. With Elastic Beanstalk, you have to ensure your code has the correct RDS credentials so it can connect to the right RDS instance, and the bucket is already integrated, and now it is scalable: according to your policy it can start new instances and shut them down, so even when traffic increases it will never let your site go down; it will keep serving. That's all. Thank you. Any questions? I think you must have. Thank you.