Good evening all, I am Lalit, and I am here to present "Developing Scalable PHP Apps on AWS". Let me introduce myself first: I am Lalit Nama, working as a system analyst at Rhenosis Technologies. I am really passionate about PHP; I have been working with PHP for four years, and on the cloud side Amazon Web Services attracted me. I have experience deploying more than 15 applications on the AWS cloud, and since Amazon keeps launching new services, I am continuously exploring those and taking the benefit of them.

So, the topic is developing scalable PHP apps on AWS. AWS you all know, and this is a very big topic, so I have to summarize it in 30 minutes; I am covering only two services, the Simple Storage Service (S3) and the Simple Email Service (SES). This is the table of contents: first I will talk about the AWS SDK for PHP, the version 3 release and its installation; then I will move on to the S3 and SES services, the best practices for using SES and the problems you can run into; and finally a short demo of deploying a sample PHP application. So, moving straight on.
First, these are the services; I took this snapshot yesterday. Amazon has already launched more than 45 services, but to deploy a PHP application you do not need to know them all. You may need only a few: EC2 on the server side, S3 for storage, RDS for the database. So let's see the list of services you will likely use to deploy a PHP application.

The first is EC2, Elastic Compute Cloud, the server that hosts your PHP application. The second is S3, Simple Storage Service, to store user content and static files like JS, CSS, and images. You have to use S3 to be really scalable, because then the user content lives in S3 and you can run multiple web servers: static content is delivered from S3 while the core PHP application resides on EC2. The database is your choice: RDS provides MySQL, MS SQL Server, or Oracle, or you can choose DynamoDB. The next service you will generally use is SES, Simple Email Service, to send email from AWS. Then there are the deployment services; the two major ones are CloudFormation and Elastic Beanstalk, and we will talk about those. CloudFront is the next service you may need if you are really concerned about the speed of delivering static content to your users: CloudFront is a CDN, and it can deliver content quickly from the region nearest the user. The next one is CloudWatch, the monitoring service for EC2 instances and the other AWS services; for EC2 it can give you statistics on server usage and memory usage at any time. There are two types of CloudWatch monitoring, basic and detailed: basic gives you the stats in five-minute periods, and detailed gives you an update on your services every one minute. The last one is Identity and Access Management. You can go with the root credentials, but there are
many services, 45-plus services, so you want fine-grained control over what each user can access. If you are developing an application with many developers, you can specify that this person should only have access to EC2 and that person only to S3; that fine level of permission you can set using IAM roles. So these are the services you will likely use.

Now, the first thing: the PHP SDK. AWS provides an SDK for PHP, which is a bundle of classes for utilizing the AWS services. If you want to start an EC2 instance, you can use the SDK to start it; if you want to store files in S3, you can use the SDK. It is basically the way to interact with the AWS services programmatically. Although you can perform most of this functionality directly from the AWS console, in your code you need the SDK, mainly for S3, because whenever a user uploads content, a file, you have to store it in S3, and that is when it is used.

First, SDK installation. You can see there are three ways to install it. The first is Composer, the dependency manager: if you have not installed Composer, just install it, and then with one command you can install your project's dependencies and the AWS SDK will be downloaded for you to use. The second is the PHP archive (Phar): you can download the package from the GitHub account and directly include that file in your project to use the AWS SDK. The easiest way is to just download the zip, unzip it in your project folder, and start using it. That's simple. These slides are just for reference on how you include the S3 or AWS classes depending on which technique you used: with Composer you require the autoload file, with the Phar you include the AWS .phar in your project, and with the zip it comes with an AWS autoloader which you have to load. After that you can utilize the services of S3, EC2, SES, and many other services.
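As a reference, here is a minimal sketch of the Composer route with SDK version 3; the region, credentials, and the listing call are placeholders and illustrations, not anything from the talk:

```php
<?php
// Install first with Composer:  composer require aws/aws-sdk-php
// Then pull in Composer's autoloader (path relative to your project root).
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Region and credentials below are placeholders; substitute your own.
// Version 3 of the SDK takes an options array like this:
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'ap-southeast-1',   // e.g. Singapore
    'credentials' => [
        'key'    => 'YOUR_ACCESS_KEY_ID',
        'secret' => 'YOUR_SECRET_ACCESS_KEY',
    ],
]);

// List your buckets as a quick smoke test that the SDK is wired up.
$result = $s3->listBuckets();
foreach ($result['Buckets'] as $bucket) {
    echo $bucket['Name'], PHP_EOL;
}
```

The same client object works for every S3 call afterwards, whichever of the three installation routes you chose.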
Yeah, so moving forward, as I said, we will talk about S3. S3 is the Simple Storage Service, and it enables you to be really scalable. S3 is purely a file-storage service: it only stores files, there is no server of yours running, and you access it only through its APIs, REST or SOAP. It is truly unlimited: you can store an unlimited number of files, each from 1 byte up to 5 gigabytes, and there is no limit on a bucket; you can really fill it. All objects are stored in a bucket; a bucket is a container where you store your files, and each AWS account has a limit of 100 buckets. This is the format: if your bucket name is mybucket, it automatically gets a URL, mybucket.s3.amazonaws.com, and after the slash comes the path of the file under it. So if your file is myfile.txt, that will be the path to access the object directly over HTTP.

Now I would like to introduce the S3 stream wrapper; you may be familiar with it. This is the easiest way to use S3 without going deep into the SDK. The SDK also gives you methods to store, delete, and retrieve files, but it additionally provides a stream wrapper, with which you can directly use the core PHP functions you are already aware of, like fopen, copy, and rename, and the underlying layer takes care of everything. To use the S3 stream wrapper you include your S3 client and then register the stream wrapper; after that an s3:// protocol is available to you for any operation. Objects are what S3 calls files, so if you want to perform any operation on an object, the path is the s3:// protocol, then your bucket name, then the path to the file. Now you can perform any operation on this path.
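A minimal sketch of that setup, where the region, bucket name, and key are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Placeholder region; credentials come from your own configuration.
$client = new S3Client([
    'version' => 'latest',
    'region'  => 'ap-southeast-1',
]);

// After this call the s3:// protocol is available to core PHP functions.
$client->registerStreamWrapper();

// Write and read an object with plain file functions.
file_put_contents('s3://my-bucket/path/to/myfile.txt', 'hello');
echo file_get_contents('s3://my-bucket/path/to/myfile.txt');
```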
If you want to delete an object, just use the unlink method. Let me show you how easy it is to use S3: unlink is what we use to delete a file on our own server, so instead of a server path, use the s3:// protocol, pass the bucket name and the path, and it deletes the object directly from S3. That is how easily we can utilize S3 in our PHP projects. In the same way you can directly use the filesize method, and file_exists, and copy; copy is a good command here, because you can copy an object from one bucket to another bucket, or within the same bucket. So it is a really easy starting point if you are getting into AWS and want to use the S3 service.

Now, moving forward, another service is the Simple Email Service. AWS provides you this simple service; it is reliable and cost-effective, and you can use it for transactional emails and promotional emails. It supports both SMTP, where it gives you SMTP credentials you can drop straight into your PHPMailer class and start sending emails, and an API via the SDK, so you can use those API calls to send email as well. It also provides DKIM support, so each message is signed against your domain's DNS records and the receiver's email client can verify that the domain is legitimate and the email is valid. The AWS concept is pay-as-you-go, so you pay only for what you use; the cost is very low, only 10 cents per thousand emails, though if you are sending attachments, data-transfer charges also apply. The good thing is that it provides 2,000 free emails per day if you are using the EC2 service. AWS lets you use any single service on its own, so if you are hosting your application on another server, like your own dedicated server, you can still use SES; but if you are using EC2, they promote it by giving you some free emails per day.
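The API route from the SDK might be sketched like this; the region and the addresses are placeholders, and the Source must be an address you have verified in the SES console:

```php
<?php
require 'vendor/autoload.php';

use Aws\Ses\SesClient;

// Placeholder region; at the time of this talk SES ran only in
// us-east-1, eu-west-1, and us-west-2.
$ses = new SesClient([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// sendEmail() takes the source, destination, and message as one array.
$result = $ses->sendEmail([
    'Source'      => 'verified-sender@example.com',
    'Destination' => ['ToAddresses' => ['recipient@example.com']],
    'Message'     => [
        'Subject' => ['Data' => 'Hello from SES'],
        'Body'    => ['Text' => ['Data' => 'Sent through the SES API.']],
    ],
]);

echo $result['MessageId'], PHP_EOL;   // SES returns a message ID on success
```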
Now, the problem when you scale with SES. They provide both an SMTP service and an API service for sending email. If you use the SMTP service directly, it is slow, because SMTP makes the connection, sends the email, and then closes the connection, and while your PHP script is sending the email it actually waits for the response from the SES endpoint saying whether the email was sent or not. The major problem, as you can see, is that the SES service is available in only three regions: North Virginia, Ireland, and Oregon. We are in Singapore, and we prefer to host our websites in the Singapore region, so our EC2 runs in the Singapore region while SES is in North Virginia. The latency is therefore quite high: it can take 1.5 to 2 seconds to get the response, and that is a lot. In that case your application becomes slow; the script just has to wait before execution can continue. So that is the problem; now, what can we do about it?

Here are the solutions. First, you can still use SMTP, but reuse the SMTP connection: if you are sending 100 emails in one script, use the same SMTP connection, or create a few parallel SMTP connections to send the emails. That is the optimized version, but if your script sends only one email, it still takes the full time. So the other alternative, the one I am explaining here, is an SMTP relay. What do you need for that? You are hosting your web server on EC2, and EC2 is IaaS, infrastructure as a service: it provides you a virtual machine on which you can install whatever you want. So first install the sendmail package there and send emails through sendmail, but configure sendmail so that internally it uses SMTP, that is, the SES SMTP. Sending email from the server itself is very fast when it goes through a local mail relay.
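Going back to the first solution, connection reuse: with PHPMailer this might be sketched as below, where the endpoint, credentials, and recipient list are placeholders, and SMTPKeepAlive is the PHPMailer flag that holds one connection open across sends:

```php
<?php
// Assumes PHPMailer is installed (e.g. via Composer).
require 'vendor/autoload.php';

use PHPMailer\PHPMailer\PHPMailer;

$mail = new PHPMailer(true);
$mail->isSMTP();
$mail->Host          = 'email-smtp.us-east-1.amazonaws.com'; // N. Virginia endpoint
$mail->Port          = 587;
$mail->SMTPSecure    = 'tls';
$mail->SMTPAuth      = true;
$mail->Username      = 'YOUR_SES_SMTP_USERNAME';
$mail->Password      = 'YOUR_SES_SMTP_PASSWORD';
$mail->SMTPKeepAlive = true;   // reuse one connection for the whole loop

$mail->setFrom('verified-sender@example.com');  // must be verified in SES
$mail->Subject = 'Batch mail';
$mail->Body    = 'Same SMTP connection, many recipients.';

$recipients = ['a@example.com', 'b@example.com'];  // your own list
foreach ($recipients as $address) {
    $mail->addAddress($address);
    $mail->send();
    $mail->clearAddresses();   // keep the connection, drop the recipient
}
$mail->smtpClose();            // close the connection once, at the end
```

This amortizes the connection setup across the batch, but a script that sends a single email still pays the full round trip.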
So when, in the PHP execution, you send the email, the call returns immediately and execution continues. Yeah, with this we cannot guarantee that the email will be delivered, but sendmail will retry it, so it is unlikely not to be delivered; it will be delivered. So this is one solution you can use.

For that there are prerequisites. First, sendmail must be installed; a couple of commands install it on Linux. Second, you have to verify your From email address, because SES is a service for sending emails, it does not host email; if you need to host email you have to purchase an email service, which you can buy from anywhere. Third, your EC2 instance must be assigned an Elastic IP to send email, otherwise it will not be sent. Fourth, you have to request production access, because while you are still in the sandbox you can send email only to a fixed number of whitelisted addresses that you have verified. And last, you have to generate SMTP credentials. That is very easy: just go to the SES console and download your SMTP credentials. The host name is fixed, a different host name for each of the three regions, and when you create the SMTP credentials it gives you a username and password, which you just use to send your email from PHPMailer.

But what I am suggesting here is to configure sendmail. You just have to follow the steps from AWS; AWS wrote a good guideline in the developer guide, and you can find it there directly, because it cannot be covered in 30 minutes. It is a series of around 20 Linux commands, and what those steps actually do is change the sendmail configuration to send email internally using the SES SMTP. So from your PHP script you send the email, it goes to the local mail server, and from there it is delivered at whatever speed; whether it uses multiple SMTP connections or reuses the same connection depends on the sendmail package configuration.
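On the PHP side, once sendmail is relaying through SES, nothing special is needed; a plain mail() call (addresses here are placeholders) hands the message to the local relay and returns almost immediately:

```php
<?php
// Once sendmail relays through SES, plain mail() hands the message to the
// local MTA and returns without waiting on the remote SES endpoint.
// Addresses are placeholders; the From address must be verified in SES.
$to      = 'recipient@example.com';
$subject = 'Hello via local relay';
$body    = 'Delivered by sendmail, relayed through the SES SMTP.';
$headers = 'From: verified-sender@example.com';

$ok = mail($to, $subject, $body, $headers);
// $ok only means the message was accepted by the local mailer,
// not that SES delivered it; sendmail will retry on failure.
```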
So if you installed sendmail and configured it to use SES, you can test it: there is a command, sendmail, with a from address and a to address, and you can directly send a blank email. And how do you identify whether it was sent using the SES SMTP or the plain sendmail package? Here is an email snapshot: the header shows "via amazonses.com", and if that is there, it means it was delivered using the SES service, not the core sendmail package.

The other way around, you can use the SDK: there is the HTTP API you can use to send email, and that is fast. Here are some statistics I measured. Using SMTP directly in PHPMailer takes around 1.5 to 2 seconds, because of the multiple round trips from Singapore to North Virginia; the latency comes in there and it takes time. Using the SES API directly takes almost 1 second, 0.8 to 1 second, because it allows you to continue the script execution and receive the response in a callback telling you whether it was delivered or not. With the local mail relay the time is negligible: only about 0.10 seconds to send one email, your script execution continues, and internally the sendmail package does the sending through SES.

So those are the two services we have covered, and now I would like to demo an application on the AWS cloud: how you can host it, and the features that make it truly scalable. Let's see a quick demo. As I said, there are many deployment services: you can deploy your PHP application directly on an EC2 instance, or you can use Elastic Beanstalk or CloudFormation, which you can see here are the main deployment services. I am picking Elastic Beanstalk. So here it is; I just created an application, "PHP meetup demo app", and here you have to start an environment for your servers.
So just create one environment here. There are two options, a web server environment or a worker environment. You need the worker environment when the application is running on a different server and you are creating batches of jobs: you send those jobs into SQS, the Simple Queue Service, and the worker environment picks the jobs off the queue, processes the requests, and gives the results back to the server. We really need a web server, so that is what we are selecting first. There is some problem here, let me refresh; sorry, I do not know, it may be the internet. Let's see a quick demo of how we can deploy a PHP application and make it truly scalable.

So, create a web server here. It provides default configurations for so many languages, and as we are deploying a PHP application, I am picking PHP. For the environment type, single instance means you get only one instance, which is not scalable; so to be scalable choose load balancing here, and it will start a load balancer, which we will see next. First is the application version: you upload your application zip here, and Elastic Beanstalk extracts it directly into the server's www folder, where it becomes accessible via the URL. I created a one-page app, so I am selecting that. Here are the deployment limits: the batch size decides how a new version of the same application gets deployed, because if you need really low downtime you can choose a fixed number of instances. I generally choose one here, so one instance at a time is replaced with the new application version, and once it is running again, it picks the next server to deploy to. You can decide that setting here. Next you have to give the environment a name, whatever you like; I am just giving a name, and you can check the availability of the name, so it's available, and you can write a description here. Moving next, here there are two
options. The first is RDS, but I do not suggest creating RDS here as part of the Beanstalk application, because when the Beanstalk environment terminates, that RDS will also be destroyed. Keep it outside: just start RDS directly from the console, as you normally do, and use that database. That keeps the server free of any RDS dependency, and you can use it from anywhere. The other option is VPC, Virtual Private Cloud: within the public cloud it provides you services as a private cloud, to which your local applications can connect over a secure tunnel; but we are not using that here.

Next there are a couple of instance types; we are choosing a micro instance. Then the EC2 key pair: it provides you a PEM file, and that is the only way to access your server, over SFTP or PuTTY; if you do not have a key pair, you cannot access your server directly, so I am just selecting one key pair here, and an email address. Yeah, sorry. Here are some other settings; you can check them, they configure how your application will behave, but they are not so important, so I am just moving forward. Here there are the instance profile and the service role, two IAM roles that identify your EC2 instance, and these IAM roles are what actually get access to your deployed services, so you have to grant permissions to the IAM roles you select here, and then it will work. I am selecting the service role I just created. Here is the summary, and you can directly launch it. This way your application gets deployed: first the zip is uploaded to S3, from there it is extracted onto your server, and it starts up. It takes just about two minutes, and once deployed you can see your application at this URL; it is currently deploying.

Okay, now I am showing you the scalability of this. Here is the scalability: you can decide the number of web servers. I am setting the minimum at one server running at a time and the maximum at four. This setting will trigger a new
server when you set some configuration here. This is the scaling trigger; I generally use CPU utilization. I set the measurement period to 5 minutes, so every 5 minutes it checks the CPU utilization in percent, and I generally set the upper threshold to 70% and the lower threshold to 30%, with the upper-breach scale increment at 1 and the lower at -1. So when your server sits at 70% usage for 5 minutes, it starts another server, keeping within the limit of four. If you are getting more customers, more visitors, it starts another web server, and there is a load balancer: your traffic goes to the load balancer, which decides which server is free and routes the traffic there. That is how it makes a truly scalable app. So this is the basics of how you can deploy an Elastic Beanstalk application; in the same way you can also use CloudFormation, which is a JSON-template-based system. As I said, it is a big topic and I have covered only a few points here. I hope it helps you. Thank you. Do you have any questions for me, one or two questions if possible?

Q: Is there any way to scale up the database as the number of machines goes up?
A: Yeah, you can create a read replica of RDS. RDS provides a read-replica feature: it starts another RDS instance of the same kind and creates a replica there.
Q: And how do you connect it with the scaling from here?
A: As I said earlier, I am not connecting RDS here directly; I didn't start RDS as part of the environment, I keep RDS loosely coupled. That way you have separate control of RDS and can connect it to any server. If I bound it to Elastic Beanstalk, then when I destroy the environment, or the environment somehow gets destroyed, the RDS would also be destroyed, and I do not want that, so I manage them separately.
Q: I understand, but what I am saying is, as the number of instances goes up, the number of connections to RDS also goes up.
A: Yeah, yeah, so on that point you can
create a read replica of the database server.
Q: The question is, can you automate that?
A: Yeah, in RDS there is a setting for that, auto scaling on the database side as well: you can set it so that if the CPU utilization of the RDS instance goes high, it starts another read replica. So the same scalability is there too. Any other questions?
Q: (inaudible)
A: Yes, it is a PHP application being deployed, so it is starting a server. Here you can see, directly, this is the Beanstalk service, and if you go to Services and then EC2, you can see it really started a server; the one running instance is showing up here. It is just a normal server that has been started, and you can access it directly using FTP or PuTTY.
Q: So instead of doing it manually, this kind of does it for you, basically: automatically spins up instances, installs software?
A: Yes, it handles the low-level deployment.
Q: I know that EC2 supports Windows and Linux. Can you use those here?
A: Yeah, if you want to use another server, like a Windows server, you have to create an AMI of that server. Every server that is running has an AMI ID; the AMI is the image of Linux or whatever operating system. There is a configuration where you can change the AMI ID and just restart: it will destroy all the old instances and create new instances from that AMI, so that can be Windows, that can be Linux.
Q: Ready to go into anything? Can we load our Windows application to run together with our web application?
A: Yes, yes, you can do it. We can take that offline; I think it is possible to spin up two different web servers or something. It is possible, but let's discuss the implementation details offline. Thank you.