Hello guys, my name is Karan and I am the instructor of this course, which is called Cloud Computing with Amazon Web Services. Before we start working on the services of AWS, let me brief you about what Amazon Web Services is and what the various components of AWS are.

First of all, what is cloud computing and why do we need it? In today's world, most organizations rely on cloud computing because it offers flexible, secure and cost-effective IT infrastructure. You pay only for what you use; there are no upfront payments charged by cloud vendors. With traditional hosting, by contrast, we had to pay a lot of money, and it was very difficult to scale up the infrastructure whenever the demand from users, or the demand for the services, increased.

Now, when we say IT infrastructure, that can include your virtual servers, your backend databases, some storage mechanism where you keep your backups, notification services, monitoring services; all those components make up an infrastructure. What you need in your infrastructure depends on your application and your requirements.

If we talk about Amazon Web Services, it provides a complete set of services using which you can build up your entire infrastructure. When I say a complete set of services, I mean your virtual servers, your databases, your storage, your DNS system, your VPCs, your notification services, content delivery networks and a lot more. Amazon Web Services does not charge anything when you register with them; you pay only for what you use. And in case your initial configuration is not at par with your current demand, say demand has increased but your system does not have that capacity, you can easily scale your infrastructure using AWS without any issue. It is also very easy to set up your infrastructure on AWS.
You can either migrate your existing servers from your existing hosting company to the cloud, or you can create new infrastructure as well.

The AWS platform has been divided into these categories. Compute and Networking includes your virtual servers and your VPCs. Storage and CDN includes the various storage services that AWS provides, some of which we will talk about, as well as the content delivery network. Databases is also one of the service categories provided by AWS, followed by Application Services, where we talk about the notification services and the email services. Last but not least, we have Deployment and Management, where we talk about how to deploy applications automatically and how to monitor your resources against the various threshold values that you set.

Compute and Networking contains Amazon EC2, and Amazon EC2 supports most of the operating systems available in today's market: Red Hat Enterprise Linux, CentOS, Debian, Amazon Linux and Oracle Linux, as well as Microsoft Windows Server. The other service that falls in this category is Route 53, which deals with the DNS system that we can configure on the AWS platform. The third service in this category is VPC. It allows us to create a virtual private cloud in AWS, meaning you can launch your instances, or launch your servers, in a private space within your cloud rather than in the public space. These categories contain more services than the ones I have mentioned here; the services I have mentioned are specific to this particular course.
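As we go through these categories, it may help to keep a little reference table. This sketch simply lists the categories and the services covered in this course under each one; the grouping mirrors the lecture, not anything queried from AWS itself.

```python
# The AWS platform categories described in this course, with the services
# covered under each. A plain reference table mirroring the lecture.

CATEGORIES = {
    "Compute & Networking":    ["EC2", "Route 53", "VPC"],
    "Storage & CDN":           ["S3", "Glacier", "CloudFront"],
    "Databases":               ["RDS"],
    "Application Services":    ["SES", "SNS"],
    "Deployment & Management": ["CloudWatch", "IAM"],
}

def category_of(service: str) -> str:
    """Look up which lecture category a service belongs to."""
    for category, services in CATEGORIES.items():
        if service in services:
            return category
    raise KeyError(service)

assert category_of("Route 53") == "Compute & Networking"
assert category_of("Glacier") == "Storage & CDN"
```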
So in the next course, which is the more advanced course of this series, I'll be covering the advanced services, but the categories will remain the same.

The next category is Storage and Content Delivery. It includes the very famous and popular storage service called S3, where you can store your images and code, and you can even host a static website using Amazon S3. Amazon Glacier is a recent service introduced by AWS. It is used as an archival system: whatever you want to keep as an archive, you can put in Glacier storage, because Glacier is much more economical compared to S3. Another service in this category is CloudFront. This is all about your content delivery network. If you have worked on any traditional CDN system, which allows you to distribute data geographically and boost the performance of your website, this is where Amazon provides its own service, called Amazon CloudFront. We'll talk about all of these services in this particular course.

From the database point of view, we are going to cover Amazon RDS. Right now Amazon RDS supports MySQL, Microsoft SQL Server and Oracle enterprise databases. From the deployment and management point of view, we are going to cover Amazon CloudWatch, which is a monitoring service provided by Amazon. You can monitor your resources, including your servers, your storage, even your billing, your DNS and your RDS databases; everything can be monitored by CloudWatch. And you can manage your users and groups using Identity and Access Management.

From Application Services, we have two services that fall under this category: one is Amazon SES and the other is Amazon SNS. If you want to implement bulk email solutions, for example if you are doing e-marketing of your website or any of your products, you need to send out mass mailings.
You can avail the services of SES for that. On the other side, let's say you want to receive notification alerts in case your server crashes, your server is in bad health, or your databases are experiencing a high number of concurrent connections. In such scenarios you need some kind of notification sent to your email, and that is where SNS comes into the picture. You can even register your mobile number so that you receive the notification on your mobile.

So these are all the services I'm going to cover in this particular course. Every lecture contains a video which describes how to configure a specific service and how to use it in real time. I hope you guys will like this course. Stay tuned for more videos; thanks for watching, guys.

Hey guys, in this video I'm going to explain how to create virtual servers in the cloud. To build up your infrastructure in the cloud, before you configure any other service, the first thing you need is a virtual server. To create a virtual server, we will be using the Amazon EC2 service. There are a lot of other options that we are going to cover in Amazon EC2, but we'll start with the virtual server first.

Before we actually create the virtual server: there are three types of instances, or servers, that we can create. The first is the normal instance, also called the on-demand instance. These are the instances which we create and pay for as we use them, on a monthly basis. Then there are spot instances. Spot instances means you have to bid for the spare resources that Amazon has. Amazon releases its free capacity and sets a base price for the bid. Let's say the base is five dollars and you bid ten dollars. As long as the cost of these resources stays between five dollars and ten dollars, you can access your instances; but any time the cost exceeds ten dollars, you won't be able to access your instances. And reserved instances are those for which we make an upfront payment, and in return the monthly bills are reduced by a significant percentage.

For most environments we'll be using the on-demand instances. So click on Instances, and from here you can select a region in which you want to create the instance, and then click on Launch Instance. This is the wizard using which you can create an instance. These are the templates which are provided right away so that you can start with them, and if you think your operating system is not in this list, you can go to Community AMIs and search for your operating system there. There are a lot of options here, like 32-bit and 64-bit instances, so you can select whatever your requirement is.

This star means that the AMI is eligible for the free tier. The free tier is what we are using right now; it is just for testing purposes, and Amazon does not charge anything extra for it. But there are some limits, and if you exceed those limits Amazon starts charging you. So just go to the AWS free tier page and go through these limits once you get time. For example, we get 750 hours of Amazon EC2 per month, 750 hours of Elastic Load Balancer along with 15 GB of data processing, and similarly 30 GB of Elastic Block Storage. If you exceed these numbers, Amazon is going to charge you.

These templates are basically based on AMIs, Amazon Machine Images. You can consider an AMI as an ISO: you use an ISO to boot your instance.
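Coming back to those free-tier numbers for a moment, a tiny sketch of how an overage check might look. The limits are the ones quoted in the lecture; actual billing is of course computed by AWS, and the metric names here are made up for illustration.

```python
# Free-tier limits quoted in the lecture: 750 hours of EC2 per month,
# 750 hours of ELB with 15 GB of data processing, 30 GB of EBS.
# Hypothetical metric names; real billing is done by AWS, not by this code.

FREE_TIER_LIMITS = {
    "ec2_hours": 750,
    "elb_hours": 750,
    "elb_data_gb": 15,
    "ebs_gb": 30,
}

def billable_overage(usage: dict) -> dict:
    """Return only the metrics where usage exceeded the free-tier limit."""
    return {
        metric: used - FREE_TIER_LIMITS[metric]
        for metric, used in usage.items()
        if used > FREE_TIER_LIMITS[metric]
    }

# One t1.micro running a full 31-day month (744 hours) stays free:
assert billable_overage({"ec2_hours": 744, "ebs_gb": 30}) == {}
# Two of them running all month exceed the 750-hour limit by 738 hours:
assert billable_overage({"ec2_hours": 1488, "ebs_gb": 30}) == {"ec2_hours": 738}
```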
In the same way, an AMI is what we use here to boot up an instance. We will go with the first one. Select it, and here you have to provide the number of instances. The t1.micro type is the default free-tier-eligible type; if you select anything else, you have to pay for it. Here you also get information about the CPU compute units, CPU cores and RAM that you get in each of your instances. We will continue with t1.micro.

Availability zone: every region is further divided into availability zones. If you're not sure which availability zone you want your instance created in, just keep the default value. And here you can see this option, prevention against accidental termination. Sometimes what happens is that we accidentally terminate an instance; to prevent that, we can enable this protection. And if you shut down the instance, what should happen: should it stop the instance or terminate it? Keep it as Stop.

This here is the root device attached to your instance. It is 8 GB in size, and as you can see the type is root, so this is an EBS volume, that is, an Elastic Block Store volume. There are two types of storage that an instance supports: one is the instance store and the other is EBS. The instance store is storage which is not at all persistent; whatever data you store in the instance store will be deleted automatically whenever your instance is terminated. But if you attach an EBS volume to your instance and keep the data on the EBS volume, your data will be safe even if your instance is deleted.

Here you have to specify the name of your server. And now, if you want to access your server, you need to generate a key pair; this is SSH key-based authentication. Here you can select Create a new pair, name it, say, my-new-key, then create and download your key pair. This is a one-time process.
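Before we continue with the key pair, here is a toy model, not an AWS API, of the persistence rule just described for the two storage types: instance-store data vanishes when the instance is terminated, while an attached EBS volume outlives the instance and can be attached to a new one.

```python
# Toy model of EC2 storage persistence. Class and attribute names are
# invented for illustration only; AWS enforces this behavior, not code.

class Volume:
    """Stands in for an EBS volume: persistent, detachable storage."""
    def __init__(self):
        self.data = {}

class Instance:
    def __init__(self, ebs_volume=None):
        self.instance_store = {}   # ephemeral storage
        self.ebs = ebs_volume      # persistent storage, survives termination
        self.running = True

    def terminate(self):
        self.instance_store.clear()  # instance-store data is gone for good
        self.running = False
        detached, self.ebs = self.ebs, None
        return detached              # the EBS volume outlives the instance

vol = Volume()
server = Instance(ebs_volume=vol)
server.instance_store["cache"] = "temporary"
server.ebs.data["backup"] = "important"

surviving = server.terminate()
assert server.instance_store == {}                   # ephemeral data lost
assert surviving.data == {"backup": "important"}     # EBS data survives
```

You could then hand `surviving` to a fresh `Instance(ebs_volume=surviving)`, which is exactly the attach-to-a-new-server scenario the lecture describes.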
Remember, you won't be able to generate this key pair again later on, so save it in your keys folder.

Then comes the security group. This is a kind of firewall: it allows you to block unwanted ports and allow only those ports which you want. For example, let's give the group a name here, my-first-group, with the description "group for demo", and here you can specify your port, for example SSH. The source means who will be able to access this service: if it is 0.0.0.0/0, it means anybody can access it; if you want to grant access only to a certain IP, you can provide that IP address here. Then add the rule. As you can see on the right side, we have allowed the SSH service on this particular instance. Similarly you can allow other services as well. So this is very much like a firewall; then continue.

Now this is the summary of the instance that is going to be launched. Just go through it, edit anything you want to change, and then click Launch Instance. As you can see, the instance is now launching; it will take a couple of minutes before it actually starts running. In the meantime, if you select this server, you can see a couple more options here: the zone in which it got created, the instance type, the AMI name we used, and the key pair name. And this is the public DNS. Whenever you create a new instance, Amazon provides you one public DNS and one private DNS. The public DNS is used when you want to access your server from outside the Amazon network, and if you have multiple instances running in the same availability zone in the same region, those servers can communicate with each other using just the private DNS.

We have basic monitoring enabled here, and there is an option for enabling detailed monitoring. This is something we will be using when we configure CloudWatch: we can enable detailed monitoring and then configure alarms, so that whenever a threshold value is reached, an alarm will be generated.
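Going back to the security group's source field for a second: the rule can be sketched with Python's standard `ipaddress` module. The check itself is of course enforced by AWS, not by client code, and the addresses below are documentation examples.

```python
# Sketch of the security-group "source" rule: 0.0.0.0/0 means any address
# may reach the port, while a narrower CIDR restricts access to that range.

from ipaddress import ip_address, ip_network

def rule_allows(source_cidr: str, client_ip: str) -> bool:
    """Return True if client_ip falls inside the rule's source CIDR."""
    return ip_address(client_ip) in ip_network(source_cidr)

# SSH open to the world, as in the demo group:
assert rule_allows("0.0.0.0/0", "203.0.113.7")
# SSH restricted to a single office range:
assert rule_allows("198.51.100.0/24", "198.51.100.42")
assert not rule_allows("198.51.100.0/24", "203.0.113.7")
```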
So that is the monitoring side of things. As for status checks, there are two types. The first status check monitors the AWS systems required to run this instance and ensures that they are functioning properly. The second, which runs once the first completes, checks that your software and network configurations are working fine. Once these two status checks have passed, we can access the server. As you can see, two out of two checks have passed, so let's try to access this server.

Before we actually try to access the server, we need to convert the PEM file we downloaded into the PuTTY-compatible format, since I'm using a Windows machine. If you're using Mac or Linux, you can use the PEM key straight away; but since I'm using Windows, I'll convert it for PuTTY and then I'll be able to access the server. It depends on which SSH client you are using; I'm using PuTTY, so that's why I'm converting the key into PuTTY's format. If you're using any other SSH client, you need to convert the key into that client's standard. So I'll click Save private key, I don't want to add any passphrase, I'll name it my-new-key, by default it will be saved with a .ppk extension, and save it.

As you can see, this is the public DNS. Before you try to connect, copy it and paste it here in the PuTTY terminal, go to SSH, browse for the .ppk file, and click Open. Here you have to specify the username, and see, you have logged into the server. If you right-click on the instance and click Connect, you can see the username which has been created by default; for most Red Hat and CentOS based systems, this is the standard user that gets created. So this is how you can create an instance and connect to it.

Now, a couple more things I would like to tell you here. The first is the Elastic IP. Amazon has already provided you one public DNS, but if you need more public IPs, you can get them using Elastic IPs. Go to Elastic IPs and click Allocate New Address. Once you have this new address, you can attach it to any running instance: select the instance ID and click Yes, Associate. As you can see, this IP address has been attached to this instance, and now you can access the instance using the public DNS of the new Elastic IP. You can assign as many IPs as you want to your server; the only thing is that Amazon is going to charge you extra for every additional IP you attach to your instance, and all the data transfer that takes place over each Elastic IP is also chargeable. So that's all for EC2. Stay tuned for more videos; thanks for watching, guys.

Hey guys, so far we have seen how to create a virtual server using a predefined AMI template. Now what if you want to build your own AMI? Let's say I'm running a web server: I launch my instance from the basic AMI and then I make some customizations, like installing Apache, PHP and all the necessary modules, as well as configuring some of the files. What I want is that once this server is configured, any other web server should follow the same configuration, with the same modules already installed. But if you go with the basic AMI once again to launch a new web server, you'll have to go through all those installation steps again and again.

Now let's see how to build the custom AMI. First connect to your server, log in as root, and then make some customizations; let's install PHP. So now you have done some customization to the base AMI: your base AMI did not include the PHP and Apache packages, but now you have installed them. For any other web server that you want to launch, you don't want to go through this process again and again; instead you build a custom AMI, so that you launch the instance from that AMI and these packages are installed by default. So if you
right-click on your instance, you will see this option, Create Image. It gives you the ability to build your own AMIs. Here you can specify the name, let's say web-server-ami, and a description, "AMI for all web servers". Since we are going to create this AMI from a running instance, you must decide whether the running instance should reboot or not: select this option if you don't want the system to reboot; otherwise, if you leave it unselected, your system will be rebooted. This is the default file system disk that has been attached; if you want to attach more volumes, you can do that here, otherwise just go with the standard volume.

So the AMI creation request has been received. If you go back to your AMIs, you can see this web-server-ami is pending. It will take 5 to 10 minutes before the AMI is completed successfully. Then, whenever you launch a new instance using this AMI, all those packages that you installed here will already be present; you don't need to install them again and again. So this is the process for building a custom AMI.

Now, as you can see here, this AMI has been created and is available. The visibility is private, meaning only I can access this AMI. If you want everyone else to be able to access it, you have to make it public: right-click here, edit it, go to Permissions, and make it public. Once you change the visibility from private to public, everyone can access this AMI and build their servers based on it.

Once you have this AMI created, right-click on it and click Launch. This is how you launch a new instance from the custom AMI. The process of creating the virtual server from this AMI is the same as we have seen in the virtual server creation video; there is no change in that. The only change here is the custom AMI.

Now let's say you have built this AMI in the US East region, but you want to launch a web server in the Asia Pacific (Sydney) region. How can you make this AMI available in the Sydney region? Right-click on the AMI and select Copy. Here you have to specify the destination region, Asia Pacific (Sydney), and you can change the name and the description; these are not mandatory fields. Then select Copy. The copy process will initiate within a couple of minutes, and after, let's say, 10 or 15 minutes, depending on the size of the AMI, your AMI will be copied to the other region. Once copied, you can launch instances in that region based on this particular AMI.

So this was all about how to build your own custom AMIs, how to create new instances based on them, and how to transfer an AMI from one region to another. Thanks for watching, guys.

Hey guys, in this video I'm going to explain about EBS volumes. As you may recall, this server has a default EBS volume of 8 GB which contains its root file system. Now, what if you want to attach an additional EBS volume and store your data on it? First of all, an EBS volume is just like another hard disk: when you plug in another hard disk, it's a raw disk. You have to make a file system on it and then mount it, and only then can you start deploying or transferring data onto that particular disk. The advantage of having an EBS volume is that your data will be persistent: even if your server is terminated or crashes for whatever reason, your data will stay there. Let's go to Volumes and create a volume. There are two types of volumes: one is standard.
And the other is Provisioned IOPS. Provisioned IOPS provides a better I/O rate; this is what we typically use with databases or any other application that needs higher I/O rates. For those, you can go for this disk type; otherwise go for standard. Now here you provide the type of your EBS volume and the availability zone. Keep it in the same zone in which your instance is running: whichever instance you want to attach this EBS volume to, keep this availability zone the same as that instance's availability zone. And say Yes, Create.

Once created, you can see it is available: 2 GB of EBS volume is available, meaning it hasn't been attached to any server yet. So right-click on it, choose Attach Volume, select the instance you want to attach it to, and click Yes, Attach. Now go back to your server, and if you type fdisk -l, you can see that this additional 2 GB hard disk, the EBS volume, has been attached, and this is its device name. What you have to do is build a file system on it before you actually start using it. How long that takes depends on the size of the EBS volume: if the volume is large, it will take some extra time; otherwise it will be done within a minute.

So now your EBS volume has been formatted, and we have built an ext3 file system on it. Now let's try to mount it. As you can see, this 2 GB volume has been mounted at /mnt/mydisk. Go to /mnt/mydisk and see, it is empty. Now whatever data you start putting in this volume will be saved here, in the EBS storage. So even if your system is terminated, or your instance is terminated, your EBS volume will stay there, and you can launch a new instance and attach this volume to the new instance. So this is how you create an EBS volume, attach it to your system, format it and mount it before you actually start using it. Thanks for watching, guys.

Hey guys, in this video I am going to cover Amazon storage solutions. The first one is Amazon S3, or
Amazon Simple Storage Service. The second one, which has been recently launched, is called Amazon Glacier. First let's talk about Amazon S3: what it is, how it helps in storing your data, and what additional advantages you can get out of it.

Amazon S3 is storage for the internet. Whatever you store in an Amazon S3 bucket is highly reliable and scalable, and there is no real limit on how much data you can put in an S3 bucket. That really helps in scaling up your environment: let's say you start with 10 MB, and at the end of six months or so you start getting 1 terabyte of data coming from your users and uploading it to your S3 bucket; that is very much supported by Amazon. At the maximum, a single object can be up to 5 terabytes by default, but if you need more storage, you can write to Amazon and you will get it; there is no hard limit on that.

What are the advantages of using S3? First, you can save your static data on S3, for example the images and similar assets you want to use on your website; you can save that content on S3 and access those images and videos directly from S3 within your code. Secondly, it actually allows you to host a fully static website as well: if you have an HTML-based website with a few images and a little CSS, you can host that website straight from Amazon S3, add the CNAME for it in your DNS, and you are good to go; you don't need anything else to run your website. Another thing is that it integrates with other Amazon services: for notifications it integrates with SNS, and for the content delivery network it integrates with CloudFront, so it is very easy to integrate with other services. And again, it's cost-effective; you pay for what you use.

Before you actually start storing data on S3, you have to create a container, which is called a bucket. Here you specify the region where you want to create the bucket, and the bucket name. Note that this
bucket name should be unique: my-first-bucket-1212, and Create. As you can see, your bucket has been created, and on the right side you can see the properties of your bucket: the name of the bucket, the region where it got created, and the owner. Here you specify some of the permissions: this is my admin account, and I have got all the permissions over who can access my account; moreover, I can add more permissions here. For example, if I want everyone to be able to list the contents, I can do so. There is also a bucket policy that we can set up here, but before we talk about that we need to cover several other things, so we will come back to it later.

As I said earlier, you can host your static website here. This is the dynamically generated domain name, or endpoint, that's provided to you for your Amazon S3 bucket. You can enable website hosting and choose the default document that opens when this link is hit, let's say index.html, and the error document, let's say error.html, and save.

Logging: whenever someone accesses your website, access logs are created. These are industry-standard, W3C-compliant logs, which you can analyze with any industry-standard log analyzer. If you want logging enabled, you can turn it on, and you can save the logs in the same bucket or, if you have multiple buckets, in another bucket. Just give a prefix so that the logs will be generated and saved under a logs folder, and save.

Notifications: here we have to use the Amazon SNS service. We'll talk about SNS later on, and then we'll see how you can use the SNS service to send notifications. Then there is the lifecycle, which integrates really well with Amazon Glacier. Another thing: if you want Requester Pays, meaning the one who is accessing your bucket pays, rather than
you paying Amazon, you can enable this. But for this, you have to disable anonymous access to your bucket, and the other users really need to have Amazon accounts of their own; those accounts need to be filled in and associated with your bucket, so that those Amazon accounts are charged and not yours.

And versioning: it is very similar to any version control software that we use, for example Subversion or Git. It keeps versions of your data, so that even if you accidentally delete a file, you can retrieve it from the older versions. So that was all about the bucket properties and how you can set the various parameters as per your requirements.

Now let's click on this bucket. As you can see, the bucket is empty as of now. Let's try to upload something: Add Files, select a file, and Start Upload. As you can see, the management console lets us upload files; that was not the case earlier, and most of the time you are not going to use the console to upload data. One other tool you can use to access your S3 bucket is CloudBerry Explorer. You need to configure it: if you click on File here, it says Amazon S3 Accounts, and you can create a new account. Fill in your display name, access key and secret key; every account comes with an access key and a secret key, and then your account will be created, just like these two accounts that you can see here. And as you can see, my-first-bucket-1212 is the one I just created, and it also contains web-server.ppk.

Now let's upload an HTML file and see whether that web hosting feature is working. Upload one test.html file to the bucket; as you may recall, we have enabled the web hosting properties. This file has been uploaded using CloudBerry Explorer for Amazon S3: test.html. Go back here, copy the endpoint link, and append the name of your file. As you can see, the file has been rendered and displayed
on your screen. Similarly, you can host your entire static website here. And if you don't want to use this generated endpoint, you can register your own domain and make a CNAME entry for it, so that you access the site with your own custom DNS name rather than the one generated by S3.

Now let's go back to our virtual server once again; I'll show you a command-line utility you can use to access your S3 bucket. Click on the instance, copy its public IP, connect, and switch to the root user. Here, install s3cmd; that gives you the s3cmd utility, which you can use to access your Amazon S3 buckets. Before you actually start using it, you need to configure the utility: run it with --configure, and here you have to specify your access key. Where do you get your access key? Under your name, click on Security Credentials; here you get the Access Keys tab. Expand it, and click the legacy security credentials link; it may ask you to sign in again, so provide the master account information. Here you can see your access key ID; fill that in. And this is your secret access key; copy it and fill it in too. Do you need an encryption password? No, I don't, so just hit Enter and keep the default values, then save. As you can see, your access key and secret access key work fine.

Now it's time to use it: type s3cmd ls. Here you can see my-first-bucket, the bucket which I created. Now let's create a test file and upload it with s3cmd put. This is how you upload files to an S3 bucket. And if you want to list the contents of a bucket, you can run ls followed by your bucket name. There is the file I uploaded just now; the test HTML file was already there, and there is the one we uploaded using the Amazon console. Now, if you want to create one more bucket, type s3cmd mb and specify the name, my-second-bucket.
List again, and here you can see the two buckets. For the full set of commands, type --help and you'll see everything you can do with the s3cmd utility. You can use this utility inside your code as well, or you can consume the APIs in your code, so that you can upload and download data to and from Amazon S3 buckets. Let's go back to the S3 console and see whether the newly created bucket shows up there: here is my-second-bucket, and here is my-first-bucket. So this is how you can create buckets, delete buckets, upload content, set permissions, enable the website hosting property and enable logging. Thanks for watching, guys, and stay tuned for more videos.

Hey guys, in this video I'm going to explain Amazon Glacier. This service has been launched fairly recently by Amazon, in August 2012. The basic idea behind this service is to store backups: this is Amazon's archival system. It integrates with other Amazon services like S3, so if you have data in S3, let's say a year old, and you want to keep that data archived, you can put it in Amazon Glacier. You can configure that property, as we have seen in the previous video on Amazon S3: you can configure your bucket's properties so that data archival is turned on; that is, you can configure S3 buckets to archive data into Amazon Glacier automatically. We'll see in this video how you can archive data; you can even delete data that is older than, let's say, six months or so. It again depends on your requirements.

Amazon Glacier has something called vaults: as we have buckets in S3, in Amazon Glacier we have vaults. All you have to do is create a vault, like the one I have created, test-one. So you simply create a vault and give it a name, let's say My Vault, and click Create Vault Now. That is all you have to do as far as Glacier configuration is concerned. Again, it depends on where you want to configure
your Glacier. Right now these locations are supported: US East (Northern Virginia), US West (Oregon and Northern California), Ireland, and Tokyo. The rest of the regions — Singapore, Sydney, and São Paulo — are not supported at all as of now. Since it has been launched recently, probably by the end of this year all these regions will also be supported by Glacier. So once we have created the vault, let's go to Amazon S3. Now, basically, when you configure S3 buckets to archive data automatically, it will not store the data in the vault that you created explicitly — it will create its own vault and store the data there. If you want to upload data from your system to Amazon Glacier, then you can create your own vault and upload the data into it. We'll just see how we can use CloudBerry Explorer to access Amazon Glacier. So once you have CloudBerry open, go to File and click on Amazon Glacier. Here you can create your new account: just provide the access key and secret access key, and you'll get a new account here, similar to what I have got. So once you create your account, it will be visible here, and you can click on this thing. As you can see, my vault has been created. There are two other vaults also, which I created in my Glacier earlier, and this is the one which I created in the Oregon region. So if you want to, let's say, upload anything, what you can do is click on the vault first, and then drag the file here. The process is very much the same as when you upload data to your S3 buckets. But you cannot upload the data using the Amazon web console — in the case of S3 buckets you are able to upload data from the Amazon console itself, but in the case of Glacier you cannot do so as of now. You can either use the Amazon APIs or a third-party tool like CloudBerry Explorer. Now let's go back to S3 and see how we can configure the lifecycle of a bucket, so that old data can be archived to Amazon Glacier automatically. Click on the
properties of your bucket, and then click on Lifecycle, and here you can add a rule. If you don't specify any name, a new rule name will be generated automatically. Apply to entire bucket? Yes. Time period: days from the creation date, or you can specify a date from which you want to take the archive. Let's say this, and Move to Glacier — how many days do you want to specify? Days from the object's creation date: let's say 180 days, which equals 6 months. Save it. As you can see, the notification says all objects in your bucket have been scheduled to move to Glacier, after which they will no longer be immediately accessible. So any object — when I say object, I mean a file or folder in your bucket; whatever data is in your bucket is called an object — any object which is 6 months old will automatically be moved, or archived, to Amazon Glacier. When this event happens, you will see a new vault created, with a random name assigned to it, and if you use your CloudBerry Explorer you can view the contents of that particular vault. So this is how you can create your vaults in Amazon Glacier, and how you can configure S3 buckets so that data will be archived into Glacier automatically, as per the conditions or policies that you configure. Thanks for watching guys, stay tuned for more videos. Hey guys, in this video I'm going to discuss Amazon's Identity and Access Management. So first of all, what is Identity and Access Management? Now, when you sign up for your account, you provide your email ID and password, and that becomes your master account. But now suppose you have multiple users who want to access your Amazon web console. You cannot share your master password, because if they get access to your master account, they can do anything. So the idea is to create separate users, and each user will have their own roles and responsibilities. Now, when you define separate users, first you have to define a group. Let's say we start
with creating a new group, Programmers, and here you can select the access level. All the programmers on my team will get their username and password, but with limited access — let's say read-only access. So here the policy shows up like this. All these policies of Amazon are basically written in JSON, so if you know a little bit about JSON you can easily configure your own policy — but I'll show you how to configure a policy in this video as well. Continue. I created all these users earlier, but you can also create new users from here — let's say programmer1 and programmer2 — and continue. Here you can see 'Generate an access key for each user'. As we saw earlier, when we created virtual servers we used access keys and secret access keys, but that was for the master account. Now, since we are creating new users, and we are not going to share the access key and secret key of the master user, we create separate access keys and secret keys for these users. Click on Continue, and here it will ask you to download the file. See — two users have been created, and credentials have been generated for them. Download the credentials. And if you open this file, you can see the username, then the access key and secret access key. So you can share these credentials with your users so that they can access the server. But still, you have just created a group and a couple of users — there is one more thing that you need to configure before you can share the credentials. Click on this one. So the permission is read-only access; under security credentials, this is the access key, which we have already got, but there is no password. How will a user access your console without a password? So you need to generate a password for that: let's say assign an auto-generated password. Again, download this, and if you open this thing, you can see we have got some other information as well: the user name, a randomly generated password, and the link which they will use to
log in to the management console. Now click on the dashboard, and here you can see the IAM user sign-in link. All the users which you create here will use this link to log in to your Amazon console, using the credentials that we have just generated — the password, and the access key and secret access key of those users. Now let's go back to your Amazon S3, click on the properties of your bucket, and go to Permissions. Let's try to edit this policy: click on Edit Bucket Policy, then on AWS Policy Generator. Here you have to specify: the service is Amazon S3, the effect is Allow, and you have to reference the user which you just created in IAM. So I have created a couple of users, programmer1 and programmer2, and I want to reference programmer1. This is the syntax — ARN means Amazon Resource Name: arn, colon, aws, colon, then the service name, then two colons, and then your account number. So this is your account number, 4092: copy this account number, paste it over here, and just remove the dashes. Then again a colon, followed by the keyword user, then a forward slash, then the user name. And which action do you want to allow this user? I want to allow GetObject. On which bucket do you want to apply this policy? Again, reference that bucket by ARN: arn, colon, aws, colon — now the service is s3 — followed by three colons, then the bucket name, so copy the bucket name from here. Put an asterisk if you want to cover all the objects; if you want to cover a single object, then give the name of that object, just like test.html. Asterisk, and add the statement. Once we have generated this statement, generate the policy out of it. So this is the policy which the generator created: it means GetObject is the action that we have allowed on this particular bucket for this user. Copy this policy and paste it here. That's it. So now what have we done? We have just generated a new bucket policy based on the users which we created in Identity and Access Management. So this is how you can grant permissions and various roles
to the various users. So first create the users in Identity and Access Management, then add them to your policy, generate the policy using the policy generator tool, apply the policy here, and at the end click on Save. So that was all about how to create users in Identity and Access Management, how to provide roles and permissions, and then, based on those user names, how you can actually edit your bucket policies and allow the users to perform various actions. Thanks for watching guys, stay tuned for more videos. Hey guys, in this video I'm going to explain Amazon CloudWatch. Now, Amazon CloudWatch is a monitoring service provided by Amazon. You can monitor your cloud resources running in a specific region — for example your EC2 instances, your EBS volumes, your load balancers, etc. When you configure CloudWatch, you create a couple of alarms — one alarm for each service that you want to monitor — and that service will be monitored by CloudWatch as per the specifications that you provide at the time you create the alarm. In case your alarm gets triggered, you'll get a notification: an alert email or an SMS. That is something we can achieve using Amazon Simple Notification Service. So this is a good scenario to see how multiple Amazon Web Services can integrate with each other and provide the best solution that you're looking for. Now, this is the dashboard of CloudWatch; as you can see, I don't have any alarm created. Let's start by creating an alarm. As you can see, these are the metrics that are available to you to create an alarm: this is the EBS volume, here is your Auto Scaling group, here is your EC2 instance metric. So all the services which are currently running in your particular region will be available for monitoring. Now, as you can see at the top, here are the statistics that you need — average, minimum, maximum, sum, or samples; it's up to you. So let's say I want an average over five minutes
for a particular metric, and the metric I select is going to be CPU utilization. Here I'll say the name is CPU alarm, and the description is 'Alarm for high CPU utilization'. Now here is the definition of your alarm: this alarm will enter the alarm state when CPU utilization is greater than or equal to, let's say, 80%, and for 15 minutes. If the CPU utilization of your instance goes above 80% and stays there for 15 minutes, then this alarm will be triggered. And if the alarm triggers, what happens? If you want to receive an email alert, then you have to use Amazon Simple Notification Service here. So we can simply create a new email topic — the topic name is MyCPUAlert, or whatever you name it — and here go the email IDs that you want to specify; all these email IDs will receive the alert when the alarm is triggered. Continue. Now this is the summary; create the alarm. It will automatically create the SNS topic, and an email will be sent to you for confirmation of the subscription. Right now it has insufficient data. Now, as time passes, if the CPU utilization of your instance goes above 80% and stays there for 15 minutes, you'll get an email alert, and that alert email says that this instance is running with high CPU utilization. On the left side you can see the metrics — all these services are supported by CloudWatch. You can take advantage of this service to monitor your DynamoDB tables, your EBS volumes, your EC2 instances, your load balancers, your RDS databases, Route 53 (your DNS system), and so on. So this is how you can create an alarm: provide some metrics, provide the threshold, and then configure Simple Notification Service so that you get an email alert. Thanks for watching guys. Hey guys, in this video I'm going to explain Amazon's notification service, which is called Amazon SNS — it stands for Simple Notification Service. It is really very helpful for sending out notifications on several events. For example, if you are monitoring your servers, and let's say
the CPU utilization of your server is 85%, and it stays there for more than 5 minutes, and you want a notification — that is where these kinds of services come into the picture. So when you open the Amazon SNS dashboard, this is how it looks. What you have to do is first create a topic, and then assign the subscribers to it. But before I tell you how to create the topic and assign the subscribers, I just want to give you an idea of what a topic is in a messaging system in general — I'm not talking about Amazon SNS, but in general: what is a topic, and what is a messaging system? Every messaging system has two components: one is called a topic, and the other one is called a queue. If you have heard about IBM MQ Series, that is a messaging system which has been used by large enterprises, and it also uses queues and topics. Now, what's the difference between a queue and a topic? Let's say this is a publisher who publishes a message — these two are publishers, for the queue and the topic respectively. When this publisher publishes a message to a topic, it puts the message in the topic, and at the receiving end there could be multiple subscribers — multiple subscribers who subscribe to this topic and receive the message that has been published by the publisher. But in the case of a queue, the publisher puts the message in the queue, and at the receiving end there will be only one subscriber: the message that was published by the publisher and put into the queue will be received by only one subscriber. So this is the main difference between a topic and a queue. If you want to send out bulk notifications — when I say bulk notifications, I mean notifications sent out to multiple users — we always use a topic. And this is the same concept which is used by Amazon Simple Notification Service. What you have to do first is create a new topic. So you have to give it a name, let's say my first
topic. Display name — this is something you can consider as the sender's name: Notification Alert. And create the topic. Once your topic has been created, you have to create the subscription — meaning you have to add the users who will receive messages from this particular topic. Now, create subscription. Here you can see there are multiple protocols supported by SNS: you can use HTTP, HTTPS, email, or Amazon SQS. Amazon SQS is the queuing system; we will talk about that in the other course, the advanced course. So how can you use HTTP or HTTPS? Every service in Amazon is a web service, so you can give your HTTP URL here — an endpoint which will receive the notification when it is sent — and then your code can receive the notification and display it wherever you want, or use it the way you want. The easiest way is to use email: here you have to specify the email address, and subscribe to the topic. Now I have just created one subscription for my first topic, and an email alert has been sent to this user, or subscriber, so that the user can confirm it before the subscription becomes activated. Let's go to my email ID and see what I have got. This is the alert which I received in my mailbox; when I confirm it, my subscription will be activated. Subscription confirmed. Go back here, refresh this thing, and you can see it has been activated. Now it's time to test it out. Click on Publish to Topic. Subject: 'This is a test'. Message: 'Hi there, this is the first test notification'. And publish the message. Let's check the email once again: 'This is a test' — and 'Notification Alert', that's the sender's name — 'Hi there, this is the first test notification'. So this is how you can actually create your topics, create your subscriptions, and test whether it's working or not. Now, you can use this service with other monitoring services of Amazon, which we will discuss later on, in the next video. So when we discuss the monitoring
services, we also need to send out notifications to the customer in case something goes wrong, and that is where you can use these SNS notification services to send out notifications for the alerts generated by the monitoring services. That's all for Amazon SNS. Thanks for watching guys, stay tuned for more videos. In this video I'm going to explain Amazon CloudFront. So what is Amazon CloudFront? Basically, it's a content delivery network provided by Amazon. Before we actually jump into Amazon CloudFront and see how we can configure it, what the benefits are and all that stuff, let's talk about what a CDN is in general. Now let's say you've got one server here — let's say this is a web server, right — and this guy here in orange is the user. Now, what happens generally if we don't have a CDN? Your user accesses the web server directly, and your server sends the response. Now let's say you have multiple users, and your users could be anywhere in the world: they could be in Asia Pacific, in Europe, in the United States, wherever. Now let's say these users right here, on the right of the screen, are in Asia Pacific. They also need to access your server, just like the other users are doing — directly, without any CDN in place. So what happens in this case? As your user base grows, and if your users are sitting far away from the actual location of the server, they experience a lot of latency, and the performance of your website drops drastically. So after some time, let's say your users have increased to 5,000, for example — one day your web server may also crash; there are chances of that. So how can we resolve this issue? How can we make the website load faster, no matter where the users are sitting — in the US, in Asia, in Europe, anywhere in the world? What we generally do is put a CDN in front of your web server. Now, what's the role of the CDN here? The CDN communicates with your web server
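The CDN-in-front-of-the-origin idea can be sketched as a simple cache. This is a toy model, not CloudFront itself — the 24-hour time-to-live is just an assumed default, and the origin here is a plain Python function standing in for the web server:

```python
import time

# Toy model of a CDN edge cache sitting in front of an origin web server.
# The 24-hour TTL is an assumed default; real CDNs make this configurable.
class EdgeCache:
    def __init__(self, origin_fetch, ttl_seconds=24 * 3600):
        self.origin_fetch = origin_fetch      # function: path -> content
        self.ttl = ttl_seconds
        self.cache = {}                       # path -> (content, fetched_at)
        self.origin_hits = 0                  # how often we bothered the origin

    def get(self, path, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(path)
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]                   # served from cache, origin untouched
        content = self.origin_fetch(path)     # cache miss or expired: go to origin
        self.origin_hits += 1
        self.cache[path] = (content, now)
        return content

cdn = EdgeCache(lambda path: f"<html>{path}</html>")
cdn.get("index.php", now=0)          # first user: fetched from the origin
cdn.get("index.php", now=60)         # second user, a minute later: from cache
print(cdn.origin_hits)               # 1 -- the origin was contacted only once
cdn.get("index.php", now=25 * 3600)  # past the TTL: the origin is asked again
print(cdn.origin_hits)               # 2
```

No matter how many users request the page, the origin only sees one fetch per TTL window — that is the load reduction being described here.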
and the user communicates with your CDN, not your web server directly. So let's say you request a page, index.php. The CDN will ask the web server the first time: give me index.php. Once it receives index.php, it serves it back. Now let's say this user also requests the same page. At that point the CDN will not go back to the web server and ask for that page again — it will serve that page from its cache. It maintains a cache, and we can configure how long we want to keep the files in the cache. By default, for most CDNs, it's 24 hours — even for Amazon CloudFront it's 24 hours — but we can configure whatever number we want. If the website is static, we can use a somewhat higher number, but if you're running a dynamic website, then we can keep it lower. The idea, again, is that your users communicate with your CDN and not with your web server. And another thing: you launch multiple CDN nodes, and they all communicate with your web server. You can place these CDN nodes across the globe — you can distribute them so that if a user is coming from Asia, they get connected to the nearest node, rather than coming all the way to the web server itself. It drastically decreases the load on your web server, and the advantage is that every user gets the site loaded very, very fast. So this is the same concept which is used by Amazon CloudFront. Now let's talk about Amazon CloudFront. When you start using Amazon CloudFront, you'll have to create a distribution, and a distribution needs one origin point. The origin could be your S3 bucket, or any other HTTP server where you keep your files — basically it needs a source where your data is, so that it can pull the data and give it to the users. So let's start by clicking on Create Distribution. Here you have two types of distribution: one is download and one is streaming. A download distribution uses HTTP or HTTPS as the protocols to serve data to the users. But if you are serving media files — video files — and
if you want to stream your media, then always use the streaming distribution type. It uses RTMP, the Real-Time Messaging Protocol, provided by Adobe's Flash Media Server, for example. What's the difference between the two? Let's say you want to stream using a download distribution. What happens is that first the video gets downloaded to the user's local system, and then it starts playing. In the other case, if you are using streaming, as soon as the user receives the first few bytes, the video starts playing — it will not wait for the video to download fully, but will start playing as soon as it receives the first few bytes. So this is the main difference. Most of the time we stick with the download distribution. Now, here you have to specify the origin domain name — where your files are; you have to give the source. Let's say my files are in this particular bucket, so I am using this S3 bucket as the origin. You can even use your own custom origin — it could be your own HTTP endpoint. What we need is some sort of endpoint where your data is: whatever information you provide here, CloudFront will go there and search for the requested files, so the data needs to be there. Here you can specify the origin ID — you can keep whatever name you want. Restrict Bucket Access — we will talk about this later on; first let's try to deploy a distribution without any restriction. The path pattern is always 'default'. What type of protocols do you want? Let's stick with the basics. And object caching: now, when CloudFront fetches an object from the source, it receives a cache value in the header of the data. Let's say my CloudFront is fetching a file from this particular bucket — when it fetches that file, the file's header will contain a cache value: how long to keep that data in cache. So what this option defines is: do you want to use that cache value, or do you want to specify your own value here? That's the main difference. So let's say we specify our own value. Forward cookies: keep as it is. Forward query strings: no. Restrict
Viewer Access — we will talk about this later on. Price class: whenever you create a distribution in Amazon CloudFront, it distributes it across all its edge locations — in the US, in AP, in Europe, everywhere. If you want to distribute only in the US and Europe, you can select that; there is also US, Europe and Asia; and if you want to deploy it to all edge locations, you can keep this default. Now, whenever you create a distribution, Amazon CloudFront gives you a randomly generated domain name. And if you want your own domain name — let's say you have got your own domain, and you want to access this particular CloudFront distribution using your domain instead of CloudFront's domain — you give your domain name here. Let's say mycdn.nexport.com. So that is my domain: mycdn.nexport.com is the CNAME that I'll create for this particular CloudFront distribution. But keep one thing in mind: when you create these CNAMEs, you have to go to your domain name registrar, log in there, and create a CNAME entry for this particular domain name. SSL certificate: the default one. Default root object: when you hit this particular domain name, what should open first? Let's say index.html — so whenever this domain is accessed, this file will be opened. That is the default root object: whatever you want to serve when your domain is opened. Logging: on or off. Whenever you access your CloudFront distribution, access logs are generated, and these logs are industry-standard W3C compliant, so you can use any third-party log analysis tool to view or analyze them. Now, where do you want to save those logs? Let's say save them in this particular bucket, and you have to specify a prefix also: my-logs. Cookie logging: off. No comment, and the distribution state: let's deploy the distribution. As you can see, the status is In Progress; it will take 10 to 15 minutes to deploy this distribution. Now we have got a bit of information here. As you can see, this particular domain name has been
generated by CloudFront, and this is the domain name which I'll map in my domain name registrar's account. So, using this particular domain name, you will access all the files which are stored in this particular bucket, because we are using this bucket as the source — or you can say the origin — of this particular distribution. This is the ID, or name, of the origin, and this is the type of origin we are using: an S3 origin. We'll talk about origin access and all that stuff — I'll show you what the difference is and why we need it — but first let's talk about the basic stuff. Now, this distribution — or you can say this particular domain name — is the one which users will access. It will be deployed across all the edge locations; you can see the locations from here — it will be deployed to all of them. So a person sitting in Singapore will access the Asia Pacific edge location; they will not go to the US. And a person sitting in the US will not come to Singapore. So the biggest advantage is that latency will be very minimal — or, I can say, there will be almost no latency at all. Now the status has changed to Deployed — it means we are ready to use this distribution. First let's see what's in that particular bucket which we need to access. Check out the data in the bucket: we have got a test.html file. Let's try to access that using the default domain name which CloudFront created for us. Copy this thing. So as you can see, this is test.html — the test.html file got accessed using the CloudFront URL. And as you may recall, when we did the configuration of the Amazon S3 bucket, that also provides us one domain.
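So at this point the same object is reachable through two different host names. A tiny helper makes the contrast explicit — both domain names below are illustrative placeholders, not the real endpoints from the video:

```python
# The same object reachable through two host names.
# Both domains below are illustrative placeholders, not real endpoints.
CLOUDFRONT_DOMAIN = "d1234abcd.cloudfront.net"   # randomly generated by CloudFront
S3_DOMAIN = "my-first-bucket.s3.amazonaws.com"   # the bucket's own endpoint

def object_urls(key):
    """Return the CloudFront URL and the direct S3 URL for one object."""
    return (f"http://{CLOUDFRONT_DOMAIN}/{key}",
            f"http://{S3_DOMAIN}/{key}")

cf_url, s3_url = object_urls("test.html")
print(cf_url)   # http://d1234abcd.cloudfront.net/test.html
print(s3_url)   # http://my-first-bucket.s3.amazonaws.com/test.html
```

Only requests through the first host name pass through the edge cache; requests through the second go straight to the bucket.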
This is the domain of the Amazon S3 bucket — we can also access the same file using the S3 domain name. But if we use it this way, we are not actually taking advantage of Amazon CloudFront's capabilities. Most of the time, we need the data to be accessed using CloudFront only. So what we can do is restrict this access: we do not want our users to access the data using the bucket name — we want to force everybody to use CloudFront to access the data. We can configure this thing as private content, and this is where your origin access identity will come into the picture. But before we jump into this topic, let's try to create the CNAME first. As you can see, this domain name is randomly generated, and it's something you cannot remember. So let's try to make something more memorable, which you can actually memorize. Go to your domain name registrar — my registrar is GoDaddy — so I go to my domain, and I quickly add the CNAME for this domain: just copy this thing, paste, and save your zone file. Usually it takes a couple of minutes to propagate the changes — at most one hour. Let's try to access it. So this is the test HTML file. Now I do not need to use the generated domain name to access my distribution; I can use my own domain name. So far, we have seen that we can access the bucket — or, you can say, the test.html in that bucket — using the bucket URL as well as the CDN URL. But what I want is that my users should not be able to access this file, or any file in this bucket, using the bucket domain name — because if they do so, they will not be using the capabilities and functionality that my CloudFront provides. So I want to force my users to use the CloudFront URL to access the objects in the bucket, rather than the S3 URL. This is where your origin access identity comes into the picture. Right now, you can see there is no origin access identity, but we have to create one. Click on Edit, enable Restrict Bucket Access, and here you have to specify: either you can create a new identity — you
just have to provide the name here, that's it — or use an existing identity. There are a lot of identities that I created earlier, but let's try to create a new identity: My Test User. And: automatically update my bucket policy. Now, when we create this origin access identity, it basically creates a special user by this name, and only this user will have access to fetch the objects within the bucket. Everybody else will not be able to access the bucket objects directly; they have to pass through the CloudFront URL in order to access the objects. And to make these changes take effect, it will update the bucket policy. So once this is done, you can see this string has been generated — this is the special user, and only this user will have access to all the objects within this particular bucket. Now, if you go back to your distributions, you can see that it has started redeploying this particular CloudFront distribution. Let's go back to S3, go to the properties of your bucket, and then to Permissions. Here you can see, under Edit Bucket Policy, the policy has already been added: this is the user we just created, this is the CloudFront origin access identity, and only this user is allowed to fetch every object in this particular bucket. So this is how this policy has been updated by default. You can remove the extra permissions and click on Save. Now go back to CloudFront once again — once your distribution is deployed, we will see what changes have happened. Now, as you can see, this distribution has been deployed. Let's try to access this first with your CloudFront domain — and it's working. And now let's try to access the same file using the S3 domain name — and see, access is denied. So it means users can access any object in this particular bucket only using the CloudFront domain, and they will not be able to bypass that by going straight to the S3 domain. So this is how you can restrict your bucket policy. Thanks for watching guys. Hey guys, in this video I'm going to explain about Elastic Load
Balancer. Now, before we actually start configuring Elastic Load Balancer, let's look at its architecture. As you can see in this diagram, this is the basic architecture of Elastic Load Balancer. This thing here is your Amazon account, and within this account you have selected one region — it could be any region. Within this region there are a couple of availability zones, and in each zone you have launched one EC2 instance. Now, both these instances need to be of a similar nature — meaning if one is a web server, both need to be web servers — and we have put both these servers under the Elastic Load Balancer. Now, the Elastic Load Balancer communicates with the EC2 instances using these protocols: HTTP, HTTPS, TCP, and SSL. Now, here, this is you — the Elastic Load Balancer's owner — so you can access your Elastic Load Balancer either via the management console or using APIs or command-line tools. And this user here is the end user, who tries to access your servers. Now let's say both these instances are web servers serving website requests, and this user is an actual user who is requesting your website. This user communicates directly with your load balancer, using these protocols, and it's the job of your load balancer to send these incoming requests to the backend servers. It can send a request to any of these servers, using these protocols. Now, the important thing is: if one instance goes down, your user will not come to know anything about it, because your load balancer will still get the responses from the other instance and send them back to the user. Only if both servers go down will your user see that your website is down. So the main idea behind Elastic Load Balancer is to provide high availability of your services. Now, log in to your management console — you must have a couple of instances running here, both of a similar nature — and then go to Load Balancers and create a load balancer. Here you can specify the name of your load balancer; this is the protocol
of your load balancer, and the protocol of your backend servers. As I explained earlier, we have four protocols that we can use, as you can see here, for the load balancer as well as for the backend instances. We will keep this as the default. And this is how your load balancer checks whether your instances are healthy or not: since we have selected the HTTP protocol, it will send a request over HTTP, on port 80, and request a specific file. You can specify any file there, but it should exist on both servers. Now, your load balancer expects that your web servers — your backend servers — will respond in 5 seconds; you can increase that to 10 seconds. It will check your backend servers every half a minute. Now, here are a couple of thresholds: one is the unhealthy threshold, the other is the healthy threshold. Let's say your load balancer tries to access your web server, and it gets no response — or a negative response — from your backend server for two consecutive attempts. Then it will mark that server as unhealthy, and that server will no longer be part of your load balancer. And if it then gets 10 successive positive responses from that server, it will include that backend server back into the load balancer, and that server will start serving requests again as part of the load balancer. Here you can specify the instances that you want to include in your load balancer. It's always advisable to have at least two instances, and both instances need to be in different availability zones. This is the summary — if you need to edit anything, you can still edit it; otherwise just click on Create, and Close. Now, this load balancer will take a couple of minutes before it becomes active. As you can see here, the current status is Out of Service, because the instance registration is still in progress. It will take a couple of minutes before both of your servers are registered with the load balancer, and you can then try to access your
So now both of your instances have been registered, and you can see here that the status has changed to InService. Before we actually try to access the load balancer, let's go back to the instances, click on the first instance and try to access it over HTTP; copy this public DNS. As you can see here, index.php gives me a simple PHP info page. Similarly, copy the DNS of your next server and try to access it the same way; it also gives me the PHP info page. So you can consider this as your website: both of your web servers are running the same code and serving the same pages. Now let's go back to your load balancer, and here in the description you can see the DNS name that has been automatically generated for this particular load balancer. Copy this domain and try to access your load balancer. As you can see, it gives me the same page that both of these instances gave me. So what we have seen here is that the user is going to access your website using a domain which points to your load balancer, and users are not going to access your web servers directly. Now, what happens if one of my instances crashes? Let's stop this machine. Once your second server has stopped, try to access the URL of that second web server; as you can see, it is not going to respond, it just keeps timing out. The scenario I am presenting here is that one of your web servers crashes: what happens then? Now try to access the URL provided by the load balancer, and you can see you are still able to access your page, and there is nothing wrong with it. So even if one of your servers crashes, your user will not experience this and will be able to access your website the normal way. As you can see here, the second server is down and the connection has been refused, but if I try to access the load balancer again and again, I still get my page, because my one server is still up and running.
The page is being served from the surviving server, and because of the load balancer I don't see anything go wrong. This all happens behind the curtains, and a normal user is not aware of it. This is what provides a high availability solution for your services. So that was all about the load balancer: how you can create one and how you can configure the thresholds and all that stuff. Thanks for watching, guys. Hey guys, in this video I'm going to talk about auto scaling. We have already seen how to create a load balancer, but this time I have launched new servers in the US East region and created a load balancer in that region. I have used the same AMI which I created in the Oregon region: I copied that AMI from US West to US East and launched a couple of instances based on it. Why did I need to do that? Because integration of auto scaling policies with CloudWatch alarms is only supported in US East as of now; we would not be able to create alarms and have them call the auto scaling policies in any other region. So we have two servers running and one load balancer. These instances are still registering themselves, so we don't worry about that. Let's see how to create auto scaling policies. Before we do that, we need to download the Auto Scaling command line tools as well as the CloudWatch command line tools. I have set up the Auto Scaling command line tools, and I have also created a simple shell script that contains all the environment variables we need; you will find this information in the README file of every tool that you download. We simply create this script and then source it, so all these environment variables are exported to the current shell, and then we have the various executable scripts available. When we set up auto scaling we need to configure four things: first, a launch configuration; second, an auto scaling group; third, a scale-up policy; and fourth, a scale-down policy.
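The environment script mentioned above might look like the following sketch. The install paths and credential-file location are assumptions here, so adjust them to wherever you unpacked the tools:

```shell
# Sketch of the environment script described above; source it with `. ./env.sh`
# so the variables land in the current shell. Paths are examples, not defaults.
export JAVA_HOME=/usr/lib/jvm/default-java
export AWS_AUTO_SCALING_HOME="$HOME/tools/AutoScaling"
export AWS_CLOUDWATCH_HOME="$HOME/tools/CloudWatch"
export AWS_CREDENTIAL_FILE="$HOME/.aws/credential-file"
export PATH="$PATH:$AWS_AUTO_SCALING_HOME/bin:$AWS_CLOUDWATCH_HOME/bin"
echo "Auto Scaling tools on PATH: $AWS_AUTO_SCALING_HOME/bin"
```

Each tool's README documents the exact variable names it expects, so treat this as a template rather than a copy-paste answer.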
So first, we start with the auto scaling launch configuration. It takes a few parameters. First is the name of the configuration, say my-launch-config, and then the AMI ID based on which you want to launch new instances; let's copy the AMI ID from here and put it in. Then you have to specify the instance type, t1.micro in this case, then the key pair that you created for these servers, which is ami-key, and then the group name, the security group you created, which is web-server. Hit enter, and you get the success message that a launch configuration has been created for you. Based on this, we are going to create the auto scaling group. It takes a couple of parameters, so let's check what it needs. You have to specify the group name; the launch configuration we just created, my-launch-config; the maximum number of servers you want in your environment, let's say 5; the minimum number you want to keep under the load balancer, let's say 2; the availability zone, in US East; the health check parameter; and the name of the load balancer you created. That's pretty much all you need. Now let's create it: the group is my-scaling-group, the launch configuration is my-launch-config, the availability zone is us-east-1c, which is where the other instances have been launched, the minimum size is 2, the maximum size is 5, and this is the name of the load balancer I created. The health check type is going to be ELB, and then the grace period. The grace period is the number of seconds auto scaling waits after launching an instance before it starts health-checking it, so that new servers are not marked unhealthy while they are still booting; it is always specified in seconds. Once the auto scaling group has been created, the next step is to create the policies. In this example we are going to create two policies: one to scale up the environment and one to scale it down.
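The two commands just walked through might look like this dry-run sketch, which only prints them rather than executing anything. The AMI ID is a placeholder, and the flag spellings follow the legacy `as-*` tools as I recall them, so verify against each command's `--help` output:

```shell
# Dry-run sketch of the legacy Auto Scaling CLI calls described above.
# ami-12345678, the key pair, security group and zone are all placeholders.
CREATE_LC="as-create-launch-config my-launch-config \
  --image-id ami-12345678 --instance-type t1.micro \
  --key ami-key --group web-server"

CREATE_ASG="as-create-auto-scaling-group my-scaling-group \
  --launch-configuration my-launch-config \
  --availability-zones us-east-1c \
  --min-size 2 --max-size 5 \
  --load-balancers my-load-balancer \
  --health-check-type ELB \
  --grace-period 300"

echo "$CREATE_LC"
echo "$CREATE_ASG"
```

Printing the commands first is a cheap way to review the parameters before running them against your account.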
For that we are going to use the as-put-scaling-policy command. Let's take a look: this command takes a few arguments. One is the policy name, then the type of the policy, the auto scaling group we just created, and the adjustment. Adjustment means the number of servers you want to add when this policy actually executes: if we say adjustment 1, one server will be added; if you say adjustment 3, three servers will be launched at the same time. Let's name it scale-up, with the auto scaling group name my-scaling-group, an adjustment of 1 (how many servers to create when the policy executes), and type ChangeInCapacity. Hit enter. Once this policy has been created, you have to save the value it returns somewhere. Similarly, you can create the scale-down policy. Here you have to specify a negative number, and when you specify a negative number you have to use an equals sign, so minus 1 becomes --adjustment=-1. For my scale-down policy, only one server will be shut down at a time when it executes. Hit enter, and save this value as well. Now we are done with the auto scaling setup. Next, we need to create the alarms based on which these policies will be executed. For that we are going to use the CloudWatch command line tools, so let's go back there and see if everything is working; if you get this output, your environment is properly set up. To create the alarms we are going to use the mon-put-metric-alarm command, so let's review its parameters. It takes a couple of options: the alarm name; the comparison operator, meaning greater than or equal to, greater than, less than, whatever it may be; the evaluation periods, the number of sample periods for which you want to collect samples; the metric name, such as CPU utilization, disk I/O or memory utilization; the namespace, which is AWS/EC2 for these metrics; and the statistic.
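Assembled from the parameter list above, such a call might look like this dry-run sketch. The instance ID and policy ARN are placeholders, and the flag spellings follow the legacy CloudWatch CLI as I recall them, so verify with `mon-put-metric-alarm --help`:

```shell
# Placeholder for the ARN that as-put-scaling-policy printed for the scale-up policy.
SCALE_UP_POLICY_ARN="arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example"

# Dry-run sketch: the command is printed, not executed. i-12345678 is a placeholder.
PUT_ALARM="mon-put-metric-alarm --alarm-name ScaleUpAlarm \
  --alarm-description 'Scale up on high CPU' \
  --metric-name CPUUtilization --namespace AWS/EC2 \
  --statistic Average --period 60 \
  --threshold 30 --comparison-operator GreaterThanThreshold \
  --dimensions InstanceId=i-12345678 \
  --evaluation-periods 3 --unit Percent \
  --alarm-actions $SCALE_UP_POLICY_ARN"

echo "$PUT_ALARM"
```

The scale-down alarm is the same command with the name, description, threshold, comparison operator and policy ARN swapped.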
The statistic means what kind of aggregation you want: average, minimum, maximum and so on. Based on these statistics your alarm will be evaluated. And the most important thing is the alarm action: if this alarm fires, what will happen? This is where we are going to call our policies. Now let's start by creating a simple alarm. The alarm name is going to be ScaleUpAlarm, and the description is going to be something like "scale up on high CPU". The metric name is CPUUtilization, and the namespace is AWS/EC2. For the statistic I'll use Average. I'll keep the threshold low, because I want to show you guys in real time how auto scaling works, so we will see instances being created automatically. The period is 60 seconds, and I'll set the threshold to, let's say, 30, with the comparison operator GreaterThanThreshold. Let's specify some dimensions, namely the instance ID, so the alarm watches this particular instance; note down the instance ID of this server. The evaluation periods say for how many periods we want to capture samples, let's say 3, so three minutes. The unit is Percent. And in the alarm actions we specify the scale-up policy: if the CPU load on my server goes above 30% for three minutes, scale up my environment by one instance, meaning add one instance to my environment. Once done, hit enter, and as you can see, the scale-up alarm has been created. Let's go to CloudWatch and also check the monitoring of this server; we enable detailed monitoring here. So we have got the scale-up alarm, which fires when CPU utilization is greater than 30 for 3 minutes. Similarly, we create the scale-down alarm. Let's copy this command and paste it here: we'll name it ScaleDownAlarm, with the description "scale down at 10% of CPU". Everything else stays the same, but the threshold is 10, the comparison operator is LessThanThreshold, and in the alarm actions we replace the scale-up policy with the scale-down policy; copy that in. The alarm is created, so let's refresh. We already have an alert, because the CPU utilization is already less than 10, so this alarm is already in the ALARM state.
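The way these alarms evaluate can be sketched as a small function: an alarm fires only when every sample in the last N periods breaches the threshold. This is a simplified model of CloudWatch's evaluation, using the scale-up values from above:

```python
def alarm_state(samples, threshold=30.0, evaluation_periods=3):
    """Return the alarm state for a series of per-period metric averages.

    Simplified CloudWatch-style rule: ALARM only when the metric breaches
    the threshold for `evaluation_periods` consecutive periods.
    """
    if len(samples) < evaluation_periods:
        return "INSUFFICIENT_DATA"      # not enough samples yet, as in the video
    recent = samples[-evaluation_periods:]
    return "ALARM" if all(s > threshold for s in recent) else "OK"

cpu = [12.0, 45.0, 51.0]           # three one-minute CPU averages (percent)
print(alarm_state(cpu))            # OK: only two of the last three breach 30
print(alarm_state(cpu + [62.0]))   # ALARM: last three periods all above 30
```

This also explains why the scale-down alarm fired immediately in the demo: an idle server breaches a "less than 10%" threshold in every period.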
What it will do is this: if any server is running below 10% CPU, it will shut one down, but it will keep the minimum of 2. In the scaling group we provided a minimum and a maximum size, so that minimum number of servers will remain even if the scale-down alarm fires. So this is how you can set up your auto scaling policies and auto scaling groups and configure alarms; based on those alarms, you call your scale-up or scale-down policies. Keep one thing in mind: integration of auto scaling policies with alarms is only supported in US East as of now. It is possible to set up auto scaling in any region, but in the other regions you have to take care of the alarms yourself. Thanks for watching this video, guys, and stay tuned for more videos. In this video we will talk about RDS, the Relational Database Service by Amazon. Earlier, RDS only provided MySQL, but nowadays RDS supports Oracle Standard and Enterprise editions and SQL Server as well as MySQL. Launching a database is very similar to how you launch an EC2 virtual server. This is the dashboard of RDS, so let's get started: click on the Launch DB Instance Wizard. All these database engines are supported now, so click on your desired one. Here you can select the engine version: if you want to use the legacy version 5.1 you can, this is the default one, and if you want to use the latest one, 5.6, you can use that too, so it gives you a lot of flexibility. The DB instance class, micro, small, medium, large, is similar to what we have seen with EC2 virtual instances. Next is multi-availability-zone deployment: if you click yes, the database will be deployed with a standby copy in another availability zone of the same region, so a zone failure does not take it down; if you say no, it will run in a single zone. Then there is auto minor version upgrade.
If a minor version is released by MySQL and supported by AWS, do you want your server to be upgraded automatically, yes or no? We keep it no. Then, how much storage do you want to allocate to your database? The minimum is 5 GB. Provisioned IOPS volumes provide high I/O rates, which is what suits database servers most. You have to specify a ratio between 3 and 10, and MySQL supports 1,000 to 30,000 IOPS in increments of 1,000. But this works well with large instances; if we are using t1.micro, it does not make much sense to use provisioned IOPS. The instance identifier is mytestdb, the master username is, say, user, and put in any password; the password must be at least 8 characters. Here you can give the database name, mytestdatabase, the port number, default, and the availability zone, let's keep it default. And here you specify the backup retention period, for how many days you want to keep the backups: 3 days, 5 days, 10 days. The backup window is where you can specify that my database backup should be taken only at this particular time; otherwise keep it default. It's better to keep these values default. This is the summary, and then we launch the DB instance. Now go to Instances once again, and here you can see what it is creating: mytestdb with class micro, 5 GB of storage, security group default, engine MySQL, and the username, password and all that stuff. After a couple of minutes the database will be ready, and we will get a domain name using which we can access this database. So as you can see, the instance is ready now; let's see a couple more options. If you expand this, you can see the endpoint that has been created for you. This is what you can use in your application, or you can connect to this database instance using any MySQL client. Let's take a look at monitoring also. As with the EC2 virtual instances, we also have monitoring for the database instance: CPU utilization, DB connections, free storage, free memory. These are very good stats to have.
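As mentioned, the endpoint works with any stock MySQL client. Here is a sketch that only prints the command, since the endpoint below is made up; substitute the one shown in your instance's details:

```shell
# Hypothetical endpoint of the kind the RDS console displays; replace with your own.
RDS_ENDPOINT="mytestdb.abc123xyz.us-east-1.rds.amazonaws.com"

# Run this interactively; -p prompts for the master password set in the wizard.
echo "mysql --host=$RDS_ENDPOINT --port=3306 --user=user -p mytestdatabase"
```

The same host, port, user and password go into your application's connection string.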
And the good thing is that we can create CloudWatch alarms here also, so in case anything goes alarming we will get a notification. We can create an alarm right here. As we have seen earlier, we can create a topic, send notifications to the MyDB topic, and put your email ID here. Then, for what metric do you want notifications? Let's say DB connections: if the average, or you can say maximum, number of DB connections is greater than or equal to, let's say, 90 for at least 5 consecutive periods of 5 minutes, meaning 25 minutes in total, then send an email notification to the address given. It will automatically create a topic and create an alarm for it. As of now it does not have sufficient data, but once this instance has run for a few hours it will start collecting data, and if the value reaches 90 it will send out the notification. So this is how you can configure your instance and then configure automatic monitoring, and if anything goes wrong you will also get an email notification. RDS provides a lot of options to manage your database instance: apart from this monitoring and the notifications, it also provides high availability and high scalability. So that was all about RDS configuration: how you can set up a MySQL instance, how you can configure alarms and notifications, and how you can configure the snapshots. Thanks for watching this, guys; hope you have enjoyed it. Hey guys, in this video I am going to explain Amazon Route 53. Amazon Route 53 is a DNS service provided by Amazon. A DNS, in general, is a system that translates your domain names into IP addresses. For example, www.google.com is mapped to certain IP addresses: if you ping google.com, you will get an IP address back. Now, how does this domain get resolved to this IP address? This is where your DNS comes into the picture, because it is not possible for someone like us to remember the IP addresses of all these websites. That is why we keep one specific name for our website, and that name, or you can say that domain, is mapped to an IP address.
For example facebook, youtube, yahoo: all these websites we remember easily, but we cannot remember their IP addresses. Now, a typical DNS setup has a certain set of records which make up the DNS zone file. Every DNS system has a zone file, and each zone file contains several records, for example the A record, the CNAME record, the MX record. A records basically point your main domain to your IP address. CNAME records are canonical names that represent your domain and can be used to map your domain to any other domain as well. For example, I have a domain called a1consultance.in, and if I say that a subdomain called google.a1consultance.in should map to www.google.com, that is something I can do in the CNAME section: I create a canonical name that maps my domain to any other domain. Similarly, we have the MX record: if you need to specify your email server, this is where the MX record is used. And most important are the NS records, your name servers. I have purchased the domain a1consultance.in from GoDaddy, and GoDaddy provided me with a couple of name servers. If I didn't have these name servers, my website would not open; these name servers are the domain name servers which are running and hosting this particular zone file. Now, Amazon also provides similar DNS servers, and it allows you to migrate your existing domain to the AWS platform. Before you actually use Route 53, you need to have a domain registered with an accredited registrar like GoDaddy, and there are several others; Amazon does not provide the facility to register your domain, so that is something you need to get somewhere else. Once you have your domain registered, you start by creating a zone file, which here is called a hosted zone. On the right side you specify the name of your domain, my domain, and create the zone file.
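The record types just described fit together in a zone file. As an illustration only, here is a minimal BIND-style sketch for the domain from the video; the name-server host, the IP address and the mail host are made-up placeholders:

```
$TTL 300
@       IN SOA   ns-101.example-dns.com. admin.a1consultance.in. (
                 1 7200 900 1209600 300 )
@       IN NS    ns-101.example-dns.com.     ; name server hosting this zone
@       IN A     203.0.113.10                ; A record: domain -> IP address
www     IN CNAME a1consultance.in.           ; alias for the bare domain
google  IN CNAME www.google.com.             ; CNAME mapping to another domain
@       IN MX 10 mail.a1consultance.in.      ; MX record: the email server
```

Route 53 stores the same kinds of records; you just edit them through the console or API instead of a flat file.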
Once this zone file has been created for you, you can see on the right side the name servers provided by Amazon. If you want to migrate to Amazon Route 53, you need to change the name servers at your registrar and replace your existing name servers with these. Let's do that: copy each name server and paste it over here. It will take a couple of minutes before your new name servers are activated. Now let's open this zone file. Here you can see a couple of records have already been configured for us. Now let's say I want to create a new record. Here you specify the name of your record, for example google.a1consultance.in, and what type of record you want: I want a CNAME record, and it should be mapped to google.com. And this is the routing policy. We'll talk about routing policies in a moment, but for now just keep it as Simple. Simple is the default routing policy provided by all domain name servers, and Amazon comes up with three additional routing policies; that is where you can take advantage of Amazon Route 53 compared to a normal domain name system. Then select Create Record Set. Now this record has been mapped to google.com, and I have created one CNAME for it. If you go back here, you can see no such record exists at the registrar. But since I have migrated my domain to Route 53, I'm not going to use that management console anymore; it is only relevant while my DNS still lies with GoDaddy, and now that I have migrated from GoDaddy to Amazon Route 53, everything I do, I need to do here. If I try to ping this subdomain, it will not work as of now, because it takes a few minutes for the change to propagate to the DNS servers; once propagated, you'll start getting replies. While it propagates, let's explore some other features of Route 53. There are three more routing policies: weighted, latency and failover.
Weighted routing means you have one record, for example dummy.a1consultance.in, and a couple of web servers running behind it. What you want is to send 70% of your requests to web server 1 and 30% of your requests to web server 2: you are weighting the requests coming from the users based on the policy you specify here. Now let's create a new record set called mywebsite.a1consultance.in, and I'm going to provide the IP addresses of my EC2 instances. Go back to your EC2 console, select one server and copy its public IP address, or, if you have an Elastic IP address associated with your instance, use the Elastic IP. If you copied the public DNS name, replace the dashes with dots to get the IP. Make the routing policy Weighted, with a weight of, let's say, 70, and then the set ID, which is a unique name you can choose freely, say MyWebPolicy1212, and create the record set. As you can see, one record set has been created. Now let's create the second record set: start your second server and copy its IP address, and make sure all of your servers are running. Here you have to specify the same domain name, not a new one, and the IP address of your second server. This server should receive 30% of my requests; give it another set ID, MySecondWeight1212, whatever name you like. As you can see, the second record has been created, and the domain name does not change: both of your records keep the same name. Now 70% of the requests coming for this particular domain will be served by this server, and 30% of the queries for this domain will be served by the other server. It is the job of Route 53 to take care of these percentages; you don't need to specify or configure anything extra for this.
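The 70/30 split just configured can be simulated to see the proportions Route 53 aims for; the server names here are placeholders, and each DNS answer is modeled as an independent weighted draw:

```python
import random

def pick_server(records, rng):
    """Weighted-routing model: a record is answered with probability
    weight / total weight, like Route 53's weighted policy."""
    names, weights = zip(*records)
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(42)                       # fixed seed for reproducibility
records = [("web-server-1", 70), ("web-server-2", 30)]

hits = {"web-server-1": 0, "web-server-2": 0}
for _ in range(10_000):
    hits[pick_server(records, rng)] += 1

print(hits)   # roughly 7000 vs 3000
```

The split is statistical, not exact: over many lookups the ratio converges on 70/30, but any individual short run can deviate.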
Now let's create another record set and see the use of the latency routing policy. Say you have one load balancer running in the US and the other load balancer running in Asia, and a user from Europe tries to access your website: what happens, and where should that user go? That is where your DNS, or you can say Route 53, comes into the picture. Route 53 will decide the best possible path for a user coming from Europe, and that path is decided on the basis of latency: whichever load balancer is experiencing less latency at that point of time, the user will be directed towards that load balancer. Let's create a new record set and again provide a name, say www5, and here we are going to use a load balancer, this one. Once you select the load balancer, it automatically selects its location, or you can say the region; it's in US West. The set ID is going to be my-lb-testing, any ID you want, and create the record set. Similarly, create another record set, www5, and select the Singapore-based load balancer; again the region has been selected automatically, and give it a set ID. Both of your load balancers should be serving the same services, or you can say the same website; whatever you have configured should be identical. Create the record set. Now if we open www5.a1consultance.in, it will be routed to one of these two load balancers, depending on which one is experiencing less latency. You can also use your EC2 servers instead of load balancers for this. The next routing policy is failover. Again, I'll take the example of the load balancers: say you have one load balancer running in the US and the other running in Singapore, both serving the same content, and one of your load balancers goes down. Here we talk about active-active and active-passive setups: one of your load balancers will act as the active endpoint, and the other is going to be passive. At one point of time, only one load balancer will be serving all the requests, and only if that one goes down will your secondary load balancer come into the picture.
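The active-passive behavior just described can be sketched in a few lines of Python, with hypothetical load balancer names standing in for the real record targets:

```python
def resolve(primary, secondary, health):
    """Failover-routing model: answer with the primary while its health
    check passes, otherwise fall back to the secondary."""
    return primary if health[primary] else secondary

health = {"lb-us-west": True, "lb-singapore": True}
print(resolve("lb-us-west", "lb-singapore", health))   # lb-us-west

health["lb-us-west"] = False                           # primary goes down
print(resolve("lb-us-west", "lb-singapore", health))   # lb-singapore
```

This is why the primary record needs Evaluate Target Health enabled: without a health signal on the primary, there is nothing to trigger the switch.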
Now I'll create another record set, www6, and select this load balancer as the primary. The set ID is Primary, and you can change that as well; enable Evaluate Target Health and create the record set. Route 53 will keep on checking its health, and if its health goes down, this domain will be switched to the secondary load balancer. Now create another record set with the secondary load balancer and mark it as Secondary; we are not going to evaluate the health of the secondary load balancer. This is how you can take advantage of the failover routing policy: if one goes down, the second will be there to serve the requests. So this is how you can migrate your existing domain to AWS Route 53 and start configuring the various parameters, and you can take advantage of several routing policies provided by Route 53 which no normal domain name system provider gives you. Hope you guys have enjoyed this video, and thanks for watching. Hey guys, in this video I am going to explain what VPC, or Virtual Private Cloud, is. Amazon provides two types of networks: one is called EC2-Classic and the other is called EC2-VPC. EC2-Classic means a flat network which is shared among various AWS users. Whatever we have done so far, like creating virtual servers with Amazon EC2 and creating load balancers, was done on that flat network, or you can say shared network. Now, Amazon provides a service called VPC, which allows you to have your own private space, or your own private network, in the cloud. That network is either going to be totally private, partially public, or completely public. AWS allows you to create your services, like EC2 servers and load balancers, in a totally private cloud that has no internet access at all, which you then reach using VPNs. The other option gives you two networks, one with a public interface and the other with a private interface.
Whatever services you launch behind the public interface will have internet access, meaning those services will be publicly accessible, and the services which you create in your private network are not going to be accessible by anyone via the internet. Now, in this particular course we are going to talk about two types of VPC: one entirely with a public interface, and the second with a public interface as well as a private interface. There are two more options which need VPN technology, and a discussion of VPNs requires understanding VPNs first, so that is something I am planning to cover in my next course; those two types are advanced as far as this course is concerned. Now, this is your VPC console; click on this button to get started with VPC. Here, as I said earlier, we have four options, including VPC with a single public subnet and VPC with public and private subnets. A VPC with a public subnet only will have internet access throughout, and it can access all the other Amazon services, like S3, RDS, other EC2 instances and so on. With a VPC with public and private subnets, you can either have your instances or services launched in your public subnet, or you can create those services in your private subnet. I am going to explain the second option, which covers the public subnet as well as the private subnet. Now here is the summary: this is your public subnet and this is your private subnet. Whatever you launch in your public subnet, say an EC2 instance or virtual server, will get an IP address from this range, and any instance launched in your private subnet gets an IP address from that range. As you can see here, each range has 251 IPs available. These IPs are specifically for you: no one else on the Amazon network can share these IPs with you. This is where your private space comes into the picture; this is the whole concept of your VPC. Now, since we have selected the public-and-private type, it will also create one NAT instance for us, with one Elastic IP, of type small.
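The 251 figure comes from AWS reserving five addresses in every subnet. Assuming the wizard's usual /24 defaults (the exact CIDRs shown in your wizard may differ), Python's ipaddress module confirms the math:

```python
import ipaddress

# Example subnet ranges of the kind the VPC wizard proposes; any two
# non-overlapping ranges inside the VPC's CIDR behave the same way.
public_subnet = ipaddress.ip_network("10.0.0.0/24")
private_subnet = ipaddress.ip_network("10.0.1.0/24")

# AWS reserves 5 addresses per subnet: network address, VPC router,
# DNS, one held for future use, and the broadcast address.
AWS_RESERVED_PER_SUBNET = 5

assert not public_subnet.overlaps(private_subnet)
for net in (public_subnet, private_subnet):
    print(net, net.num_addresses - AWS_RESERVED_PER_SUBNET, "usable IPs")
# -> 10.0.0.0/24 251 usable IPs
# -> 10.0.1.0/24 251 usable IPs
```

A /24 holds 256 addresses, so 256 minus the 5 reserved leaves the 251 the console reports.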
The NAT instance is responsible for providing outbound internet access to the instances we launch in the private subnet. If you want to change anything, you can change it here; otherwise, proceed with creating your VPC. It will take a couple of minutes before this process is completed. Now you can see that your VPC has been created successfully, so let's explore some of the features. You have got one VPC; two subnets, one public, one private; one network access control list; one internet gateway, which is responsible for providing internet access; and two routing tables, so you need to have a little bit of networking knowledge here. There is also one Elastic IP, which is associated with the NAT instance, one security group, and one running instance, and that instance is the NAT instance itself. Now we will try to launch a couple of EC2 instances into the VPC. The process is very much similar to what we saw when we created EC2 instances in the normal network, and you can see the same console has been opened. Launch the instance, select the AMI, and here you have to select EC2-VPC: if you select EC2-Classic, the instance is not going to be part of your VPC. Then these are the two networks that you have, one public, the other private. Let's go with public first and select continue. Again, if you want to change anything you can; otherwise click on continue. Name it "my first VPC server", choose the key pair that you want and the security group, say vpc-group, "group for VPC", add a couple more rules, that's enough, and continue. This is the summary; if you still want to change anything you can, otherwise click on Launch. This EC2 instance is going to be launched in your VPC, not in the EC2-Classic network which is shared by other AWS users; this is specifically your private network.
Once your server is launched, you can see a VPC ID has been assigned here, which means this server has been launched under that particular VPC. Now let's go back to Elastic IPs, and here you can allocate a new address, making sure the new address will be used in the VPC. Once you get this address, associate it with the newly launched VPC server, and then you can access your newly launched VPC server using this Elastic IP. This server will also have access to the other public services, like Amazon S3, RDS, and the rest. So here you can see that the Elastic IP has been assigned to your server. Let's try to access it: provide the key pair name, and open. You get connectivity, and you are able to log in to your server. Now let's try to ping a site from this server, and that is working on this particular server. So that was the case where I launched an EC2 instance in the public subnet. Let's take one more example, and this time I'm going to launch the server in the private subnet. Again you have to select EC2-VPC, and this time select the private subnet; the second VPC server will be launched in the private subnet, which means it will not be directly reachable from the internet. Once done, you can see that your second server is running now; it's just initializing its checks, and that will take a couple of minutes. All right, so we are good to go. This is the VPC server which we created in the private subnet, and here you can see it contains only a private DNS name; this is the private IP that has been configured for this particular instance. If you go back to your VPC console and refresh, you can see you have three instances running. Now let's take a look at the VPC as defined here. This VPC has one public subnet and one private subnet; this is the main routing table, and this is the default access control list. So again we have two subnets, one private and the other public, and then there are the routing tables.
You can see the second routing table has one subnet associated with it, and that is the public subnet, this one here. Any outbound request from that subnet is routed to the internet gateway, which means the instances in that network are able to reach the public services. But the main routing table here does not have the internet gateway defined, so requests from the private network are not routed straight out to public services such as Amazon S3 or RDS; in this layout, the private subnet sends its outbound traffic through the NAT instance instead. Then there is the one internet gateway, and the DHCP options set: this is the DHCP configuration which is responsible for providing the private IPs. These are the two Elastic IPs which we have used so far, and this is your default network access control list, with a couple of rules defined in it. And there are the security groups, the default one and the one created by us, which work very much like what we have seen in Amazon EC2. So the whole idea here is to understand what VPC is. Again, a VPC gives you space in the cloud which is your private space, and in that space you can launch your services the way you launch them in the flat network or the default network. In the next course we will talk about VPNs, and at that time I am really going to show you the real power and real strength of VPC: how you can take advantage of it and how you can extend your existing local network to the cloud without actually connecting the cloud servers through the public interface, but rather connecting with the private interfaces using the VPN. Thanks for watching, guys; hope you have enjoyed it. Hey guys, so we have reached the end of this journey. But you know what, this is not the end; this is the beginning of many more journeys to walk together. To summarize what we have covered in this particular course: I have tried to explain the various services being offered by Amazon
Web Services. That includes how to configure and create your virtual servers, how to configure storage in the cloud, and how to take advantage of the content delivery network service provided by Amazon, which is called Amazon CloudFront. I also explained how to monitor your services using Amazon CloudWatch, and how to get an automatic alert, in case anything goes wrong, using Amazon Simple Notification Service. Then we saw how to configure an Amazon RDS database; in that example, we configured a MySQL database. We also talked about how to configure a load balancer to make your infrastructure highly available, and how to scale your instances up and down as per the current situation. And we talked about how to configure a highly available domain name system, where you can take advantage of functionality like the routing policies, which include the latency, weighted and failover policies; for example, you can make services such as a load balancer highly available by using the failover policy of Route 53. So that was pretty much it for this particular course, and I really appreciate your feedback. In case you have any questions or any concerns, I am always available. I have included 5 live sessions in this particular course, in which I am going to demonstrate some of these services, so in case of any doubt you can join those live sessions. And if you feel that you need more live sessions for any particular service, you can let me know and I'll schedule more live sessions for you. Thank you very much for enrolling in this course; I really appreciate it, and I hope I have made a successful attempt at sharing the knowledge. In the next course on Amazon Web Services, I'll come up with more advanced topics, and I bet at the end of that course you guys will be professionals as far as Amazon Web Services is concerned. Thank you very much, guys, and once again, I really appreciate it.