We are going to talk about what infrastructure as code is and how Terraform helps us implement it. I think Rithik has pretty much covered my introduction. I am currently working as a solution architect at InfraCloud Technologies, predominantly as a DevOps architect on various cloud-native initiatives, and I am also an AWS Community Builder. I love to give back to the community through my technical blogs, and I also answer DevOps- and cloud-related questions on Stack Overflow. On my GitHub I provide workable POC solutions in the infrastructure-as-code and DevOps areas. I am on LinkedIn, Twitter, and GitHub, I have a profile on dev.to where I write technical blogs, and I also have a website where I publish my work; everything is available there. This slide deck should be available at the end of the session, I believe. So let's move on to the discussion of infrastructure as code and Terraform. Here is the high-level agenda I am planning to cover today: we will go through an IaC overview and a Terraform introduction, then discuss various Terraform concepts and terminologies, then jump to the hands-on part of creating AWS resources using Terraform, then have a short discussion of Terraform best practices and conclude, and finally we will have a Q&A session. So let's start. First, what is infrastructure as code? As the term suggests, it means defining an infrastructure in a coding format. Why do we go for infrastructure as code? In the traditional days, every piece of infrastructure was created manually, even a simple server.
It could take a week or more just to get through all the approvals, so deploying a single piece of infrastructure was a slow process. That is when infrastructure-as-code tools came into the picture: you define the entire infrastructure, with all its network connectivity and system configuration, in a coding format, and it can be deployed to a cloud provider or to on-premises servers. Using infrastructure as code also helps fix configuration drift if it happens. If you are running a server and someone accidentally deletes some configuration, that server may go down; with IaC we can keep the configuration in the desired state at all times, so configuration drift is solved as well. There are many infrastructure-as-code tools available now in the DevOps and SRE space. Some of the tools I have mentioned here are Terraform, Ansible, Chef, AWS CloudFormation, Azure Resource Manager, GCP Cloud Deployment Manager, and similar tools. Programming-language-based infrastructure as code is also becoming popular, with CDK for Terraform, AWS CDK, Pulumi, and others. So that is the basic idea of infrastructure as code, and when it comes to IaC tools we can split them into two types: one is mutable infrastructure-as-code technologies, and the other is immutable.
The major difference between mutable and immutable is this: with mutable infrastructure, the infrastructure is treated as pets. If a configuration drift happens on one server, or on a few servers, then when the agent-based IaC tool runs on those instances or virtual machines, the configuration that was deleted or went missing gets fixed in place. Ansible and Chef can be used for that kind of mutable implementation. With immutable infrastructure, the infrastructure is treated as cattle: if a configuration drift happens, you do not fix that particular system alone; you redeploy the entire infrastructure. Terraform and AWS CloudFormation are examples of immutable infrastructure-as-code technologies. So that is a high-level overview of infrastructure as code. Among these tools, Terraform is one of the most famous: it supports many cloud providers and on-premises providers, and it is a widely adopted IaC technology now, so let's look at a short introduction to Terraform. Terraform was created by a company called HashiCorp, which has multiple other products as well, such as Vagrant and Consul. As I mentioned earlier, Terraform supports many cloud providers, including AWS, Azure, IBM Cloud, and Google Cloud, and on the on-prem side it also supports providers such as VMware. Terraform manages these external resources, whichever cloud or on-prem platform we are talking about, through an abstraction called providers. Terraform has wide community support, and it also has a
library of functionality for each supported platform's infrastructure resources; these libraries are what we call providers. HashiCorp maintains an extensive list of official providers, and Terraform can also integrate with community-developed providers. In some cases, if a provider does not exist but the resource can be created via an API call, then anyone can develop a community provider for it; that is custom provider development, which is a somewhat advanced topic and out of scope for this session. As I mentioned, Terraform is one of the most extensively used tools in the infrastructure-as-code and DevOps automation space. That is the introduction to Terraform, so now let's move on to the basic concepts of how Terraform works. One takeaway from this session is that anyone who is starting to learn Terraform, or starting to build infrastructure with it, should be able to create infrastructure on at least the AWS provider by the end. It will be a very basic, beginner-level session; we will not cover advanced concepts. So let's dive into the Terraform concepts and see how Terraform works. In Terraform, we define the infrastructure using the HashiCorp Configuration Language (HCL), and that definition drives the respective provider calls that create the infrastructure resources on the target platform. Internally it works like this: each cloud provider exposes an SDK and API for accessing and managing its resources; AWS, for example, has its own SDK and API. That target API is wrapped by the Terraform provider, which is developed in Go, and whenever that functionality
of creating resources on a specific provider runs, Terraform interacts with that provider's API to actually create the resources. A Terraform workflow basically consists of three stages: first we design the infrastructure and write it as code; then we run a plan, which shows an indicative view of what that infrastructure is going to look like in the respective cloud provider (I will be showing examples of how to write, plan, and apply all of this); and once you are satisfied with the plan, you apply that infrastructure onto the cloud provider. Terraform has extensive community support and detailed documentation for everything; in each slide I have provided links to the relevant documentation, so once you want to deep-dive into a Terraform concept, you can just follow the link and read through it in detail. That is the basic concept of how Terraform works: the underlying layer interacts with the respective cloud provider's API to manage resources. Terraform ships as a single executable, itself named terraform, which we call the Terraform CLI, and it provides various commands to manage our infrastructure; some of them are on this slide. There is a command called terraform init: after writing the HCL code for a particular resource definition, we run init, which looks at which providers are defined in the Terraform files and downloads them to the user's local machine.
Then the terraform validate command does a syntax check of the Terraform code we have written, and there is another command, terraform fmt, that formats the code. Once the code is confirmed, we can run terraform plan, which, as I already mentioned, shows an indicative view of the infrastructure we are going to create on the respective provider; it also shows exactly what changes will be made, whether resources are going to be added, changed, or destroyed, per the definition. Then comes the apply command: once we are fine with applying that infrastructure onto the cloud provider, we run terraform apply and the infrastructure gets deployed. And once everything is done, there is the destroy command if you want to tear down the entire infrastructure defined in that particular Terraform code; for that we use terraform destroy. Using a Terraform subcommand with the help option, we can also explore the options for each of these commands and learn more about them. Moving on: as I mentioned, HCL is the language in which we define our infrastructure, and it is organized into various blocks, which we will also discuss in detail. One example is given here. It starts with a block type, in this case a resource block for an AWS VPC we are going to create. The resource type is aws_vpc, and next to it is the identifier for this particular VPC component within the Terraform code; this label is not the name that will be created in
the cloud provider; rather, it is the internal reference to that particular resource that we use within the code. Inside the block we then have various argument references; the Terraform provider mirrors the AWS implementation exactly, with the same required and optional arguments, so we define those arguments according to our needs. We can also add tags to the resources so that they can be easily identified as a group in AWS. That is, at a high level, how to define Terraform code in HCL; as I mentioned, I have provided a documentation reference, so whoever wants to deep-dive into this code can go through it. Since this session is more hands-on, I will just run through the concepts at a high level. Now, the basic components, or terminology, of Terraform. As I mentioned, in Terraform you define resources. Resources are, as the name suggests, the components of the cloud or on-prem provider: a virtual machine instance, or in GCP a compute instance, the networking components, and so on; whatever gets created on the cloud or on-prem provider is a resource. The provider is, as I mentioned, the cloud or on-prem platform integration that implements the underlying resource-management APIs, so that resources can be written in Terraform code and deployed to that platform. A module is a way of modularizing such resources to avoid code redundancy.
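To make the resource block walked through a moment ago concrete, here is a sketch of what it might look like; the label app_vpc is only the internal reference within the code, and the CIDR range and tag values are illustrative placeholders, not values from the slides:

```hcl
# Sketch of an aws_vpc resource block. "app_vpc" is the internal
# Terraform identifier, not the name created in AWS; CIDR and tags
# here are illustrative placeholders.
resource "aws_vpc" "app_vpc" {
  cidr_block = "10.0.0.0/16"   # required argument for aws_vpc

  tags = {
    Name        = "demo-vpc"
    Environment = "dev"
  }
}
```

Other resources within the same configuration would refer to this VPC as aws_vpc.app_vpc, which is exactly the "internal representation" role described above.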
For example, if you wanted to define EC2 infrastructure in AWS, you could define a module for it, and that module can be reused multiple times to create environment-wise infrastructure by passing different variables to the same code, so there is no code redundancy; that is what modules give us. Provisioners relate to a facility each cloud provider offers to run some initial commands when a VM is created: in AWS it is called user data, in GCP it is called metadata, and in Azure it is also called user data. Through this mechanism we can supply basic commands to be executed once the VM is ready and spun up on the provider; this is useful for installing basic packages that need to be available as soon as the VM becomes accessible. Terraform also has several kinds of variables. An input variable is like a global variable we declare, with a default value, and that default can be overridden through a .tfvars file; as I mentioned earlier, if you have a module and you want to override the values defined in its variables section, you can pass a .tfvars file for that particular infrastructure with values according to your needs. The output section is for values that are only known after the infrastructure is applied: for example, if you want to print the S3 bucket domain name, that is known only after applying to AWS, so we print it as an output and can immediately access that bucket once the infrastructure is completely
created. Similarly, we also have local values, whose scope is limited to the particular Terraform module they are defined in; we can use locals to hold intermediate values needed while building our infrastructure. So those are the basic concepts: resources, providers, modules, provisioners, and the variable types. Next, two important files: the Terraform state file and the Terraform dependency lock file. Whenever we plan or apply infrastructure, Terraform maintains a state file in which it records the resource names, IDs, and even secrets of that infrastructure. The state file is how Terraform knows what infrastructure got created on the provider, so it is one of the key components of Terraform, and it must not be exposed to the outside world, because it can contain sensitive data. Whenever we run terraform destroy or terraform plan, Terraform consults the state file and changes or destroys the infrastructure according to that state. The dependency lock file is like a version-constraint record: in the provider block we have version constraints for the providers and for Terraform itself, and the lock file records the exact version locking of all the providers defined in that configuration, so that the provider versions do not vary when the same code gets deployed elsewhere.
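As a sketch of the variable kinds just described, something along these lines; all names and defaults here are illustrative, and the output assumes a hypothetical aws_s3_bucket resource named "demo" defined elsewhere in the configuration:

```hcl
# Input variable with a default that a .tfvars file can override,
# e.g. a dev.tfvars containing:  instance_type = "t3.small"
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

locals {
  # Intermediate values, scoped to this module only.
  common_tags = {
    Environment = "dev"
    ManagedBy   = "terraform"
  }
}

output "bucket_domain_name" {
  # Known only after apply; assumes an aws_s3_bucket resource
  # labelled "demo" exists elsewhere in this configuration.
  value = aws_s3_bucket.demo.bucket_domain_name
}
```

You would pass the override file on the command line with terraform plan -var-file=dev.tfvars, which is the override mechanism described above.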
It is also important to check this lock file in as part of our source control, so that the provider version locking is tracked there as well. So that is a high-level look at the Terraform infrastructure components, concepts, and terminology, and now let's dive into the demo part, in which I am going to cover how to install Terraform and how to configure AWS credentials, and then help you create some infrastructure at a beginner level; that is what I am planning to do now. For this workshop I have created a folder and checked in all the module files there, so let's go through the setup steps. As I mentioned, for any cloud provider, if you want to create infrastructure, you first need to install that provider's command-line tool, to get the provider-related authentication details configured on your system. For AWS there are multiple ways now supported by Terraform, so let me show you. When we set up authentication for AWS, there are multiple ways to pass the authentication details to Terraform. In earlier versions of Terraform you had to explicitly define the authentication details as part of the provider configuration, but now Terraform understands them by default through the mechanism implemented by AWS: first it looks for authentication details in the provider parameters, then in the environment variables; for AWS these would be the access key ID, the secret access key, and, if needed for a particular session, a session token, all of which you can get from your AWS account. The third option is a shared
credentials file, and a few other methods are mentioned as well. For this demo I will use the shared-credentials mechanism: I have installed the AWS CLI locally using the method documented here. You can see the installation command, and then we use the aws configure command to enter our access key ID and secret access key. Since I have already added mine, it shows the existing values; anyone doing this for the first time will see no values and should enter the access key first and then the secret key. For the next value, provide your preferred region, where your resources should be deployed, and the fourth is the output format, such as JSON or YAML. So that is the AWS configuration. And what happens when we do this? The details are written to files under the .aws directory in the home path: the credentials we enter go into the credentials file, and the region and output format go into the config file. To confirm it works, once this is configured we can do a quick check with aws ec2 describe-instances; since I do not have any instances in my AWS account, it returns nothing, which confirms the AWS configuration works fine. So that is the second part, configuring the AWS credentials: I configured AWS using the aws configure command and added my credentials. Next is the installation of Terraform. Terraform has official documentation for this, and it now supports package-manager-based installation, so we can simply copy and paste the commands to install Terraform locally.
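For reference, after running aws configure, the two files under the home directory look roughly like this; the key values below are placeholders, not real credentials:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = us-east-2
output = json
```

Terraform's AWS provider picks these up automatically from the default profile, which is why no credentials need to appear in the Terraform code itself.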
The documentation also covers CentOS, Fedora, Amazon Linux, Homebrew, and a few other operating systems. Since I am using Linux, I installed Terraform as part of the demo preparation, and it is already on my system. All these steps are documented clearly in the readme file, so whoever wants to get started with Terraform can use it to understand more. That is the basic setup: AWS configuration and Terraform installation. As I mentioned, once Terraform is installed, typing terraform shows all the commands we can use, and I have installed the latest version, 1.2.2, which can be checked with the terraform version command; these are the things I was talking about regarding getting to know each of Terraform's subcommands. So that covers installing Terraform and configuring AWS credentials; any questions, please post them in the chat or in Slack so I can see them later and reply. Now let's jump to creating our first Terraform code. As I mentioned, Terraform has detailed documentation for everything, so I will be more or less referring to the official documentation for whatever demo I am going to do now. For Terraform we need a provider definition and some basic components, so let's deep-dive into defining those. I am going to create one directory just for the demo, and in it I will create the Terraform HCL file, which will be called main.tf. I will open this in VS Code and create the main file. I have created the main file, so whoever wants to
create their first Terraform code can just do a Google search. Since we are doing an AWS Terraform implementation, I search for "AWS Terraform provider", which takes us to HashiCorp's official AWS provider documentation, which shows how to write the configuration for Terraform 0.13 and later versus 0.12 and earlier. Since I have the latest version of Terraform, I will use the newer form. As you can see, this definition has a terraform block with a required_providers section, plus the provider section for AWS; we need the provider section to deploy our resources to the cloud. Then I will take the example of creating an EC2 instance, so let's search for "AWS Terraform EC2", which takes us to the aws_instance page of Terraform's AWS provider documentation, where examples are defined for various pieces of AWS infrastructure. If you want, you can use any of the examples mentioned there; otherwise, I have a basic code sample for the AWS infrastructure in the workshop module, and I will use that here. We are using us-east-2, so I will change the region to us-east-2; the instance key name and such I am not going to define now. This is the bare-minimum example, I would say, for Terraform code. Let's run it: we have the provider and resource definitions declared, so I run terraform init, which downloads all the necessary packages required for this code to run into my local machine. Now the terraform init command has completed, and you can see in its output what init actually does.
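For reference, the bare-minimum main.tf I described looks roughly like this; the AMI ID below is a placeholder, since AMI IDs are region-specific, so look up a current one for your chosen region:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"   # any 3.x release of the AWS provider
    }
  }
}

provider "aws" {
  region = "us-east-2"
}

resource "aws_instance" "sample_ec2" {
  # Placeholder AMI ID; substitute a valid AMI for your region.
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"

  tags = {
    Name = "sample-ec2"
  }
}
```

These three blocks, terraform, provider, and resource, are all you need for a first run of terraform init, plan, and apply.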
As I mentioned earlier, while doing the init command, Terraform initializes the backend locally, and it creates the .terraform.lock.hcl file, which contains the provider version locking; every detail of this is visible in the init output. A slightly advanced concept I can mention here is that there is also a choice of where to keep the backend: we can keep the Terraform state locally, or we can store it in remote storage such as S3, in which case we configure the Terraform backend as remote. That is just high-level information; for our implementation, init simply initializes the backend locally and downloads the files needed for running the Terraform code. It creates a .terraform folder, and you can see it downloading the AWS provider package from HashiCorp and recording which versions are locked in, all during the init command. So that is the init command; once init is done, we can run the plan command and see what happens. While doing plan, Terraform looks for the credentials and authentication details of the respective provider, and then shows us the tentative infrastructure that is going to get created. You can see it here: the resource we are going to create is an aws_instance, which we have internally named sample_ec2, and the values we defined for creating it, like the AMI details, are shown. And, as I mentioned earlier, it shows the summary: one resource to add, zero to change, and zero to
destroy. That is the tentative plan it shows, and next we will try to apply it, so that the tentative infrastructure gets created in my cloud account. Terraform now asks for confirmation that we want to apply this deployment; I type yes, and it starts spinning up the instance in whatever AWS cloud account I have. You can see the sample instance getting created; in the meantime I will go to the AWS console and show you the instance being created there. Since it is being created, it is in the pending state, and you can follow the progress of the resources being created in the console itself. Now the resource is added: you can see "Apply complete" with one resource added, and you can come to the console and see that the resource is added here. So that is the very basics of creating AWS infrastructure with Terraform. Now let's go back to the code we had. In our code we defined the AWS provider, indicating that the Terraform package needs to be downloaded from the hashicorp/aws source; if at all there were a custom provider created for our own purposes, then we would have to set the source to that particular provider's name, pointing at its path in our own GitHub repository or wherever it lives. We are also mentioning that the provider version should stay within major version 3 and not exceed it, so what Terraform does is download the latest release available in the 3.x line, which was 3.75, as we saw during
the terraform init run. Then, in the provider part, we specify the region, and there are a few other arguments too: if you are managing multiple AWS profiles, there is an argument for the profile name. By default AWS uses the profile named default, and that is what gets used if you do not specify anything; but if you have multiple profiles for creating infrastructure in some other account, or various profiles for various environments, then you mention the profile name accordingly in this profile argument. Then we have the resource definition part, where the aws_instance resource is defined: we have provided the AMI, which is the EC2 AMI, we specify the instance type for creating this instance, and then we add tags. This is also where the user data I was talking about goes: that script will be executed when the VM gets created. I also have a more detailed example that creates an Apache server, a larger example I built for this demo, but the code I am showing here is enough for anyone wanting to start with AWS: they can simply take these three block definitions for their learning purposes and, as they explore, add more resources to it. To summarize what they would need: AWS credentials and account access, the Terraform CLI installed locally, and the Terraform documentation to get the basic code they need to create infrastructure. HashiCorp also has a portal called learn.hashicorp.com where these details are more or less covered as well, but I wanted to share everything here for beginners' benefit.
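As a sketch of the profile and user-data arguments just described; the profile name, AMI ID, and script contents are illustrative placeholders, and the Apache-style script is only similar in spirit to the larger demo example mentioned above:

```hcl
provider "aws" {
  region  = "us-east-2"
  profile = "dev"   # omit this line to use the default AWS profile
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t2.micro"

  # Runs once at first boot, before you first log in to the VM.
  user_data = <<-EOF
    #!/bin/bash
    yum install -y httpd
    systemctl enable --now httpd
  EOF

  tags = {
    Name = "web-server"
  }
}
```

The user_data heredoc assumes an Amazon Linux style AMI (yum); on Ubuntu-based AMIs you would use apt and the apache2 package instead.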
So that is the high level, and these are the very basic code blocks for anyone just getting started with Terraform. That is about it for the live-demo part; I will also show you the other parts of the code I have created, and let's see what other topics I am planning to cover in this demo. We have covered writing Terraform code for AWS, and we walked through creating an EC2 example in our live demo. I have also shown you the commands useful for getting started with Terraform: terraform init initializes the backend and does the version locking in the lock file; terraform plan shows the tentative infrastructure along with the additions, changes, or destructions that the code would make on the cloud provider; and terraform apply actually applies that infrastructure onto the cloud. Now I have made one change in this file: I have added a profile argument. Let's run plan again and see what it shows. You can see that it reads the state from the Terraform state file and reports that, since we have not changed anything related to the infrastructure itself, no changes are to be made. One part I missed mentioning earlier: after apply, Terraform creates the state file, and as I explained in the introduction, it holds the details of all the resources associated with this deployment: the instance ID, the EBS volume ID, and which VPC and subnet this particular instance was deployed onto,
Terraform uses the state file to understand what changes have been made to the infrastructure. Next we will try to destroy that infrastructure — since this is a demo, let's also see the steps involved in destroying. For destroy, Terraform again asks for confirmation: it uses the Terraform state file to see which resources are associated with this particular run, and it shows the plan — here, that one resource would be destroyed. Once we confirm with yes, the instance we created is destroyed, and we can see in the console that it is shutting down; later it will move to the terminated state. Anyone can post questions about this and I will answer them at the end of the session. Since the infrastructure is destroyed, the state file is now empty, and the old state is moved into the backup file. As I mentioned, the state file can also be stored in S3. The advantage of keeping the Terraform state file in an S3 bucket comes in multi-developer environments: if multiple developers run an environment creation against the same code at the same time, the lock and the state file are used to prevent that, indicating that infrastructure creation is already in progress. That is why, for a multi-developer environment, it is better to keep the state in an S3 bucket or a similar remote backend. Now that the instance is terminated, let's move on to the other examples I have in this deck — these are the examples I wanted to explore more.
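A remote backend like that is declared in the terraform block. This is a minimal sketch: the bucket, key, and table names are placeholder assumptions, and the DynamoDB table shown is the classic locking mechanism for the S3 backend.

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # placeholder bucket name
    key            = "demo/terraform.tfstate"      # path of the state object in the bucket
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"        # cooperative locking table
    encrypt        = true
  }
}
```

With this in place, a second developer running apply against the same state sees a lock error instead of silently racing the first run.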
I have an example for modularization, one for S3, and one full example of creating an Apache web server, so let me show you those. As I mentioned earlier, this is the Terraform demo code I created — the same code I ran in the live demo. Terraform has a concept called modules, and modules are useful for making resource code reusable. Take the EC2 instance example: before moving on to modularization, the demo code sat entirely in the main file, but as a best practice it is recommended to split it into separate files — the provider definition goes in provider.tf, the resource definitions go in main.tf, and similarly the outputs go in outputs.tf. Since that was a demo I had not split the files, just to show the minimum needed to create the resource, but in my other EC2 example I have modularized everything into separate files: main.tf holds only the resource definition; in outputs.tf I print the instance ID and the public IP of whatever gets created; provider.tf holds the provider definition; and the variables file holds what is needed, for example the key used for SSHing into the EC2 instance. I will quickly apply this in the meantime. Let's run terraform init — since I already ran terraform init earlier on my machine, it reuses the local lock file and shows that Terraform has been initialized, reusing the previously recorded provider versions.
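The file split described there might look like this minimal sketch; the AMI ID and resource names are placeholder assumptions:

```hcl
# provider.tf — provider definition only
provider "aws" {
  region = var.region
}

# variables.tf — inputs
variable "region" {
  type    = string
  default = "us-east-1"
}

# main.tf — resource definition only
resource "aws_instance" "demo" {
  ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type = "t2.micro"
}

# outputs.tf — values printed after apply
output "instance_id" {
  value = aws_instance.demo.id
}

output "public_ip" {
  value = aws_instance.demo.public_ip
}
```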
On a first run it would download everything needed. Then I run terraform plan, which shows what would be created, and then apply. One quick note on options: terraform apply and destroy accept an -auto-approve flag, and the caution is that it should not be used in production-like environments — it exists only to skip the yes prompt when running the command. Since this is a demo I am adding the flag so it does not prompt, and it starts creating the infrastructure. Let that run and we will come back to it; meanwhile, let's move on to the modularization example. I took the same EC2 instantiation code I created for this demo and turned it into a module. The module contains the same set of files, and to consume it we define a similar structure at the root — main, provider, and so on (since this is a demo, not all of those files are added here). We call the module using a module block: in that block we give a label indicating it is for the EC2 instance, and we source the module we created by providing the path to its code. The module needs the region as a variable, so we pass that variable in the module block as well. Let's apply this too. The earlier EC2 example has completed, and the code I showed printed the instance ID and the public IP.
Now we move on to the module part. I go to that code and run terraform init, and you can see it initializes the modules as well as the backend, because the module contains the actual implementation of the code. For demo purposes I will remove the .terraform directory that was created earlier and rerun terraform init. You can see it initializing the module — ec2-instance, available in the modules and create-instance directory — then initializing the backend, creating the lock file, and downloading the necessary HashiCorp provider packages into .terraform. Once it is initialized we run terraform plan; it shows that one resource will be added, and we do an apply. While that runs, let's revisit the code once more: as I mentioned, we source the module path using the module block in main.tf, and Terraform goes into that create-instance module directory and uses its code to create the EC2 instance. We can have multiple modules and multiple module definitions — modules for the VPC, the subnet, and everything else — and call them as many times as needed, so code redundancy is avoided. That is the high-level idea behind modules. The apply has finished, and we can see the new instance in the console; since we also ran the earlier example, both show up, and the newly created instance is the one whose ID ends with d1f — that is the instance that was created.
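Consuming the module as described could look like this sketch; the module path, label, and output name are demo assumptions that must match whatever the module itself declares:

```hcl
# Root main.tf: call the local module by path.
module "ec2_instance" {
  source = "./modules/create-instance"   # where the module code lives
  region = var.region                    # variable the module expects
}

# Surface a module output at the root (assumes the module defines it).
output "instance_id" {
  value = module.ec2_instance.instance_id
}
```

The same module block can be repeated with different labels and inputs, which is how the code redundancy mentioned above is avoided.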
At a high level, that is how to convert plain Terraform code into modules, and that wraps up Terraform modules. Next, I was talking about the various sections like variables; for that I have a sample for S3 bucket creation. This is simple, plain S3 bucket code based on what is available in the Terraform documentation. In main.tf we define the S3 bucket and provide the bucket name; the S3 bucket needs ACL details, so we define those as well; and versioning is not mandatory, but for demo purposes I have added a versioning block too. I have also created an output file to show the S3 bucket domain name and the S3 bucket ID. In the provider section I have defined the provider and the minimum Terraform version required — the required_version setting indicates the minimum version of Terraform anyone needs in order to use this code. And in the variables section we have the region. There are so many questions coming in — maybe we can start taking them one by one? Actually, I haven't finished yet, so let's complete the demo and take them at the end; I just wanted to check whether the stream was active, which is why I came back here. [Host: from backstage we can see a lot of questions, and over 600 attendees have joined.] I just wanted a quick check on that, because it felt like I was the only one talking. So, back to this S3 example: in my high-level overview I showed the variables block and output variables, and in this example I have implemented something like a local-variable
mechanism, because in AWS an S3 bucket name must be globally unique — it cannot already exist anywhere else. What I am doing in this code is creating a random string and appending it to the S3 bucket name, so that the name is unique to this deployment alone. For that I have defined a resource called random_string. This is one of the utilities provided by Terraform — I believe it is only available from version 0.13 onwards, not in earlier versions; you can check in the Terraform documentation that the random_string resource is available. To make the name randomized and unique to our deployment, I created a locals block in which I define the bucket name: I take a common prefix, sample-s3-bucket — since this is a demo — and append the string produced by the random_string resource, so the S3 bucket name is unique to us. That is the idea behind the locals block, and it also shows how to make use of local values in Terraform: the bucket name is then referenced in the S3 bucket resource as local.bucket_name. That is the high level; let's apply this resource as well. So far we have covered creating an EC2 instance from scratch using the Terraform documentation and the basic components needed for it, then modularizing the same EC2 code and using the module block to create the instance, and now this example of locals in the S3 bucket code. I move to the S3 bucket directory and do the usual procedure of running terraform init again; since the lock file was generated earlier, it uses the same lock file and
shows the message that it is reusing the lock file. Then I run terraform plan — it shows the tentative infrastructure it is going to create — and then apply with -auto-approve; as I mentioned earlier, be cautious about using auto-approve for apply and destroy on real environments, but since this is demo code I am adding it. First it performs the random string creation, and then it shows an error: an S3 bucket name must not contain special characters other than dots and hyphens. Since the random string was generated with arbitrary special characters, Terraform complains that only characters like hyphens and dots are allowed, so we need to change our code. These options can all be explored in the documentation, which I will show you as well: we indicate to random_string that special characters are allowed in our string, and we can also specify exactly which special characters may appear. Since per the naming convention only dots and hyphens are allowed, I permit only those special characters in the S3 bucket name randomization. I save the file and rerun — I also remove the state file, because the state file contains the previously generated string. Now we can see that the randomization produced a string that is not purely alphanumeric but contains only valid characters, and with that update done, the S3 bucket starts creating; the bucket name begins with sample. Let's do a quick check on that bucket using the AWS CLI — I will just grep for the name, and it shows the S3 bucket there.
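The fixed version of that pattern might look like this sketch; the prefix and resource names are demo assumptions:

```hcl
# Random suffix for the bucket name; restrict special characters to the ones
# S3 bucket names actually allow.
resource "random_string" "suffix" {
  length           = 8
  upper            = false   # bucket names must be lowercase
  special          = true
  override_special = "-."    # only hyphens and dots are valid here
}

locals {
  # Unique bucket name: demo prefix plus the random suffix.
  bucket_name = "sample-s3-bucket-${random_string.suffix.result}"
}

resource "aws_s3_bucket" "demo" {
  bucket = local.bucket_name
}
```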
The bucket is created in our AWS account. Let me quickly run through this example once again: we created an S3 bucket resource in the AWS account along with the other resources needed for it — in this one, the ACL and versioning are also defined as resources — and for the S3 bucket name we made use of a local value in Terraform, creating the random-string mechanism and then creating the bucket with the resulting name. That is the high level of using locals in a Terraform file. One other thing I want to show quickly is what happens when we make a change. I will change the versioning status string to disable versioning and see what happens: I run terraform plan again, and it shows that one resource would change — the versioning resource — but the value name must be something else, so let's quickly refer to the Terraform documentation page for AWS S3 versioning. This is how anyone working with Terraform should proceed: whenever you face an error, the documentation should be the first place you refer to. And here is how to read it: there is an Example Usage section showing the particular block, and then there are the Argument Reference and Attribute Reference sections — arguments are what you add inside the resource, while attributes are the outputs that come out of it. For versioning we search for status — okay, it turns out that change would recreate the entire resource, so let's move on to changing the S3 bucket ACL instead.
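In the newer AWS provider style, versioning is its own resource referencing the bucket; this sketch assumes the bucket from the earlier example:

```hcl
resource "aws_s3_bucket_versioning" "demo" {
  bucket = aws_s3_bucket.demo.id

  versioning_configuration {
    # "Enabled" turns versioning on; "Suspended" stops creating new versions.
    status = "Enabled"
  }
}
```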
That way I can at least show what a change looks like — what happens when we modify the environment. One minute, sorry for the delay. What I will do is remove the ACL entirely, because per the documentation the acl argument is optional, just for demo purposes, so that we can see what happens. The default for S3 is a private bucket anyway, so I thought it might not show any further changes — but let's see. It is showing one change now: as part of the plan it reports one to change. This is what the plan shows whenever there is a change to the infrastructure; in our case it is going to change this ACL configuration, so it reports that one resource is changing. Okay, it seems something more is required on the S3 bucket side, so I will keep that as a separate topic and show you a different example of changing the environment — but basically, this is how Terraform reports a change, and then it applies it. The next example I want to show is a larger one: creating an Apache web server. This is an end-to-end example that creates a VPC and a subnet, and it also installs an Apache server while creating the instance, so it demos well. Let me go to that Apache server example — for this implementation I have added detailed documentation in the workshop module as well, so that you can have a look into it and make use of that
documentation to learn more about the code. Okay — what I will do first is destroy the S3 bucket I created, then go to the example. The aws-tf code is the example in which you can see the various pieces I have defined. In main.tf — I will show you here — we define the complete network required for spinning up a VM: an AWS VPC; an internet gateway for providing internet access; a public subnet allocated inside the VPC; a route table for that VPC pointing at the internet gateway; and the association of the route table with the subnet. Then we come to the part where we define the infrastructure for creating the instance: I have the AMI, the instance type, and the instance key name defined, I associate the subnet ID created in the definitions above, and I also attach the security groups. About security groups: by default AWS does not allow public access for SSH or to the outside internet, so we have to create a dedicated security group that allows SSH and HTTP access. What I have done is create a separate security group file defining the SSH access and the HTTP access for the particular instance. You can see that inbound traffic is ingress — I have allowed the SSH port 22 connection and inbound HTTP — and for outbound (egress) I am allowing all traffic. That is the security group I have defined; let's quickly apply this code and see what happens. In this example I also made use of a variables file to customize the various inputs I wanted to pass into the infrastructure creation — most of the inputs can be used as they are.
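The security group described there could be sketched like this; the VPC reference and the wide-open CIDR ranges are demo assumptions:

```hcl
resource "aws_security_group" "web" {
  name   = "allow-ssh-http"
  vpc_id = aws_vpc.main.id   # assumes the VPC defined in main.tf

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]   # demo only; restrict in real environments
  }

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"            # all protocols, all ports
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```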
Only the key name variable I have supplied in tfvars style. This is the tfvars mechanism I talked about in the introduction: we can define our own customized variables and use a variables file alone to pass them into our Terraform code. For that, while applying, I pass the -var-file option with the path of the variables file, aws.tfvars, which contains all the customized variables needed for this infrastructure creation. Now I apply, and it uses all the variables from the tfvars file to create the infrastructure. For lack of time I could not show the S3 bucket change options, because that needs some exploration of the S3 bucket settings — which is exactly why, whenever we create infrastructure, we should have all the changes and definitions ready in advance, so we avoid that kind of error; and even when an error does occur, we can explore the Terraform documentation to fix it. Now you can see that this web server example has started creating everything. In the meantime, let me show what is in main.tf: as I was explaining, it has the VPC, internet gateway, subnet, and all the other definitions, and in the instance definition you can see I have defined the user_data section. user_data accepts shell scripting (and for Windows-style systems it accepts PowerShell) — that is the AWS mechanism made use of in this Terraform implementation, in which I install a simple Apache server using four lines of shell script. Whenever the instance is created, that script is executed, and the machine comes up with Apache installed.
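The instance with its user_data bootstrap might be sketched like this; the AMI ID, variable names, and referenced resources are placeholder assumptions:

```hcl
resource "aws_instance" "web" {
  ami                    = "ami-0123456789abcdef0"   # placeholder Ubuntu AMI
  instance_type          = "t2.micro"
  key_name               = var.key_name              # supplied via the tfvars file
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.web.id]

  # Runs once at first boot via cloud-init.
  user_data = <<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y apache2
    systemctl enable apache2
    systemctl start apache2
  EOF
}
```

The variables file is then passed on the command line, e.g. terraform apply -var-file="aws.tfvars".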
Then, once the system is ready for access — and now we can see that the instance is created — what I am going to do is show that this script really was executed inside the instance that got created. Since I configured the VPC and subnet, and I have the instance access key as a .pem file, I will SSH in; this is an Ubuntu server, so I use the ubuntu username. This might feel heavy for some beginners, but whenever you plan to create an instance or any other resource, you can refer back to what I explained in this session — as I said earlier, it can serve as a starting point for beginners writing Terraform code. It asks about adding the host to known_hosts, I accept, and now we are inside the VM that got created. As I told you, a set of scripts ran when the VM was created, and that can be checked in the cloud-init log, available under /var/log. This is just to show that the user data executed successfully: here is the user data file with its set of instructions, and you can see the command to install apache2, which means the instance ran the user-data section and executed the script to install Apache. This shows that we can make use of the user_data section to run whatever bootstrap commands need to be executed on the VMs. Now I will also show the Apache server that got created on this instance — since this is a public IP, we would
be able to access it, and here it is: the Apache default page served from the /var/www/ path. So this is the full-scale example showing how to create an instance with all the networking around it. That is the high level I wanted to cover; there are a few other slides to run through quickly, but we are pretty much done with the demo part. At a high level, these are the best practices: use Terraform tfvars files, as in the example I showed you, for customized variables; keep the Terraform state file versioned, so you can revert to earlier versions when needed; manage the state file in S3 for multi-developer environments, so concurrent runs of the same code are avoided; retrieve state metadata from a remote backend, which goes along with that; and use shared modules and isolate environments, variables, and modules. Those are some of the best practices, and in this deck I have also provided a GitHub reference for this particular best-practices page. And that is it — I hope this session has been useful for people who are getting started with Terraform.