Hi everyone. Thanks for joining. My name is Wilfred, and I work for Moondraft Innovation Labs India, a design and customer experience company. I'm very excited to present a session on Terraform and infrastructure as code. About a year back, I got a chance to work with Terraform on a migration project we supported. The client had to migrate their whole application, both the tech stack and the infrastructure, from their existing environment to AWS cloud, and part of the request was to automate the whole process. This involved around 350 to 400 resources that had to be created on AWS. Initially it was a bit of a challenge for us to create a Terraform configuration for each of these resources, but with time it all made sense, and it became clearer why so many customers are leaning towards infrastructure as code and leveraging tools like Terraform. So here I am, sharing my thoughts on Terraform and infrastructure as code. We'll go through some of the concepts of IaC and Terraform, talk about some of the Terraform commands, and end the presentation with a demo of how a Drupal site is created on AWS with Terraform. The CI/CD tool we'll be using for that is Azure DevOps, and the combination will be Terraform with Ansible. So, what is infrastructure as code? Put in very simple terms, infrastructure as code is the concept of consolidating all the knowledge of a system admin or operator and putting it into programs. These programs help us automate things in a very systematic manner, so the way we do things becomes much more streamlined. And for those of us who are new to the term, infrastructure is nothing but the resources that help us run an application.
For example, for a PHP application you will need a server, PHP installed on that server, and things like a load balancer, a database, and DNS. All of that together is your infrastructure, and without it your application wouldn't work. Now think about manually installing all of this in one environment. It's a redundant task, there's a high chance of human error, and we might get last-minute surprises when we deploy to prod. And that's just one environment. Think about multiple environments like dev, QA, preprod, and prod; you always tend to miss out on things. That's where infrastructure as code comes into play. A few of the benefits you get: easier deployments, good documentation, the ability to add everything to version control and review it with your peers, and of course you can reuse the code and test it as well. Let me take a quick pause and show you how easy this is to run. This is my Azure DevOps, and I have a pipeline which is already set up; we'll go through it in depth in a bit. I have a branch for the Azure pipeline which is already there, and the whole idea is that when I run this pipeline... this is my AWS console, and right now you can see I have zero EC2 instances running. As soon as I run this pipeline, and once it's complete, you'll see a new resource set up in my AWS console. So let's give it a minute and switch back to the slides; we'll come back here in a bit. All right. When we talk about IaC, it's not just one single task; it involves multiple tasks. The first thing you'll have to do is create your infrastructure.
Once the infrastructure is provisioned, you'll have to configure it, and once that's done, you'll have to deploy to that configured and provisioned infrastructure. Because of that, we don't go with just one tool for the whole process; it's a combination of multiple tools, something like Terraform with Ansible, or CloudFormation with Ansible. Tools like Terraform are mainly used to provision the infrastructure, but Terraform won't be able to configure the provisioned infrastructure. For that, we use another tool like Ansible. For this demo I've gone with the Terraform plus Ansible combination, and for deploying, of course, I've used Azure DevOps, though you can use other CI/CD tools as well. That brings us to Terraform. Terraform is open source, created by HashiCorp. There is the HashiCorp Configuration Language (HCL), with a .tf file extension, in which you define your resources. It's a declarative language, which makes the syntax easier for everyone to understand. Let's take a quick look at the architecture on your right. Basically, it involves two components: core and providers. Terraform core works with the Terraform config as well as the state. The Terraform config is nothing but the Terraform files in which you define your resources; it's basically a set of resources and their attributes. The Terraform state is where Terraform stores the state of your current infrastructure, usually a file with the extension .tfstate. It's with that file that Terraform knows what the current state is and what the desired state is that you want to reach. If you make any changes to the infrastructure, it's with that state file that Terraform tells you exactly what modifications you're going to make. Coming to providers: the provider is how Terraform knows which system it needs to connect to.
Terraform can talk to any system which is backed by an API. There are providers like Azure, AWS, and Kubernetes; I think over 3,000 providers are supported currently. That's how you define a Terraform resource: you use the resource keyword, followed by the resource type and a name you choose, and inside that block you define the attributes. We've covered Terraform state, but one thing I didn't mention: by default, the state file gets generated within the project root, with a .tfstate extension. It's not recommended to keep that file in the project root, because it has all the infrastructure details; it might contain your access keys, secrets, and things like that. So it's better to store it remotely; it's always recommended to use Terraform Cloud or an S3 bucket as the backend. In this case I have used S3 as the backend to store the .tfstate file. The provider is defined with the provider keyword, followed by the provider name, version, and region. Let me quickly open this page: if you go to registry.terraform.io/browse/providers, you'll see the full list of providers that are supported. Let me go back. Each resource block is implemented by a provider. If you connect to the AWS provider, AWS has got its own resource types, and the resource block is where you define what type of resource you need. If you go back to the previous slide, on the right there's an example of how you define the AWS VPC resource. One thing to keep in mind: a different provider will have different resource types, so it all depends on which provider you connect to. Each resource type is implemented by a provider. Similar to resources, we have data sources as well.
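Before moving on to data sources, here's a minimal sketch in HCL of the provider block and a resource block like the VPC example on the slide. The region, CIDR range, and tag values are placeholders, not the ones from my project:

```hcl
# Pin the provider and configure it; the region could also come from a variable.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

# A resource block: the resource keyword, the resource type, then a name you choose.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "drupal-vpc"
  }
}
```

The two names after `resource` matter: `aws_vpc` is the type the provider implements, and `main` is the label you'll use to reference it elsewhere, like `aws_vpc.main.id`.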
Data sources are also implemented by the provider, but the difference is that whatever output a data source provides, you can easily use as an input to another Terraform block. We'll go through this in our demo. The next thing is variables and outputs. There are three kinds of values: input variables, output values, and local values. An input variable serves as a parameter to your Terraform configuration. An output value returns values from your resources. A local value lets you assign a short name to an expression. You define an input variable with the variable keyword, the variable name, and the type; you can also provide a default value. Output values are defined with the output keyword, and local values are defined inside a locals block, followed by the value name; a local value is referred to as local.<name>. A few Terraform commands now; mostly you'll be working with these. The first one is terraform init, which initializes the Terraform project; this is where some of the working files get generated. Once that's done, you write your Terraform configuration, and when you run terraform plan, it's like a dry run of what your infrastructure is going to look like; it shows you exactly how many resources you're going to create. When you run terraform apply, that's when the actual plan gets executed. And if you want to destroy or remove any of your resources, you use terraform destroy. All right, let's switch back to the demo. Are you all able to see the screen? Yeah. So there are two main stages, validate and apply, and those two stages are complete right now. Before going into that, I'll quickly show you how Azure is talking to AWS.
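To make the three kinds of values concrete, here is a minimal sketch. The names and defaults are illustrative, and the output assumes an aws_instance resource named drupal exists somewhere in the configuration:

```hcl
# Input variable: a parameter you can set per environment.
variable "region" {
  type    = string
  default = "us-east-1"
}

# Local value: a short name for an expression, referenced as local.common_tags.
locals {
  common_tags = {
    Project = "drupal-demo"
  }
}

# Output value: printed after apply, and usable as an input elsewhere.
output "instance_public_ip" {
  value = aws_instance.drupal.public_ip
}
```

And the usual command workflow around those files looks like this:

```shell
terraform init      # download providers, set up the backend
terraform plan      # dry run: show what would change
terraform apply     # execute the plan and create the resources
terraform destroy   # tear everything down again
```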
I'm using these security credentials: under security credentials there is an access key, and that key I add in my Azure service connection. These are my service connections, and I'm using this particular one. If you edit it, you'll see the key and secret that I'm using, with permission granted for all pipelines. That's how Azure is connected with AWS. Now, going back to the pipeline: it has basically two stages, validate and apply, and it's driven by an Azure pipeline file, which I'll show in a moment. Within these stages are the steps: install, init, validate; these are all Terraform commands. Inside apply, you'll see the same ones again, but here you'll also see the plan, which lists all the resources that are going to get created. If you scroll all the way to the bottom, you'll see there are going to be 18 resources created, along with the outputs. In the apply stage you'll see the same steps again, but this is where your resources actually get created: a VPC, some IAM roles, some policies, a subnet, a security group, the DB instance. Finally it applies all of that and shows that 18 resources got added, along with the DNS name, the IP, and other details. Now, this is what Terraform does: the resources got provisioned, but the provisioned resources are not configured yet, and that's not enough to spin up a Drupal site. So once the provisioned resource is initialized, we need to hook in another tool like Ansible to configure these dependencies. Before that, let me just go to AWS and refresh. Now you'll see a new Drupal instance got created, and if I go inside it, you'll see the VPC ID, the networking details, and everything.
All of this was created with that Terraform code. The most important thing here: if you go inside Actions, there is something called user data. User data is where you tell the resource that when it's getting initialized, it has to run these tasks. In this case, I'm using a cloud-config template for that. It installs packages like Git and Python, sets up an Ansible user, which I'll be using to SSH to the server, and writes some files; it's basically installing Drupal with Drush. The other important part is that it's also pulling an Ansible playbook from an external repository: the playbook is site.yml and the repository is the Drupal AWS playbook. Let me go into that repository; hope that's visible to everyone. This repository is what's pulled by cloud-init. In site.yml, there are roles for CloudWatch, PHP, Drupal, and Nginx, and each of these roles has its own tasks. If you go inside CloudWatch, for example, and open main, this installs an agent within that EC2 instance to set up logs and push them to CloudWatch Logs. Similarly, there is a role for Drupal where you write all the steps involved in installing Drupal. There's an Nginx role as well: it removes Apache, installs Nginx, and copies a certain configuration to a certain path. All of this gets executed when the server is set up. Let me quickly go inside CloudWatch and reload; let me refresh this. There are two log groups, one for cloud-init; let me take the latest stream. These logs, whatever you're seeing here, are exactly what's driven by Ansible. You'll see it installing all the PHP dependencies, the Drupal-related dependencies, Composer; it installs Drupal and finally the Ansible recap shows 26 ok and 21 changed.
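A cloud-config user data script along those lines might look like this sketch. The package names, the user setup, and the repository URL are assumptions for illustration, not the exact contents of my template:

```yaml
#cloud-config
# Install the packages the bootstrap needs.
packages:
  - git
  - python3
  - python3-pip

# Create the user that Ansible will connect as.
users:
  - name: ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash

# On first boot: install Ansible, pull the playbook repository, and run it locally.
runcmd:
  - pip3 install ansible
  - git clone https://github.com/example/drupal-aws-playbook.git /opt/playbook
  - ansible-playbook /opt/playbook/site.yml --connection=local
```

The key idea is that Terraform only hands this script to the instance; cloud-init runs it on first boot, and from there Ansible takes over the actual configuration.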
There is also another log group, for Nginx. I think this is the one. Yeah. These are the logs where, if anyone accesses your Drupal site, you'll see the errors or notices. Coming back to Terraform, let me open the repository which has all the Terraform definitions. The main file here is provider.tf; let me just zoom in. This is where we define the required Terraform version and the provider, in this case AWS. I'm also specifying a region, which comes from a variable. All my variables are defined within variables.tf, each with its own default value. There are some output values as well; you might have noticed the outputs showing in my pipeline output, and those come from this output.tf file. The next thing is ec2.tf, where the EC2 resource gets created. There is a resource block here, aws_instance, and in it there is an AMI, the Amazon Machine Image, that I'm referring to. That comes from a data source, aws_ami, which is defined in ami.tf; there I have defined which Ubuntu AMI to fetch, and then it goes into this block as the AMI. That's how the Ubuntu image gets attached to the instance. Here we define the security group and subnet, and the other important thing is the user data. The user data I showed is coming from this particular line, and it also uses a data source defined in the same file: a template, script.yaml. If you open script.yaml, you'll see the exact cloud-config user data I showed before, but using some variables for the Drupal-related steps. Those variables come from ec2.tf, from this block.
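Putting that together, the ami.tf data source and its use in ec2.tf might look roughly like this. The filter values, instance type, and variable names are assumptions based on what I described; I'm also using the templatefile() function here for brevity, whereas the demo renders the template through a data source, but the idea is the same:

```hcl
# ami.tf: look up the most recent Ubuntu image published by Canonical.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

# ec2.tf: feed the data source's output into the instance,
# and render the cloud-config template as user data.
resource "aws_instance" "drupal" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.medium"

  user_data = templatefile("${path.module}/script.yaml", {
    db_username = aws_db_instance.drupal.username
  })
}
```

This is the data-source pattern in action: the AMI lookup produces an ID, and that ID becomes an input to the resource block.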
All of those values come from the respective Terraform files. For example, the username comes from the DB instance, which is defined under rds.tf; that's where the MySQL database is defined, and it returns the value back as an output. So yeah, that's how you define each and every resource with Terraform. You also have the VPC defined here, and the security groups added here. Once that's defined, that's how your instance gets created. Now let's open this instance in the browser. I haven't added HTTPS; I haven't configured SSL, so let's see how that goes. Yeah, there's our Drupal site which got created. One thing which I forgot to mention is the Terraform state. I've mentioned where Terraform stores its state, right? Usually it's stored within the project root, but here I have used an S3 bucket to store the state. This is the S3 bucket, and in it you'll see a terraform.tfstate file. If you open it, you'll see the whole configuration of the infrastructure saved within that file. And the way I did that: let me go back to the repo. This is my Azure Pipelines file, where I define the stages it requires, validate and apply, and this is one of the places where I define the S3 bucket, the backend, when Terraform gets initialized. The other place is provider.tf: along with your required version, you'll also have another block for the S3 backend, where you add the bucket name and the key. With that, Terraform will use S3 as the backend and won't generate the state file within this particular project. So, let's jump back. A few resources: I've referred to hashicorp.com and registry.terraform.io; if you want more details on how Terraform works, you can follow those. I've created two repositories, which are public.
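The backend block in provider.tf is just a few lines. This is a sketch; the bucket name and key here are placeholders, not the ones from my demo:

```hcl
# Store the state remotely in S3 instead of the project root.
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket" # placeholder name
    key    = "drupal/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Note that backend settings can't use variables, which is why the pipeline also passes the bucket details to terraform init at initialization time.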
The ones which I've just showed: the Terraform getting-started one and the Drupal AWS playbook. I've also published an article on Drupal Asheville earlier this year talking about the same topic, infrastructure as code. I'm trying to enhance this even more. I don't want to keep using the access key for the Azure DevOps and AWS communication, so I'm going to try Azure Key Vault and see how that goes. I'm also exploring Terraform modules and Terraform Cloud, and I'm going to add even more infrastructure: a load balancer, an auto scaling group, configuring SSL, and things like that. Yeah, so that's it. Any questions?

[Audience] Hello. Yeah, thank you for the presentation. I have one question on the security part. For example, you have created the EC2 instance and you are using Ubuntu, and then some security releases come out. How will you patch it using this approach?

Okay, that's a good question. Usually what we'd do is, once we have that server, we create an image out of it and own that image, instead of using a third-party image. That's how we'd do it in the real world. For this demo I have used a third-party image, but it's always recommended to use your own image so that you have full control of it. Yeah.

[Audience] Okay, thank you.

Thanks. Yeah, thanks, everyone.