Cool, okay, that was quick. I have a lot of slides, but I promise you I'll be done in five minutes, because we're going to time out here. Right, I'm Claudio, so the introduction is done, it's awesome. I'm talking about Terraform. Terraform is a software package, a tool that allows you to describe your infrastructure in code. It supports different infrastructure providers, cloud providers, but it does not allow you to seamlessly switch from one to the other. That's not its purpose, although it can make it easier if you feel like doing that. It's not AWS technology; it's by the people who also do Vagrant, in case any of you know that. They have some other tools as well, and it integrates with those. I think the roughly equivalent AWS service would be CloudFormation. I found that a bit more complicated, though I don't really have a lot of experience with CloudFormation.

So the way it works is: in your project directory, you create as many files as you feel you need with a .tf extension. You describe your infrastructure in those, and then you run terraform apply. Terraform will look at what you already have, how your infrastructure looks, how the infrastructure described in your code has changed since the last time you applied, and it will implement those changes.

For the building blocks, there are variables. You can put them in Terraform files as well. You can override them on the command line, or you can create environment-specific configuration files, say for production and development environments. So you create those files and then invoke terraform apply referring to a specific configuration file.

And then you have the providers, of which, as you can see, there are many. A provider is a collection of resource types that you can use. I'm going to talk about the AWS provider, mainly because that seems most relevant right now.
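As a minimal sketch of what that looks like on disk (the file names, variable name, and region value here are illustrative, not from the slides):

```hcl
# variables.tf — declare a variable with a default value
variable "region" {
  default = "eu-west-1"
}

# main.tf — the AWS provider configured from that variable
provider "aws" {
  region = "${var.region}"
}
```

You would then override it per environment with something like `terraform apply -var-file=production.tfvars`, or directly on the command line with `-var 'region=us-east-1'`.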
But as you can see, you can do DigitalOcean, Microsoft Azure, Google Cloud, et cetera, all that stuff. Providers usually have a configuration. In this case, for AWS, you can set your access credentials and the region. Here, as an example, the region is using a variable, but you can also just put the value in directly.

Right, so the AWS provider has a lot of resource types, and this is probably the longest slide of the presentation. I'll give it like 20 seconds. These are all the different resources; you'll probably recognize some of them if you've been using AWS. Yeah, it's a lot. Cool, two minutes left.

Okay, right, so resources have a type and they have a name that you give them, right? And they have arguments, stuff that you specify, and attributes in the end, once they are done. For example, an EC2 instance will have an IP after it's been created. And you can use these attributes as arguments for other resources, so you end up with a dependency graph, and Terraform hooks it all up.

As an example, this is an EC2 instance, the top one, and on the bottom, this one creates a machine image based on the EC2 instance. So it launches an EC2 instance; once that instance is up, it will execute a local shell script that provisions the EC2 instance, and then it will create an AMI from that instance with a specific name. Now you can take this AMI and put it into an AWS launch configuration, which I've named "frontend", and you can see it references the ID of the AMI-from-instance resource "frontend". The AMI has already been created; now it creates the launch configuration. From the launch configuration, you can create an auto scaling group. There are some other dependencies; you see there's an ELB in there for the auto scaling group, for example, and instance profiles, all that stuff. I didn't put all the code in here because I only have 27 seconds left.
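To make that chain concrete, here is a hedged sketch of roughly what such code could look like; the resource names, base AMI ID, availability zone, and provisioning script are all illustrative placeholders, not the actual slide content:

```hcl
# Launch an instance and provision it with a local shell script.
resource "aws_instance" "frontend" {
  ami           = "ami-123456"            # placeholder base image
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command = "./provision-frontend.sh"   # hypothetical script
  }
}

# Bake a machine image from that instance once it is up.
resource "aws_ami_from_instance" "frontend" {
  name               = "frontend-image"
  source_instance_id = "${aws_instance.frontend.id}"
}

# The launch configuration uses the AMI's id attribute as an
# argument, which is what builds the dependency graph.
resource "aws_launch_configuration" "frontend" {
  image_id      = "${aws_ami_from_instance.frontend.id}"
  instance_type = "t2.micro"
}

# A load balancer for the group.
resource "aws_elb" "frontend" {
  name               = "frontend-elb"
  availability_zones = ["eu-west-1a"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}

# The auto scaling group ties the launch configuration
# and the ELB together.
resource "aws_autoscaling_group" "frontend" {
  launch_configuration = "${aws_launch_configuration.frontend.name}"
  availability_zones   = ["eu-west-1a"]
  min_size             = 1
  max_size             = 3
  load_balancers       = ["${aws_elb.frontend.name}"]
}
```

Each resource consumes an attribute of the previous one, so Terraform orders the creation for you: instance, then AMI, then launch configuration, then the group.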
And then in the end, for this one, for example, this makes a nice little auto scaling setup. At the end you add a record in your DNS that points at the load balancer you created. Right, and then you run terraform apply and it does all that. You can also invalidate a specific resource if you've done some manual changes and you want Terraform to redo it. And... okay, that's my time. Okay, I'll just finish this one really quick.

It stores your state in a local file. You need to keep that around. This basically refers to the instances you configure, or the resources you configure. Once it creates them on AWS, it has to know how to address them on AWS, so it basically gets an ID and stores that in your local state. If you want to have multiple people working on the same infrastructure project, you should have that state somewhat centralized. You can use S3, for example, and have the state stored in a bucket. That way you can have multiple people working on it, though not at the same time. The problem is, if you commit your state to Git and you have two people making changes at the same time and then trying to commit, you can't really merge state files in Git. That won't work. Right, and you can modularize it. That's basically it. Any questions?

Yeah. Sorry, can we get that one out of the way? Yeah, so I would say the main difference is probably that it's not AWS-specific. So you can have, say, a DigitalOcean droplet and then create a Route 53 DNS entry based on that, and stuff like that. I also think it looks a bit nicer; it's a bit tidier than CloudFormation. Again, if you look at the APIs for AWS, it's very specific to AWS, which means, you know, for another service provider like Microsoft Azure...
Yeah. They have their own APIs. Yes, exactly. That's what I'm saying. It's not an abstraction layer; you can't just port your infrastructure to another cloud provider. But once you have this definition, the definitions will ideally look somewhat similar. And you can run them simultaneously. You can have infrastructure running on multiple cloud providers at the same time and have them reference each other. I guess that would be an advantage. Also, probably ease of use. But again, I have limited experience with CloudFormation. I found it a bit too heavy.

So does it come with its own CLI? Yeah, so that's the thing, it's the terraform command: terraform apply, terraform plan, all of that. It's very easy, so you can script it.

Do you have a local instance of it? How do you apply it to your resources? So for now, the way I work with it, because it's relatively small-scale stuff, I have a Git repository with my files and I just run Terraform locally. You could run it through a CI, but then you have to be very sure that what you push through CI actually works. I think I prefer trying it out by myself. Yeah. Any other questions?

What about multi-user projects? Like profiles and access keys? Yeah, so the access keys, you can put them into a file that you don't commit into your repository if you work with multiple people, for example. You don't commit the access keys; that's probably safe practice anyway. And then you can have your own access keys in your local variable file. Yeah?

How does it compare with other infrastructure automation tools like Puppet or Chef? Puppet and Chef are different. They are there to provision instances mainly, like servers, right? With this, you describe your infrastructure. You describe: I have a database server, I have a DNS zone, I have Lambda stuff, API endpoints and everything.
That stuff, you can put that into code. Chef, the way I mostly use it, is really just to provision one specific server, or multiple servers, but not really the infrastructure as a whole. They overlap a little bit, because it can do a sort of orchestration, like pointing your application servers at your database server, that kind of stuff. Sure, that's probably the overlap.

And it also creates dependencies? Like, for example, if I'm setting up a set of infrastructure and one of the servers cannot be set up, it would not proceed with the rest of the servers that I want to set up? In this case, you mean? Yeah. Yeah, yeah, so if you do something wrong, or if it can't finish what it's planning to do, it will just go, sorry, I can't do this, and then you have to fix it. That's actually a case where you'll probably want to have a manual look at your AWS account, because it is possible sometimes that it creates an EC2 instance, can't really finish the job, and just leaves the instance hanging around without terminating it again. So sometimes you have to clean up after it is done. One last question.

Regarding your state changes, because it goes for the desired state, right? Do I have to keep applying, or does it detect that the state has changed and automatically reapply the changes? Yeah, so basically it uses your local state file, which is where it saves what it's been doing. It uses that and your code to see what has changed, and then applies only the changes it needs. The problem is if you lose your state file, which you also should not really commit into your Git repository. If you lose your state file and you run it again, it will create all the stuff again, and then you have it twice on your AWS account. So you have to keep the state file around, but if you have it, it will only do whatever it needs to do, only the delta basically. Cool.
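For the centralized-state setup mentioned earlier: in current Terraform versions this is configured with a backend block (older versions used a `terraform remote config` command instead); the bucket and key names here are hypothetical:

```hcl
# Keep the state in an S3 bucket so the whole team reads and
# writes one shared copy instead of committing it to Git.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"         # hypothetical bucket
    key    = "project/terraform.tfstate"
    region = "eu-west-1"
  }
}
```

And the "invalidate a specific resource" operation from the talk is `terraform taint`, e.g. `terraform taint aws_instance.frontend`, which marks that resource so the next apply destroys and recreates it.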
I mean, it was a bit quick, but did people more or less get what it is about? More or less? Okay, cool. Anything else? Thanks.