Yo, hello from me too. Sorry about that. So, I'll talk a little bit about automation and infrastructure as code. Mainly I will show the tools Terraform, Packer and Ansible. First of all, who am I? I'm Stöpps. I've been with Linux for ages. Who here was born after 1994? Yeah, it's been a long time. I work mainly with application servers, but for a year or so I've been doing more and more DevOps stuff. I love Vim and never got into Emacs, so I stay with it. I like Tetshell, like Lyra. So, before we start, I want to greet and thank some people. I want to greet my family; I hope they're sitting at home watching the live stream. Thanks, girls, for supporting me. And thanks to the organizing team of the Gulasch Programmier-Nacht. This is a great event, and I'm a first-timer here.

But now let's start with my history. I started in the last century as an administrator, mainly with bare-metal servers. Really hard work: we practically needed special shoes from running to all of our servers. When we wanted to roll out a new application, we always had to order new hardware and deploy on those machines. Networks were slow, and most servers were in-house, so we traveled all around town looking after our servers. Then we started with virtualization; I think we started with VMware shortly after 2000. That was the first time we could deploy stuff without ordering hardware. Quite nice. But we still worked like before: provisioning and deployment were still completely manual. A little bit easier, yes, but I would say more work, because before we had one operating system with our application and its dependencies, and now we had a host operating system, a hypervisor, and in each VM another operating system that needed updates. This huge number of servers you end up with when you start with virtualization is called server sprawl in infrastructure-as-code terms. The sheer count of servers made it impossible to deploy all patches across the whole network.
So, most of the time we only deployed fixes for high security risks, or when a special application needed a special fix, like a new Java version for a Jira installation, or when there was a new zero-day exploit. Then we deployed the fix. On the other side: we were two administrators. We mainly installed the same software, but when two admins install the same product two or three times, it always comes out different. Even when you have a good checklist and you check everything off, something will be different, and at the latest on the next update, something will break. This phenomenon is configuration drift. So, how did we try to solve that? First I tried to write some scripts: shell scripts, patches, silent-installation files. Then we started installing application servers with a graphical interface, so we had to document each step; every click had to be captured in a screenshot. I once saw installation documentation from a company with a three-letter name that was, I think, 1,000 slides long with 1,200 screenshots or so. Just to install one product. It was horrible. When something broke, we spent night after night troubleshooting. When we deployed updates on production servers, the behavior was completely different from our test servers. What was the reason? On the test servers we normally deployed all fixes that were available: we installed fix one, two, three, four. In production it sometimes happened that you installed only fix four. It's cumulative, so fix four should contain all the fixes before it, but the behavior was different anyway. It was always a mess. So how did these differences creep in over time? Sometimes a database server needed a special fix; I think Oracle was the classic product for that.
The new ticket system needs a new Java version; one application server gets more traffic, so it needs a different configuration than the others just to handle that load. That's no problem in itself. In infrastructure-as-code terms, these servers with special configurations are snowflake servers: one server with one special configuration. That's not bad, but you need to capture it. You need a way to make it reproducible, because when a server fails you want to reinstall it, and you want to reinstall it quickly. So you need automation to get that server rebuilt after a failure. Being different is not bad; you just have to understand what makes that server different and how you can rebuild it. As I said, I've been an administrator for 20 years or so, and I think an operations team should be able to confidently and quickly rebuild every server in its infrastructure. Everybody who has been in IT for a few years knows those famous servers. There's a server under the desk of an old administrator: don't touch it, don't ping it, don't even look at that server, please. I've worked at five or six companies now, and there was always one of these special servers under one of those desks. It's ancient: Windows NT 3-point-something, old network cabling, you know the type. Everybody hates them. But at the latest when the hardware breaks, someone has to replace that server. But how? Automatic deployment has been available for years now. There's Chef, there's Puppet; I think Chef has been around for 10 or 15 years. But most administrators I know use those tools only to initially deploy a server. Nobody runs them against a server that has already been running for two or three years, because maybe something will break. So: don't touch it. Maybe a colleague was there and made a manual change, and I would break it by running the automation tool. Nobody wants to do that. Only a few of us. Why?
So, this is the fear spiral: why are our servers inconsistent? Because we make changes outside of the automation tools. And why am I afraid to run my automation tool? Because maybe I break something, since the server is inconsistent. So it keeps going around and around and around. Well, for some years now there has been infrastructure as code. When I read the term for the first time, I thought: what the fuck? I'm not a developer, so why should I write code? It's just servers. But think about it: our virtual environment already is code. Our virtual machines are just code. There are virtual networks, there are virtual disks; all those definitions are just a little bit of code. And what is code? Code is text. Python is text; it's just a question of how it's interpreted. You know what I mean. And for code there are tons of tools available: there are editors, there are version control systems, there are build servers. Our developers have worked with text and code for ages, so I think we can handle it too. The principles of infrastructure as code are: systems need to be easily reproducible; systems are disposable, so when something breaks, I can just exchange it; systems need to be consistent; processes are repeatable; design is always changing, sure; and our processes should be as simple as possible to meet current requirements. And when we need a change, we must be able to deliver it safely and quickly. When I look at companies, or at my own company: when I need a virtual machine, it can take days until I get it. I have customers where I wait weeks for a virtual machine, and six weeks for a firewall rule. You know what I mean. So I would say: when you think a little bit about automating server deployments, you can get a virtual machine in two or three minutes. Maybe you can even build a self-service so your colleagues can get virtual machines themselves.
How can we achieve this? We need definition files where we can describe our environment. Nobody likes to write documentation, so maybe we can build a process that documents everything automatically: the script generates the documentation too. When we put all our stuff into a version control system like Git, we get a history, so there's no need to document that either. We can build test and production environments that are completely equal, so we can really test an update. And when we monitor the environment, we can do updates during the night, or very easily during the day while everybody is working; we just need tests that prove everything works like before. There are two kinds of tools: immutable tools and mutable tools. The immutable ones completely replace a server. When a server needs a new IP address, a new software version, an operating-system update, it gets completely replaced. That way my test and production servers really are equal, because we learned a few slides ago that a server updated with three or four fixes behaves differently from one updated with only one fix. With an immutable system we can test a complete update. But we have to think about what happens with our data; maybe a storage system solves that, but don't forget there may be data that should be saved before you throw a server away. A good example of immutable is containerization like Docker: there's an image, you fire up a new container, and when the work is done you throw it away. The other kind is mutable. Mutable is still automatic deployment, but the tools don't replace the server, so configuration drift can still happen. Ansible, Puppet and Chef are mutable. There it can happen that production and test behave differently. Now think about a server life cycle.
I think the first thing we do when we start with infrastructure as code is build a template: a small base server we can clone. From the template we create servers. A server may need to be updated; sometimes I can delete it; sometimes I have to replace it. Those are the five kinds of server tasks I normally work with. So, which tools do we use today? First of all: some of the code is really simplified, so it's not that secure. If you want to do this in production, please take a closer look at it. I will show some cleartext passwords; I wouldn't do that in production, but we only have an hour, so things need to go a little bit quicker. It will run on a normal VMware Workstation; I use a vSphere cluster to deploy to, and I use Packer, Terraform and Ansible to get everything running. Not that hard. VMware Workstation needs a license; the other three tools are free. It should work on a Mac too, but I haven't tested it; it's been years since I used a Mac, so I don't know. All files are stored in a Git repository; the link will be at the end of the talk, so just grab it there. Some dependencies: when you use Terraform to provision from a template, we need open-vm-tools and Perl installed in the template. We connect with SSH, and Ansible needs Python for the later customizations, Python 2.7 or 3. So we have to put these into the template up front, or it will not run in the end. In my Packer template I use a temporary password, which is literally "password", but I disable that login at the end of the process, so I hope it's reasonably secure for the moment. But when you use this in production, please don't forget to remove or change the password. So, what's Packer? Packer is from HashiCorp. You may know HashiCorp from their several automation tools: Vagrant, I think they started with Vagrant. Now they also have Packer and Terraform.
Packer is licensed under the Mozilla Public License, so I think it counts as open source. It's easy to install: just download a binary and put it in your path. And there are hundreds of templates on GitHub and GitLab to deploy Windows servers, clients, or Linux machines, so you can find nearly everything. I will do a CentOS deployment here. These are all the builders Packer can use: nearly every cloud, Docker templates, Kubernetes, VirtualBox, VMware, QEMU, LXD, Hyper-V, or even just a plain file. That's pretty much all of them. You can provision each template with standard shells (local shell, Windows shell, PowerShell), with Puppet, Chef, and Ansible. And as a post-processor, you can upload the result to a Docker registry, to a cloud provider, to a vSphere machine or a vSphere cluster (VMware ESX, I think, is there too), or build a Vagrant box. A ton of options. First of all, I created a kickstart file. That's standard: we've been able to use kickstart files to deploy Red Hat or CentOS since, I think, version 3.0; that's where I started, anyway. The only things I changed from the standard kickstart are the keyboard layout (German) and the time zone; everything else, like the swap partition, is just stock. One thing to note here is the encrypted password: that's "password" hashed with SHA-256. So you can put your own password there, and even when you push it to Git, it should be safe. We will start our deployment with that kickstart script. So how do we build the Packer template? The builder we use is vmware-iso, and the most important part is that we boot with our kickstart script: the Packer template types the kickstart parameters in during the boot process. We communicate over SSH. The guest OS type is CentOS 7, 64-bit. We use the minimal ISO file, and we have the hash for the ISO file so the download can be verified. And we have a root user and password so the machine can be shut down at the end of the deployment.
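Condensed from memory, a vmware-iso builder section along the lines described above might look like this; the ISO path, checksum placeholder and password are illustrative, not copied from the talk's repository:

```json
{
  "builders": [{
    "type": "vmware-iso",
    "guest_os_type": "centos7-64",
    "iso_url": "iso/CentOS-7-x86_64-Minimal.iso",
    "iso_checksum_type": "sha256",
    "iso_checksum": "<sha256 of the ISO>",
    "http_directory": "http",
    "boot_command": [
      "<tab> text ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"
    ],
    "ssh_username": "root",
    "ssh_password": "password",
    "shutdown_command": "shutdown -P now"
  }]
}
```

The `boot_command` is what "types in" the kickstart parameters at the boot prompt, and `http_directory` is the folder Packer serves the ks.cfg from.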
That's that. Then we want to provision it. After the OS is deployed, I run some shell commands: I install open-vm-tools, I tweak the SSH service a little, I reboot the machine, and I do an update, a kernel update to 5.1, because a CentOS 7 machine normally comes with a 3.10 kernel. That's all baked into the template. And I install open-vm-tools, Perl and Python, so when the template is finished, I can continue with Terraform and Ansible. At the bottom of the slide is an example of one of the scripts: it's just a yum install of open-vm-tools and Perl, and then a restart of the VM tools service. That's pretty much all. It's important that the Packer build reboots once, because the vSphere cluster won't recognize open-vm-tools without that reboot. That's something to keep in mind. Then the post-processor: at the end, I just upload the result of my template build to the vSphere cluster. For that I need a server name, a cluster name, the datastore, and then the upload starts. The username on the slide is fake and there is no password. Okay. So in the end, as I showed, we use builders, provisioners and post-processors. It's one file, a JSON file; just put the three slides together, or use the file from the Git repository. Then we can run the Packer commands: first validate it, then build it (there was an error on the slide there: the second subcommand is build, not validate). All variables we use in our JSON file can be set on the console, so I set my vSphere password on the console, and I use a timestamp in my template name so I know when I built that template. And that's pretty much all. One more thing, where is it? There. We use just one disk; I haven't changed the default settings, so VMware builds, I think, a single 40-gigabyte hard disk. But we can change that later with Terraform.
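The two Packer commands mentioned here look roughly like this; the file name and variable names are my examples, not necessarily those from the repository:

```shell
# Check the template for syntax and reference errors first ...
packer validate centos7.json

# ... then build it, passing the secret and a timestamped template
# name on the command line instead of hard-coding them in the JSON.
packer build \
  -var "vsphere_password=${VSPHERE_PASSWORD}" \
  -var "template_name=centos7-$(date +%Y%m%d-%H%M%S)" \
  centos7.json
```

Anything declared in the template's `variables` section can be overridden this way, which is what makes the same JSON usable from a build pipeline.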
So, I think that's the minimum needed to start with Terraform and Ansible from that template. What happens here? I hope you can see it; it's a bit small. It's `packer build`. As I already said, Packer is a single executable you download from the HashiCorp page. On Fedora you have to rename it, because there already is another tool named packer, so I normally name it packer.io. You see I start the build process, and on the right side you see my VMware Workstation starting up. You can run that headless so you don't see the process, but I want to show it here. And that was too quick, so let's start again. There's the download; normally, when the ISO file is already in the cache directory, it will not be downloaded again. And here you see the process typing in our kickstart parameters: it gets the IP address of the machine, tells it to fetch the kickstart file over HTTP from the host, and then fires up the deployment of the server. Let's speed it up a little bit. So, that's the kickstart installation. And now on the left side you see the provisioning with the shell scripts. Now all updates are deployed. At the end of the deployment, I run a cleanup to remove all temporary files, to remove the repository cache, to remove the SSH host keys. Because when you don't remove the host keys, all of your virtual machines will have the same SSH host keys. So, now that's secure. And at the end, that's the final step: it uploads to the vSphere server. The whole process takes, I would say, 15 minutes, and you have a finished Linux template. And you can do the same with Windows, so you can create all the templates you like. Everything is on GitHub. So, that's a Packer build. Now we have a new template on our VMware server. Now let's go to Terraform. Terraform is also from HashiCorp, also Mozilla Public licensed, and like Packer, you just download it, unzip it and put it in your path. Terraform knows around 160 providers.
I hope I can show them. No, that's Packer. Anyway, it's a long list; I won't go through it. You can do nearly everything: you can use OpenStack as a target, you can use Kubernetes, GitHub; there are plugins for monitoring systems like Icinga, Grafana, Prometheus. So you can do really everything during the deployment, even register a new DNS name, configure the virtual network, upload to a cloud provider and so on. It's not limited to VMware. Here's the example we use; it's in the Git repository too. Terraform has its own configuration language, HCL. It looks almost like YAML, I would say; a little bit different, more a mix of YAML and JSON, but it's not that hard. I have a build .tf file where I normally store the information about the system I deploy to, so my cloud provider or, here, my VMware environment. gpn19-server-1 and -2 are two server definitions; we will generate two servers. The plan file is just a plan; ignore it. The .terraform directory holds the plugins: when you just have the .tf files and run `terraform init`, Terraform downloads all the plugins it needs to deploy the stuff. The .tfstate files appear when the first deployment is done: when I deploy something with Terraform, the state is stored in a terraform.tfstate file. And then I have a variables file where I provide some information about the environment, like the DNS domain, passwords, and things like that, and a versions .tf file where the required Terraform version is pinned. It looks like this: that's our variable definition. We have our vSphere server name, our vSphere user and password, the data center and so on. And I think we'll have a look at it live. Is that big enough? Readable? Let's start with the build file: the vSphere provider, the project folder, which data center, the network, which folder it will be deployed into. So that's the first part, all the VMware stuff.
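As a rough sketch (the variable names and the domain are illustrative, not copied from the repository), the variable and provider definitions described above might look like:

```hcl
variable "vsphere_server"   {}
variable "vsphere_user"     {}
variable "vsphere_password" {}
variable "dns_domain"       { default = "gpn19.example.org" }

provider "vsphere" {
  vsphere_server       = "${var.vsphere_server}"
  user                 = "${var.vsphere_user}"
  password             = "${var.vsphere_password}"
  allow_unverified_ssl = true
}
```

Variables left without a default, like the password here, have to be supplied at plan time, which is why they can stay out of the files in Git.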
Then the variables: the DNS servers as a list of strings, so that's my DNS server; the admin password so I can reach the machine over SSH; the template name; and my SSH public key so I can work on those machines with Ansible. That's that. And we're at half an hour. Now the server definition itself, gpn19-server-1: the server name in line one; which resource pool (that's a variable); four cores; four gigabytes of RAM; a 100-gigabyte disk; the IP address; a gateway; and our provisioner. With that provisioner I just deploy my SSH public key into the root account so I can work with Ansible and that account. And at the end, because we created the template with Packer where the password was literally "password", I lock the root account. So nobody can log in with the password anymore, but we can still log in with an SSH key. That's all. So, we saw that, we saw this; now the main commands. When our environment and our Terraform files are ready, we run `terraform plan`. We can set some variables there, like our vSphere password or the template name, so we can use it in a build pipeline and needn't change anything in the file itself. It generates a plan file; in this case the name is rebuild.terraform. And when we run `terraform apply`, the servers are created. Destroy means everything in the definition files is destroyed, so hard-deleted from VMware. And with taint, we can pick which servers get replaced. So we can say: okay, we define 10 servers, and one of them has gotten a little bit creepy. We taint that one server, so it's marked for recreation. The next time we run a plan and apply, it gets deleted and automatically deployed fresh. And we have a freshly installed single server, even though 10 servers are defined. Now some videos. I don't want to show any real host names and passwords; that's the reason for the videos. That was `terraform plan`: you see here the server name and the folder name where it's deployed.
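Put together, the workflow described here is roughly the following; the resource address and the plan file name are examples, not the exact ones from the repository:

```shell
terraform init        # downloads the provider plugins the .tf files need

# Plan, passing secrets on the command line, and write the plan to a file.
terraform plan -var "vsphere_password=${VSPHERE_PASSWORD}" \
               -out rebuild.terraform
terraform apply rebuild.terraform

# Replace a single broken server out of many definitions:
terraform taint vsphere_virtual_machine.gpn19-server-2
terraform plan -out rebuild.terraform && terraform apply rebuild.terraform

# Tear the whole environment down:
terraform destroy
```

Tainting only marks the resource; nothing changes on the cluster until the next plan/apply cycle, which is why checking the plan output first is important.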
The plus means the resource will be created. It wrote an output file, and it shows the command to really create the servers. Now it's creating; I sped it up a little bit. I would say generating two, or let's say ten, Linux machines on a normal VMware cluster takes about two to three minutes, and it's always about the same: two servers need two and a half minutes, and ten servers need nearly the same, because it's a parallel task. You see here: two minutes, 34 seconds, and the folder and two machines were created. Done. Now the taint; that was too fast, no: server 2 was marked as tainted. When I run the plan again (let me speed it up a little): one to add, one to destroy. So it will destroy my server 2, and in the same run it will generate a new one. So, always check the output of the plan. Question from the audience: why is the second server destroyed? Because I marked it for deletion with taint. I just said: okay, something happened there, maybe an update failed. So I can taint that one server, even though I have ten definition files, without destroying everything else in the environment. I just say: okay, that one server, please create it again. Taint it for recreation, run the apply again, and it kicks away the old server and creates a new one, maybe from a new template, and that's all. That's the main reason. Just to show a real example: I have an application-server deployment with LDAP, two database servers, two web servers, a load balancer, and I would say three or four WebSphere servers in the background. So there are ten definition files, and sometimes I just want to kick away the LDAP server or something like that. So I can say: okay, taint that one server, recreate it, and I can start that part from the beginning, but the rest of the environment just stays. And the complete destroy is the fastest of all; I would say destroy is a thing of 10 or 15 seconds. Type in yes.
So, it shows us three things will be destroyed: two servers and the folder. Hit return, and now that's fast. Okay, 20 seconds for the deletion of two servers, and that's about the same for ten servers, too. One thing is still there: the folder. I have no clue how to write the definition files so that the servers are deleted first and then the folder; it always tries the folder first, which fails because there are machines inside it. So I run it twice, and then everything is gone. So we created and destroyed two servers on a cluster in, I would say, eight minutes or so. Now imagine I deployed them again, so we have two servers in the VMware environment. Now let's start with Ansible. Who uses Ansible already? Quite a lot of you. It's only a small example here, so don't kill me, please. With Ansible I have several advantages. I can run it separately against the environment, but I can also include it in my Terraform files: there is a provisioner for Ansible, so when Terraform is ready, it can start Ansible. I split that process in my demo environment just for testing reasons, because when something fails, it always deletes all servers, and this way I can just go back a step. So, why Ansible? I like Ansible because it's agentless: I don't have to install any server component, it's just pure SSH, and I can do everything I want. There is no Windows version of the control machine, though; I hope that's no problem. One thing we always need to remember when we do infrastructure as code or automatic server deployments is idempotency. That means: when you run a script or process multiple times against the same server, the behavior is not allowed to change. So when I echo an IP address into the hosts file and run that script two or three times, I will have three entries. That's not idempotent. Oh, yeah, that's right. Thanks.
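That append-to-the-hosts-file example can be sketched in a few lines of shell; the IP address and hostname here are made up purely for illustration:

```shell
# Work on a temporary file instead of the real /etc/hosts.
HOSTS="$(mktemp)"
ENTRY="10.0.0.5 ldap01.example.org"

# Non-idempotent: every run appends another copy of the line.
add_entry() { echo "$ENTRY" >> "$HOSTS"; }

# Idempotent: append only if the line is missing, so running it
# any number of times leaves exactly one entry.
add_entry_idempotent() {
  grep -qxF "$ENTRY" "$HOSTS" || echo "$ENTRY" >> "$HOSTS"
}

add_entry; add_entry; add_entry
echo "after 3 plain runs: $(grep -cxF "$ENTRY" "$HOSTS") entries"

: > "$HOSTS"   # reset the file
add_entry_idempotent; add_entry_idempotent; add_entry_idempotent
echo "after 3 idempotent runs: $(grep -cxF "$ENTRY" "$HOSTS") entries"
```

The first version ends up with three duplicate entries; the guarded version always ends up with exactly one, no matter how often it runs, which is the property configuration-management tasks need.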
Yeah, the comment was that the command on the slide is actually idempotent, because it has only one greater-than sign, so the file gets overwritten each time it runs. For appending, there have to be two of them. That's right; that's a typo on the slide. Just a small thing to watch out for. Now, a small example for our two servers. We will see the inventory, where we define some server groups; some group variables in the group_vars folder; two roles I defined, a common role, which is deployed to all of the servers, and an LDAP role, where we deploy OpenLDAP to one of them; some Jinja2 template files; and the site.yml file with the whole definition of the Ansible tasks. In the inventory file, I defined the server groups gpn19 and ldap, so OpenLDAP will only be deployed on server two, while the common tasks run against both servers. That's the site.yml. You see the all group there: `hosts: all` always means all servers, even ones not in a defined group, and the ldap host group gets the ldap role, which installs OpenLDAP. The variables: I only learned this a few weeks ago. In group_vars, the all folder contains variables which are used in all roles and all server groups. When you create a folder named after a group under group_vars, it's only valid for that group. So I can have a gpn19 folder here, and then we have variables which are only valid for that group. In my example, I put the passwords for LDAP into the variables file. I would say best practice is to use Ansible Vault; please don't do it my way in production. But you can see we define the LDAP domain and the password there. And we use some templates. Templates are copied from our local machine to the target, and all variables used in them, like the LDAP password variable or the LDAP domain variable here, are replaced with the real values from our variables file. So: copy to the server, replace the strings, run some commands.
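The playbook layout described here could look roughly like this site.yml; the group and role names follow the talk, but the exact contents are illustrative:

```yaml
# site.yml sketch: every server gets the common role; only hosts
# in the "ldap" inventory group additionally get the ldap role.
- hosts: all
  roles:
    - common

- hosts: ldap
  roles:
    - ldap
```

The matching inventory would simply list both machines under a [gpn19] group and server two again under [ldap].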
That was a role shown just as an example, but now let's look at the real Ansible files. In common, the first five lines disable the firewall service: the state is stopped, and it's not enabled, so even after a restart it stays off. I also disabled SELinux for testing purposes; for production, I would enable it. I heard that. Sometimes. Then we change some limits, so the number of open files and the number of processes. We install some additional tools like Ansible and vim. I update the SSH configuration, things like X11 forwarding, and at the end we restart the SSH service. And for the ldap role and its tasks: we install the OpenLDAP server and client, we enable the slapd service, and we copy the Jinja templates; the template task parses the text, so our variables are replaced. We also copy the sample configuration. The sample configuration is already on the target, because it's installed with the OpenLDAP server package, so we need the remote_src: yes option here; the copy happens from folder to folder on the remote machine itself. Then there's the config script where I configure and start the LDAP service, and at the end I remove that config script, because the password is in it; I just delete it with that last task. So, I think that's pretty much everything here. That's the example. We saw that. And when you have cowsay installed on your Linux machine, Ansible always uses the cow. You can disable that with a variable in your .bashrc or .zshrc. You see it running through now. We see which servers changed: the limits changes here, vim is installed, that part was a little bit fast, and now OpenLDAP is installed. The OpenLDAP task in this case sets up a new server with 20 users and 20 passwords, and at the end, I think, a restart, and that's all. So that's a deployment of OpenLDAP to two servers, only sped up a little; normally it's about three minutes. So, the benefits. Why do I do all this? As I already said: all servers always look the same.
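A sketch of the common role's tasks as described above; the module arguments are reconstructed from the talk, not copied from the repository:

```yaml
# roles/common/tasks/main.yml (sketch)
- name: stop and disable the firewall
  service:
    name: firewalld
    state: stopped
    enabled: no

- name: install additional tools
  yum:
    name:
      - vim
      - ansible
    state: present

- name: deploy sshd_config from a Jinja2 template
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
  notify: restart sshd
```

Each of these modules is idempotent by design: a second run reports "ok" instead of "changed" when the target is already in the desired state.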
When I need an update, I just generate a new template, run my scripts again, and I have the same environment with all current patches. When I want to test something, I build a test environment and test all the updates, maybe with data from our production. I can test everything, I can script it. And in the end I know that when I run it against production and rebuild the production environment, it will behave the same way as the test environment; normally, nothing should break here. Administrators don't like writing documentation, so the process should document nearly everything itself. It saves a lot of time. Yeah, that's pretty much it. Yeah, I know. Best practice: I would codify everything. When I look back at my history, at how I used to install servers: I think in my first test environments on VMware, I installed one server, then a second one, and configured them nearly the same. Sometimes I just cloned the machines, and then you have to adjust settings like IP addresses and host names. You forget something; most of the time you end up with duplicate MAC addresses when you start like that. Then I started creating snapshots so I could roll back when something happened, because the installation process was just too slow. I don't do that anymore. A snapshot needs a lot of disk on my machine; I normally work on this notebook, and a snapshot of an application-server installation needs gigabytes of disk storage, and it's slow. I just deploy it again. I built a sample test environment a few weeks ago: eight or ten machines, two Windows, six Linux; half an hour of work and the stuff is just running. And in the end, when you're done, you just throw it away, because when you need it again, half an hour of work, or one night of it running, and the machines are restored, exactly the same. Nothing I have to copy onto backup disks or anything like that. It's in the version control system: I can reproduce it, I have a history, it's documented, it's modular.
So when I need some of this stuff on another machine, I can just reuse it. That's pretty much all. You do need to think about some things, though. I often get questions like: why not use the cloud? The cloud is just other people's servers. Someone has to install the cloud, and there's nothing against installing your own. When I look at the last two years, at what we're doing with Kubernetes, server testing, rollouts, test server deployments, I would say I save 50% of my testing time. And on top of what I showed today, when you register DNS names, do monitoring and some testing tasks, it can all run automatically out of pipelines, so it even tests itself. Just write it once, test it twice, and then the stuff runs automatically. And I was a little bit too fast, so we have time for questions.

Awesome. Thank you very much, Stöpps. Other questions? Yes.

When you clone the machines, you remove the SSH keys. Do you also remove the machine ID and system ID? During the cleanup? Yes. There are several things: the SSH host keys, the caches, some temporary files. I think I also reset the yum fastest-mirror cache, so you get the actually fastest mirrors when you deploy the stuff. There are several things; the script is in the Git repository, just have a look at it.

Further questions? I have no Manner left, so maybe that's the reason there are no questions. I normally give beer away, but, well, next time. Yes, just a second. Run. No, no running for us; too many accidents happen when we're running.

I see there's some feature overlap between Terraform and Ansible. Can you give some best practices on which tasks you give to which tool, especially setting up server software and configuring it: primarily with Ansible, or better with Terraform? It's a personal thing. I know several people who build different templates for different servers, so that's the first place where it can start.
So, you can have a different template for, let's say, database servers or web servers. I decided to start with just one template in Packer. Then I went to Terraform. In Terraform, I normally deploy my SSH keys, because it's not only me who works with the servers; more people will work with them, and sometimes a web server needs a different SSH key than a database server. So Terraform is a good place to handle the SSH stuff. You can already deploy things in Terraform; nothing speaks against it. At the moment, I'm trying to move a lot of what I formerly did with Ansible into Terraform, because Terraform is the newest tool for me, since I started with it last. Ansible is software used by a lot of people, so there is a lot of material on the Internet and you will get a ton of documentation. But from the infrastructure-as-code point of view, I would try to get as much as possible into Terraform. That said, nothing speaks against removing the shell-script provisioner from Terraform and just pointing Terraform at your Ansible playbooks, so the Terraform run simply calls Ansible. When it's tested, it's perfect. Good question.

Sure. Yeah. Sure, just a second. But no Manner left; Hanuta next time. I don't know, I'm quite confident the answer is yes, but: is there stuff like if and else in Terraform? Sorry? Is there stuff like if and else? There is a ton of stuff like if and else; you can have loops, so you can create a single definition file for 10 servers where just a counter steps up. So yes, there's a lot of that, and there is if/else, as far as I know.

Any further questions? You will be around for another half hour or something? A little bit; I think the whole evening. Awesome. If there are no further questions, I would say thank you very much, Stöpps, and another round of applause for Stöpps, please. Thank you very much. Thank you.