Okay, hello and welcome to our presentation today. We would like to introduce you to two great tools that together might become a great addition to your DevOps life.

Before we start, please let us quickly introduce ourselves. I am Simon. I come from Poland, and I'm fond of pizza and of writing slightly overwhelming one-liners in Bash. How about you, Aria? Hello, I'm Aria. I like practicing DevOps and contributing to open source projects, and in my free time I enjoy playing a good video game.

We both work at Red Hat on automation around the networking component of OpenStack. In simple words, every day we put effort into making our dev teammates' lives easier.

We divided this presentation into three major parts. First we will briefly introduce the programs mentioned in the topic of our presentation, for those who might not be familiar with either of them, starting alphabetically with Ansible.

So, starting with Ansible: Ansible is an automation tool. It provides you with many ready-made modules, so you as a user can focus on what you would like to achieve rather than on how to achieve it. Let's say you have a workflow where you need to install a package, add users, and start some services. Ansible allows you to run these operations without being familiar with the actual commands, by using a very simple YAML format. Also, any script you have today can be used with Ansible. This allows you to benefit from the Ansible automation engine features, which we'll discuss in a moment. Ansible is also a very popular open source project on GitHub: it has many contributors and many pull requests, and it's written in Python.
You don't need to be familiar with Python to use it, but in some more advanced use cases, like writing plugins or changing core functionality, knowing Python can be useful.

So how does it all work? In Ansible you write playbooks. Playbooks are basically files that describe which modules to execute (these are called tasks), where to execute them, and under which conditions. You then use the Ansible automation engine to execute these playbooks. The engine has several useful features for you. One of them is the inventory: the inventory allows you as a user to specify which hosts to run the playbook on. It can be either a dynamic inventory, based on some resources that you created in your cloud, or a static list of hosts that you manage by yourself. Another important engine feature is, of course, the modules themselves, which provide the automation code necessary to execute your playbooks.

Now let's see how it looks. Here on this slide you can see how a playbook actually looks, in the very simple YAML format. This playbook contains two tasks; a single task executes one specific module. In our case there are two tasks, as you can see: one for installing the latest version of postgres, and the second for starting the postgres service. Even if you don't know anything about Ansible, a user can read this file and understand what it does, because of the very descriptive way playbooks are written.

To execute a playbook, you simply run the ansible-playbook command and specify the location and name of the playbook. You can see from the output that we successfully executed the two tasks in the playbook, but nothing was changed.
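Reconstructed from the description, the playbook on the slide looks roughly like this. This is a sketch, not the actual slide: the group, package, and service names are placeholders that may differ from what was shown.

```yaml
---
- hosts: databases        # hypothetical inventory group
  become: true
  tasks:
    - name: Install the latest version of postgres
      yum:
        name: postgresql-server
        state: latest

    - name: Start the postgres service
      service:
        name: postgresql
        state: started
```

Running it a second time against an already configured host reports both tasks as ok with changed=0, which is exactly the "nothing was changed" recap mentioned above.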
That means the package was already installed at the latest version and the service was already up and running. If it hadn't been, then you would see that something was changed, and the changed counter in the output recap would go up from zero.

Another way to execute these tasks would be ad-hoc mode, which basically allows you to skip writing the playbook and specify directly from the CLI the module to use and its arguments. It's very useful for quickly firing off some changes with Ansible as a one-time occasion.

Now let's move to Terraform. The first thing to mention is that it bears a very similar concept to Ansible of defining things in text files, so-called infrastructure as code. In contrast to Ansible, though, it focuses more on what resources should be delivered than on how to configure them, and it does that part really, really well. It was originally created by HashiCorp, and it's available as an open source product. Thanks to that, it not only supports many leading infrastructure providers, but also, thanks to community-delivered modules, it supports many, many more.

In the big picture it works like this. First, as a user, on top, you prepare the configuration file; it can utilize some official built-in modules or some modules provided by the community. Then, when the configuration is ready, the Terraform engine comes into action. First it processes the configuration files and captures the required resources to create. Then it resolves the dependencies between the resources: for example, if someone defined that he or she wants two OpenStack instances connected to a private network, first the private network shall be created, of course. Once all steps are planned, the Terraform engine can finally perform what's necessary on the resource provider's side to deliver the required things; basically, which API calls, for example, should be made to create everything.

Speaking of Terraform's configuration files, those are defined using a dedicated declarative language, but don't be afraid: its syntax is quite simple, as visible on the slide. Basically, in those files you define what thing you want to create; in this example it's a provider and a resource. Then you specify the name of that thing in the naming known to Terraform; here it's openstack and openstack_compute_instance. For some resources you also need to define a custom identifier that will be used later in references; in this particular example we called our instance test-server. Then, inside the block, between the curly braces, you can define options and values for those options, separated by equals signs and newlines.

It's worth mentioning here that Terraform reads all the configuration files matching a specific extension in the file system and treats them as configuration, so the whole configuration can be split into multiple files. Because of the declarative nature, the order in which those files are read should not matter: it's declarative, and the Terraform engine should resolve all the dependencies.

To work with Terraform, you basically need to remember the three commands mentioned on the slide. The first one processes the configuration files and prepares the environment to work: in simple words, it will create the .terraform directory, and into this directory the necessary modules will be downloaded; in our example, as we see, it's the OpenStack module. Then, if you call the terraform apply command, the instance will be created in the OpenStack cluster. As we see here in the example, some facts are known to the Terraform engine up front if they come from the configuration, for example the name of the image for that instance. But some facts will be known only after the instance is created; an example of such a fact is the access IP address.
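Written out, a configuration like the one described might look roughly like this. This is a sketch, not the actual slide: the cloud entry, image, and flavor names are made-up placeholders, and openstack_compute_instance_v2 is the resource type used by the community OpenStack provider.

```hcl
# Provider block: which infrastructure provider Terraform should talk to.
# "mycloud" refers to a hypothetical entry in clouds.yaml.
provider "openstack" {
  cloud = "mycloud"
}

# Resource block: one compute instance, identified in references as "test-server".
resource "openstack_compute_instance_v2" "test-server" {
  name        = "test-server"
  image_name  = "Fedora"    # a fact known from the configuration
  flavor_name = "m1.small"  # placeholder flavor
}
```

With this in place, terraform init prepares the .terraform directory and downloads the provider, terraform plan shows what would be created, and terraform apply creates the instance; facts like the access IP address become known only after the apply.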
Terraform keeps all the details about the created instance in the so-called Terraform state file. Through this file Terraform tracks information about the resources it created, for example what should be removed when you want to perform a cleanup.

Okay, that will be it for the first part. Now let's discuss shortly why one should consider combining the tools and using them together. Well, it is true that the functionality of both tools overlaps a little, but they are designed to focus on different aspects; simply put, each does particular things better than the other. Ansible is a great tool if you want to adjust and ensure a specific configuration of some hosts: many modules offer a system-agnostic approach, allowing you to focus on what packages shall be installed without thinking about which package manager should be used for this. While Ansible can also be used, for example, to deliver some OpenStack instances, Terraform is just better at this, thanks to its state file and the ability to track instances by identifiers, provided out of the box. It is much more reliable when it comes to performing such operations.

There are basically three ways you can use these tools together. The first one is rather obvious: just call Terraform first and then call Ansible. What is not so obvious here is that in this approach you can use Terraform to produce the Ansible inventory file, and we will show you how on the next slide in just a moment. The second way would be to use the Terraform module for Ansible; this way, in Ansible, you can register facts about created resources and easily utilize them in your playbooks, which in the big picture saves you a couple of commands.
The third way would be to utilize Terraform mechanics like local-exec or remote-exec. Those may be sufficient for simple actions; however, even the official Terraform documentation advises other approaches if possible, due to problems with state tracking in such cases, for example.

Okay, so, as promised, on this slide I will briefly show you how you can produce an Ansible inventory file with Terraform. For this, the resource called local_file can be used, and this resource can involve the templatefile function in Terraform to fill a particular template with data. In the example given on the slide, we create two resources, two OpenStack instances, that would be called my-test-server-1 and my-test-server-2, and we identify those in Terraform under the name servers, which is visible in line three. We then use this identifier to pass the data as an array to the template. The generated file will be named terraform-hosts.ini.

About the template file itself: you can see here on the slide that it uses a syntax similar to Jinja2, which basically means you can do not only insertion of variables, but also other constructions, like loops to iterate over some arrays, or conditionals. In this example there will be a group called nodes with two node addresses, identified by the IP addresses from the previous example. It's worth noticing that this inventory file is a regular resource in Terraform, which means it will be created after the terraform apply command is executed, and it will be removed from the file system after a cleanup with the terraform destroy command.

Okay, we have one more example here, for those of you who would like to call the Terraform module from the Ansible side and register some facts about the created resources.
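A minimal sketch of what that combination can look like follows. I'm assuming the module lives at community.general.terraform (it was plain terraform in older Ansible releases), a hypothetical project directory ./infra, and an output named addresses declared in the Terraform configuration; none of these exact names come from the slide.

```yaml
# Assumed Terraform side (declared somewhere in ./infra):
#   output "addresses" {
#     value = openstack_compute_instance_v2.servers[*].access_ip_v4
#   }
- hosts: localhost
  tasks:
    - name: Apply the Terraform configuration
      community.general.terraform:
        project_path: ./infra
        state: present
      register: tf

    - name: Use the registered facts about the created servers
      debug:
        msg: "{{ tf.outputs.addresses.value }}"
```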
To do that, you need to declare the outputs explicitly in the Terraform configuration. Like in the example on top here, we define that we will be interested in an output called addresses, and this one will have a value consisting of an array of IP addresses from our OpenStack servers. Then, in the Ansible playbook, you simply put register on the Terraform task, and you can access this value like you see in line 11: the name of the registered variable, then .outputs, then the name of the declared output.

Okay, now let's discuss some tips and pitfalls to avoid that we have encountered during our job. One of the common pitfalls in Ansible is related to checking whether a system is reachable. This is usually a problem when you run a task to ensure the system is available right after the gather facts action. For those who aren't familiar with it, gather facts is basically the act of collecting a lot of useful information about your remote system: information like the file system layout, networking interfaces, and operating system details. When you run gather facts before checking that a system is available, it will try to reach the system, and if the system is unavailable, it will fail. What we can do in such a case, as you can see on the right side of the slide, is to move the gather facts task to run after the task that ensures the system is reachable. This way we avoid failing on gathering facts, and we properly wait for the system to become reachable before running it.

Okay, the second thing we would like to present here is how you can achieve real-time output streaming from Ansible. When you involve Ansible, or any tool, in CI, you might at some point want to run a long-running task with it, like compilation or launching some test suite. In such cases it's nice to know the current status of those tasks instead of just waiting until they eventually finish. Unfortunately, there is nothing like this built into Ansible yet.
There have been proposals, like the one mentioned on the slide; however, they are not implemented and available yet. However, as Ansible is open source, if you are a Python expert you can patch it and implement such a feature yourself, if you really need it. But if you need it only for some basic shell tasks, like those where you call make or something similar, you may achieve it in a bit simpler way.

The goal would be to utilize the SSH tunneling mechanism, which allows you to create a traffic redirection from some point on the remote host to some point on the local machine, as visible on the slide. All you need to do to achieve it is to run some background process that will read incoming input and display it; in the example here, it's the socat process on line six. Then use the ssh-extra-args option of the ansible-playbook call to open the tunnel. In your shell tasks, you then just need to turn on the redirection of output to the special device called /dev/tcp, using the exec command, like in line nine here on the slide.

Two additional things might be welcome when you launch such code in your CI system. The first would be to ensure a proper cleanup of these background processes. For this, the trap command can be used: basically it works in such a way that the defined function will be executed every time at the end of the script.
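The cleanup half of this can be boiled down to the following runnable sketch. Here a plain sleep stands in for the socat listener, the port number is a made-up example, and the actual ansible-playbook call with the tunnel option is only hinted at in a comment.

```shell
#!/usr/bin/env bash
# Sketch of trap-based cleanup for a background listener process.

sleep 300 &            # stand-in for something like: socat -u TCP-LISTEN:5555,fork STDOUT
LISTENER_PID=$!

cleanup() {
  # kill the background listener if it is still alive
  kill "$LISTENER_PID" 2>/dev/null || true
  echo "listener stopped"
}
trap cleanup EXIT      # the function runs on every exit path of the script

echo "running long task"
# ansible-playbook site.yml --ssh-extra-args '-R 5555:localhost:5555'
```

The trap on EXIT is what guarantees the listener disappears even when the playbook call fails partway through.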
The second helpful addition is to randomize the port number on which the listening process is running, so you can run a few such processes at the same time. For this, you can use Bash and the same /dev/tcp magic to see whether a particular random port is open or not, and if it's taken, just randomize again and pick another port number.

Okay, so we are almost at the end here. Let me now take the honor of concluding this presentation. As final words, I would like to say that we recently started a project to improve automation and testing around OVN. This talk was mainly to announce this and to share with you our discoveries and the new experience we had. I hope that from this presentation you will remember that there are two such tools as Ansible and Terraform. Both of these tools are great; however, their main focus is put in different places, so combining them together in your workflow might bring additional value and improve your work. Please try them.

If you're looking for the slides, those will be available on the summit's web page, as far as I know, and also on my personal website. I would like to thank you for your time and attention. I hope you liked our presentation, and if you have some questions, you are welcome: please feel free to reach us by email and share all your thoughts and questions. Thank you again, and we wish you a great day.