Thank you everyone for coming. If you were expecting the Postgres talk, that was the one before, so you might need to watch the video stream. Yes, so Ansible best practices. I thought about calling it "Ansible: my best practices", so fair warning: these are things I stumbled on using Ansible over the last two or three years, and they are very specific things I found that work very well for me. About me: I also do freelance work, with a lot of Ansible in there, and I'm the Debian maintainer for Ansible together with Harlan Lieberman-Berg, so if there are any bugs in the package, just report them. The talk is roughly divided into four parts. The first part is about why you actually want to use config management, and why you specifically want to use Ansible. So if you're still SSH-ing into machines and editing config files by hand, you're probably a good candidate for using Ansible. The second part is about good role and playbook patterns that I have found work really well for me, and the third chapter is about typical anti-patterns I've stumbled upon, either in my work with other people using Ansible or on the IRC support channel, for example. And the fourth part is advanced tips and tricks, fun things you can do with Ansible. So, a quick elevator pitch: what makes config management good? It also serves as documentation of the changes on your servers over time. If you put the whole config management in a git repo and commit regularly, then when you ask "why doesn't this work? It used to work a year ago", you can actually go back and check why. Also, most config management tools have much better error reporting than your self-written bash scripts. And you usually get very good reproducibility with config management, as well as idempotency, meaning that if you run a playbook several times, you will always get the same result.
Also, it's great if you work in a small team, or you're the main admin in a company with a few people working on some things too; it makes teamwork a lot easier. And in my opinion you will save a lot of time debugging when things break. So what makes Ansible good? Compared to Chef or Puppet, for example, it's really easy to set up. You start with two config files, you have it installed, and you're ready to go. It's also agentless, so whatever machines you actually want to control, the only things they really need are an SSH daemon and Python 2.6 or upwards. That's virtually any Debian machine you have installed that is still supported anyway. Ansible also supports configuring very many things, like networking equipment or even Windows machines; those don't need SSH, they use WinRM instead. But Ansible came a bit late to the game, so its coverage is still not as good as, for example, Puppet, with which you can configure literally any machine on the planet as long as it has a CPU. Next up, I will talk about good role patterns. If you've never worked with Ansible before, this is the point in the video stream where you pause it, spend a few weeks working with Ansible, and then unpause. A good role should ideally have the following layout. In the role's directory, in tasks/main.yml, you have this rough structure. At the beginning of the role, you check for various preconditions, for example using the assert module to check that certain variables are defined, that things are set, that the host is part of a certain group, things like that. Then you usually install packages; you can use apt, or yum on CentOS machines, or you can do a git checkout or whatever.
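As a rough sketch of those first two steps (the role, variable, and package names here are invented for illustration, not from the talk):

```yaml
# roles/webserver/tasks/main.yml -- hypothetical role
- name: Check preconditions before touching anything
  assert:
    that:
      - webserver_domain is defined                 # a required variable
      - inventory_hostname in groups['webservers']  # host must be in the right group

- name: Install packages
  apt:
    name: [nginx]
    state: present
```

The assert task makes the playbook fail early, with a clear message, instead of half-applying a role to a host that was never meant to get it.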
Then you usually do some templating of files, where you have a certain abstraction and the variables get substituted into the template to produce the actual config file. It's worth pointing out that the template module has a validate parameter. That means you can run a command to check your config file for syntax errors, and if that check fails, your playbook fails before the config file is actually deployed. You can, for example, run Apache with the right parameters to check the syntax of the file. That way you never end up in a state where a broken config has been deployed. At the end, when you change things, you trigger handlers to restart the affected services. If you use variables, I recommend putting sensible defaults in defaults/main.yml; then you only have to override those variables in specific cases. Ideally the defaults alone should give you a sensible, working setup of whatever it is you want to run. When you work with this a bit more, you notice a few things, and one is that your role should ideally run in check mode. ansible-playbook has a --check option that is basically a dry run of your complete playbook, and with --diff it will show you, for example, file content changes or file mode changes, without actually changing anything. So if you end up editing a lot of stuff, you can use that as a check; I'll later get to some anti-patterns that break check mode. Also, in the way you change files, config, and state, you should make sure that once the changes are deployed and you run the playbook a second time, Ansible doesn't report any changes. Because if you write your roles sloppily, you end up with a lot of spurious changes, and at the end of a run the report says 20 changes; you know 18 of them are always there, and you miss the two important ones that actually broke your system.
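A sketch of the templating step with validation, plus a handler. The talk mentions Apache; the classic example from the Ansible docs validates an sshd config, which is what I'll use here:

```yaml
# tasks/main.yml
- name: Deploy sshd config, but only if it passes the syntax check
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
    validate: /usr/sbin/sshd -t -f %s  # %s is the temp file; deploy only if this exits 0
  notify: restart ssh

# handlers/main.yml
- name: restart ssh
  service:
    name: ssh
    state: restarted
```

The handler only fires when the template task actually reports a change, so an unchanged run never restarts the service.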
So if you want to do it really well, you make sure it doesn't report any changes when you run it twice in a row. Another thing to consider: you can define variables in the defaults folder and also in the vars folder, but if you look up how variable precedence works, you'll notice that variables from the vars folder are really hard to override. So you want to avoid the vars folder as much as possible. The much larger section is about typical anti-patterns I've noticed, and the first one is the shell or command module. When people start using Ansible, that's the first thing they reach for: "oh, I know how to use wget", or "I know apt-get install", and they end up using the shell module to do just that. You usually don't want to use the shell module or the command module, for several reasons. There are currently, I think, around 1,300 different modules in Ansible, so there's a big chance that whatever you want to do, there's already a module that does exactly that. But those two modules also have several problems of their own. The shell module's argument, of course, gets interpreted by your actual shell, so if there are any special characters in there, you also have to take care of how the shell interprets the string. Then one of the biggest problems: if you run your playbook in check mode, shell and command tasks don't get run, they just get skipped, and that causes your check-mode runs and your real runs to diverge if you use the shell module a lot. And the worst part is that those two modules always report back "changed". You run a command, it exits zero, and Ansible says "it changed". So to get the reporting right on those modules, you actually have to define for yourself when a task counts as changed.
You'd probably have to register the output and then check, for example, whether there's something on standard error, to report an actual error or change. Now to the examples. On the left is a bad example of using the shell module; I've seen this a lot. It's basically: "oh, I actually want this file", so you run `cat /path/to/file` with the shell module and use the register parameter to capture the output. The output of the shell command goes into a variable, and then we want to copy it to some other file somewhere else, so we use the Jinja2 double curly brackets to put that variable's content into the destination file. That is problematic because, first of all, if you run it in check mode, the shell task gets skipped; the registered variable is then undefined and Ansible fails with an error, so you can't run that playbook in check mode at all. The other problem is that it always reports back "changed"; the most sensible fix would be to set changed_when: false and acknowledge that the cat command won't change anything on your system. The good example is to use the slurp module, which just slurps in the whole file, base64-encoded. You can access the content via the registered variable's content attribute, base64-decode it, and write it out. The nice thing is that slurp never reports a change, so it won't say "changed", and it also works great in check mode. Here's another quick example. The example on the left: "oh yeah, wget". The problem here is that every time your playbook runs, this file gets downloaded, and of course if the file can't be retrieved from that URL, it throws an error, and that will happen every single run. The example on the right is cleaner, using the uri module: you define a URL to retrieve a file from, and you define where you want to write it to.
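The slurp version of the "good example" might look like this (the paths are placeholders):

```yaml
- name: Read the source file; never reports "changed" and works in check mode
  slurp:
    src: /path/to/source
  register: source_file

- name: Write the contents somewhere else
  copy:
    content: "{{ source_file.content | b64decode }}"  # slurp returns base64-encoded content
    dest: /path/to/destination
```

The copy task also reports "changed" only when the destination actually differs, so repeated runs stay quiet.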
And you use the creates parameter to say: skip the whole task if the file is already there. set_fact, that's my pet peeve. set_fact is a module that allows you to define variables during the playbook run. You can say set_fact, and then this variable equals that variable plus a third variable, or whatever. You can do things with that, but it's very problematic, because your variables end up changing during the playbook run. And that is a problem when you use the --start-at-task option of ansible-playbook, because that option lets you skip forward to a certain task in the role: it skips everything up to that point and continues running from there. That's really great for debugging. But if you define a variable with set_fact and you skip over that task, the variable will simply not be defined. So if you use set_fact heavily, that makes prototyping really horrible. Another point: you can use `ansible -m setup <hostname>` to check which facts are actually defined for a specific host, and everything set with set_fact is just not there. So in summary: avoid the shell module, avoid the command module, avoid set_fact as much as you can. And don't hide changes with changed_when. The clean approach is always to use one task to check something and then a second task to actually execute something, for example. Also a bad idea, in my opinion, is when people say "it's not important whether this throws an error or not, I'll just set failed_when: false". That might work sometimes, but the problem is that if something really breaks, you'll never find out. Advanced topics. This is about templating. The usual approach for, say, a postfix role would be the following templating. You define certain variables in, for example, group_vars for the postfix servers group, so any host in that group inherits these variables.
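Reconstructed from the description (the actual values on the slide aren't in the transcript, so these are illustrative), such group_vars might look like:

```yaml
# group_vars/postfix_servers.yml -- illustrative values
postfix_smtpd_recipient_restrictions:
  - permit_mynetworks
  - reject_unauth_destination
postfix_smtpd_helo_required: "yes"
```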
Here, this is a list of parameters for smtpd_recipient_restrictions, and this is just smtpd_helo_required. So the usual approach is: you define variables in host_vars or group_vars, or even in the defaults, and then you have a template where you check every single variable to see whether it exists, and if it exists, you put the actual value there in place. So here I check whether this variable is set to true, and if yes, I output the string. And for smtpd_recipient_restrictions, I iterate over the array and output the values, in order, as that list. The problem is that every time upstream defines a new setting, you end up having to touch both the actual template file and the actual variables. So I thought: on the one side you have keys and values, strings, arrays, and hashes, and actually a config file is nothing else than that, just in a different format. So I came up with this: with Jinja2, you can also define macros, i.e. functions. I'll have to cut the explanation a little short, but basically, up here a macro is defined, and it's called down here at the bottom. What it does is iterate over the whole dictionary defined here, postfix_main: it iterates over all the keys and values, and if a value is a string, it just outputs "key = value", and if it's an array, it iterates over it and outputs it in the format that postfix actually wants. And you can do the same for, say, HAProxy; you can just serialize all the variables you define this way. The advantages of this: your template file stays the same and doesn't get messy as you start adding things. You have complete whitespace control; usually when you edit templates by hand, you somehow get an extra newline in there, and that changes the generated files on all machines. And you get all the settings in alphabetical order.
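A minimal sketch of such a macro-based template, assuming the dict variable is named `postfix_main` (the exact template from the slide isn't in the transcript). Ansible's template module trims the `{% %}` block lines by default, so what remains is the sorted `key = value` output:

```jinja
{# templates/main.cf.j2 -- serialize a dict into postfix's "key = value" format #}
{% macro dump_settings(settings) %}
{% for key, value in settings | dictsort %}
{% if value is string %}
{{ key }} = {{ value }}
{% else %}
{{ key }} = {{ value | join(', ') }}
{% endif %}
{% endfor %}
{% endmacro %}
{{ dump_settings(postfix_main) }}
```

`dictsort` is what gives the alphabetical ordering; new upstream settings only require adding a key to `postfix_main`, never touching the template.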
So when you run it and look at the diff, you don't have settings jumping back and forth. And once you get the syntax of the template file right, you never have to touch it again, so you also can't introduce syntax errors by editing it. That leads to the next one: you can set hash_behaviour = merge in the Ansible config, and that allows you to do the following. On the left here, you define a dictionary, say in a group, and then on a specific machine you define another setting in the same dictionary. Without merge, the second definition would simply override the first one, and you'd end up with only that. But with merge, Ansible does a deep merge of the hashes. The macro-based templating I showed before really benefits from this, so the combination of both is really good. Then I'll skip that. Yeah, further resources: Ansible simply has really good documentation, check that out. There's the IRC channel, and there's also DebOps, which is a project specific to Debian and its derivatives. So, yeah. That's it. Thank you very much.