So just in case anyone doesn't know what room you're in: you're in Avoid Deep Hurting: Deployments and Beyond. Are there any note takers in the room? Go ahead, raise your hand, that's fine. Okay, this presentation has links in it. I do not spell out all of the links, but you can find the entire presentation already online at github.io. Yes, that is case sensitive, make sure you get that right. I'll give everyone a second to take some pictures so that we can get started. Okay, still seeing two or three phones, five, six, eight phones, oh dear. The DA does that, but if I have the opportunity, I will try to put a link in as well. Okay, hopefully everyone's got the URL now. I still see one person taking a photo, thank you.

So you might be wondering: who is this weird person in front of you, and do they really get that excited when they have a boarding pass for a business trip, and do they really throw t-shirts at people? If you were here three minutes ago, you know that answer already. My name is Tess, known online as socketwench. That's wench, not winch. I am the module maintainer of Flag, Flag Friend, and Flag Examples. And I am a DevOps engineer at a little company called TEN7. TEN7 is a full service web firm based in Minneapolis; we are now fully distributed, actually a remote-only company. We do Drupal, design, user experience and hosting. Say hi to the wonderful humans at TEN7 at ten7.com.

So let's talk about deploying Drupal, shall we? Does anyone remember FTP? FTP had a lot of problems. First of all, it was insecure: it sent the password in plain text over the wire so that anyone could intercept it. You also had problems with permissions and modes.
Let's say that you are really excited, you're ready to do your deploy, and you're going to upload all your files to your host. You go and hit the upload button in FileZilla, you walk away, and about five minutes later you come back and say: oh, the upload's done, everything's great. Until you find out that only half of the files were uploaded, and all of your images uploaded in text mode so they're all garbled, and it's a huge fricking mess, and you're going to spend hours trying to figure out what the hell happened. And often, when we had shared hosting, this was the only option available to us.

And then Git came along, and it was like the heavens opened and the angels sang, and it was wonderful. All we had to do was SSH into our server, change to whatever directory hosted the site, do a git pull, and done. It was wonderful, right? Well, there was always something that we had to do after the git pull. We had to update the database. We might have had to do a cache clear. We could do all of this stuff in the web UI, but it wasn't recommended, particularly for larger sites that might have HTTP timeout problems. So you might have to get Drush involved, and okay, fine. That's still pretty complicated, but if it had stayed that way, we might have been okay. It wouldn't have been too many steps to remember. But it didn't stay that way. We wanted it to, but it didn't.

So we do our SSH, we change the directory, we do a git pull. Now, in Drupal 8, we have to do a composer update. We have to do a configuration management import, or in Drupal 7, a feature revert all. We have to do a Compass compile, to compile all of our CSS preprocessor files. Then we do the updb, then the cache rebuild. It's just so much stuff, you have to go and write it all down, and no one likes writing stuff down.
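Spelled out as commands, that manual checklist looks something like this. This is a sketch, not the speaker's exact script: the host, path, and site layout are made up, and the Drush commands shown are the Drupal 8 ones.

```shell
ssh deploy@example.com
cd /var/www/mysite
git pull
composer install         # bring dependencies in line with the lockfile
drush config-import -y   # Drupal 8 configuration management import
compass compile          # rebuild CSS from the preprocessor sources
drush updatedb -y        # run pending database updates
drush cache-rebuild      # clear and rebuild all caches
```

Miss one of these, or run them in the wrong order, and the deploy breaks.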
We want to get stuff done, not write notes to ourselves. But it didn't end there. The pain did not stop. What about Grunt and Gulp tasks? For third party integrations, you might have to deploy a key file or do some other kind of post-deploy configuration to make your thing work. You might be running on PHP 7 with OPcache or APC, and now you have to clear that to make sure all the new stuff shows up. You might be running behind a reverse proxy caching web server, and you have to do a Varnish ban to make sure everything shows up. That's already a lot of stuff, and then it gets worse, because what happens when things go wrong? What happens if your git pull fails, or your styles don't show up, or you get uncaught exceptions? When things go wrong, it gets really, really bad. You may have missed a deployment step. You might have forgotten to gitignore something. Git desperation: you might have done chmod -R 777 all the things. And then you might need to roll back, and it all culminates in Deep Hurting. Deep Hurting. If anyone doesn't know where that's from, it's from a Mystery Science Theater 3000 episode called Hercules Against the Moon Men. You can find it on Netflix and YouTube.

So the big problem is that human beings are fallible. Good documentation does not remove human error. Typos and missteps can kill a deploy dead, and the problem is that Git alone isn't enough. It's a code history tool, not a deployment tool. So, we're good Unix admin people here, we're all DevOps-y types, so what do we do? We're gonna write a script. Scripting fixes everything. Scripting makes the world go around. That way I don't need to write any docs, I just write the script and I'm done. Well, it does solve some of your problems, sure. You just run the script and you're done, right? Okay, well, if you don't set up your script right, it might fail on a particular step but keep executing, and then you have another problem that will take a while to debug. It reduces operator error though, right?
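Here's a tiny, runnable sketch of the state-checking boilerplate such scripts accumulate. The /tmp path is purely for illustration; the real-world version wraps git clone versus git pull, every mkdir, and every other step in the same kind of check.

```shell
#!/bin/sh
set -e                      # stop on the first failing step
SITE=/tmp/demo-deploy-script

# Every step needs an existence check, or a re-run fails halfway through.
if [ ! -d "$SITE" ]; then
  mkdir "$SITE"
fi
if [ ! -d "$SITE/releases" ]; then
  mkdir "$SITE/releases"
fi
echo "deployed" > "$SITE/releases/marker"
```

Run it twice and it still works, but only because of the ifs. Multiply that by every deployment step and the script is as complex as the application it deploys.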
I mean, it runs all the steps the same each time, right? Well, what if one step requires a bit of external state, and I have to check for that? Oh, geez. So it doesn't remove all of the operator errors after all. And writing scripts is painful. They're covered with obscure Unix-isms. Who knows what set -x is? I had to look it up four times the last time, and I've already forgotten. It's even worse than that: when you write scripts, they're full of odd programming conventions, because in Bash we don't need them fancy curly brackets. We'll just write if backwards (fi) and case backwards (esac). Worse than that, repeatability adds a ton of complexity to your scripts. You need to check the state first. You might have put a mkdir in your script, and now you have to check whether the directory already exists, because if you don't, your script will fail. It litters your scripts with tons of ifs and else-ifs, and suddenly this is just no better. The problem is that scripts are like tribbles: they start small, but they quickly become a huge problem, because the complexity of maintaining those deployment scripts approaches that of application code. They're code, not configuration.

So let's talk about config management. There are lots of configuration management tools out there that you can use. Tons of them; several of them have booths here at DrupalCon. The one I like, though, is called Ansible. Ansible is made by Red Hat. It calls itself "radically simple IT automation," and it is open source on GitHub. That's a really, really good marketing line, but what the heck does it actually mean? Ansible takes three inputs. First, a list of things that it needs to deploy: a playbook. Then, a list of targets to deploy those to: an inventory. And then some variables that you define to control what it does.
Ansible takes all of this input in on what we call the controller. Ansible then connects to a remote system over plain SSH. It takes those inputs and compiles them on the fly into Python scripts. Those Python scripts execute on the remote system, and typically they invoke other utilities that are already on the system, like file utilities, MySQL database clients, or even things like Drush. That's what Ansible does.

It also has a lot fewer dependencies. It just needs SSH and Python 2. This means it works on most shared hosts out of the box; you don't have to do anything extra. The installation is really, really easy. On most Debian-based Unixes, you use apt-get install ansible. On macOS you can use Homebrew: brew install ansible. But the most popular way to install Ansible is actually the Python package manager, pip (not a Farscape reference): pip install ansible gets you the most recent version directly from them. Ansible does tend to rely on those external utilities, and also on external libraries, so sometimes you'll also have to install things like the Python MySQL library in order to work with databases.

One of the big advantages of Ansible is that it's agentless. You don't need to install anything on the target server. There's no weird background daemon running on port 1421 or whatever the heck it is in order to do all of the configuration management stuff. It's just SSH, and it's just public keys. You don't need some fancy GUI to set anything up, just normal SSH. If you don't know how to set up public keys, you really, really should; there's an excellent tutorial online all about setting up SSH public keys. Ansible is also serverless: there's no central Puppet-master-style server that you have to manage. Wherever Ansible is run from is designated the controller.
That can be an integration server in your data center. It could be your laptop. It could, in fact, be the system that you're managing itself, operating in so-called local mode.

So what do those inventories look like? They specify the targets to manage, and they're just a text file of IPs and host names. Ready for how complicated this is? I missed a step, I always miss a step here: there's also a global inventory at /etc/ansible/hosts. Here's what an inventory file looks like. There you go. That's it. It literally is just a list of IP addresses separated by newlines. Now, it can get more complicated than that. You might want to take some of your servers and group them together under a more commonly used name, say a live cluster, and you can use a bracket syntax to create a group. Sometimes, however, you do want Ansible to run locally without executing SSH, to just act on the same system Ansible's running on. You do that by passing it the variable ansible_connection=local.

Ansible is also really easy to read. This is the one thing that I really like about it: all the code is in YAML. If you know Drupal 8, you know YAML. You can read a playbook. Congratulations, you are all DevOps engineers now. Execution also runs from the top to the bottom. There's no weird dependency graph or a whole bunch of stuff that it has to figure out, no weird complex server model. Start at the top, finish at the bottom. That's it.

So let's look at a playbook. We have those three dashes at the top; that means this is a YAML file. We specify what hosts we're going to target, and this one targets all the hosts in our global inventory. We define some variables; we have one called hey_look that just has a string in it. And then we have a series of tasks. Each one of those can have a human-readable name that's for us, not Ansible.
And then after that it has this thing called stuff_doer. That's a module name. The module is the component that does stuff on the target system, and it usually takes one or more parameters. You can use the variables that you defined as those parameters. Ansible also comes with all the batteries included. You don't have to go and get the battery or plug it into a charger; it's already done. All the modules that provide the functionality are built in to Ansible as soon as you install it. I actually have a container that has Ansible on it, and it's only about 70 megabytes, so it's really small too. You can find out more about all of the modules on docs.ansible.com.

So let's start making our lives better with Ansible. Here's a basic playbook that does a git clone. You can see that we're using the git module. We've specified a variable called git_directory. We specify what the repo is, where to put it, that we want to clone it, that we want to update it, and what branch we want to use. Notice what you don't have to write here. You don't have to check whether the directory exists. You don't have to say: if it doesn't exist, create it, and then do a clone and not a pull. You don't have to make sure that you're pulling in the right directory, or that it's actually the correct repository in the first place. You don't have to tell it to do a pull instead of a clone. It does all of this for you. It's very expressive with a minimal amount of syntax.

So how do you run this playbook? You use the ansible-playbook command and specify the playbook to run. But there's a problem: we didn't specify where. Remember, in our playbook we said hosts: all, which means it's going to use the global inventory. That would target every system we have. We don't always want to do that. What if we want to target this client site or that client site? We don't want to target everything. So what we can do is narrow our targets.
We could update the hosts parameter in the playbook to target particular clients, but then we're editing the playbook every time, and it's a mess, and it's not very dynamic. The best way is to pass the -i parameter to ansible-playbook; then you can specify an alternate inventory file.

What does this look like in your repository? Typically you'll have an ansible directory that contains all your Ansible stuff. Inside of that you'll usually have an inventories directory that stores all of the alternate inventory files. We'll have two of them here: one for our live environment and one for a stage environment. Likewise, we'll also create two environment-specific playbooks to do a deploy, one for the live environment and one for stage. This way every environment has its own inventory and its own playbook.

Okay, let's fix Git deploys. So far we're using Ansible, but we're still stuck with all of Git's problems. When we do a pull, all of that Deep Hurting comes back to us. We don't want that. How can we fix all of those Git problems? When I started to think about this, I realized the big problem is that our Git directory and the directory that the web server looks at, the web directory, are the same. What if they weren't the same anymore? The thing is that Git deploy is like hook_menu: it should be doing one thing well, not several things just okay. We need to make them more specific. So here's how we solve this problem at TEN7. We have a Git repository directory, and we have the directory that the web server's looking at, the web directory. When we start our deployment process, we do a git pull in our Git directory. Then we rsync the directory inside of the repo that contains our Drupal site over to a temporary directory, the build directory.
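Concretely, that setup might look like this; the hosts, paths, and repo URL here are made up for illustration. An inventory file is just a list of hosts, optionally grouped:

```ini
# ansible/inventories/live
192.0.2.10
192.0.2.11

[live]
192.0.2.10
192.0.2.11
```

And a minimal deploy playbook of the shape described, using the git module:

```yaml
# ansible/deploy-live.yml
---
- hosts: all
  vars:
    git_directory: /var/www/repo
  tasks:
    - name: Clone or update the site repository
      git:
        repo: "https://example.com/mysite.git"
        dest: "{{ git_directory }}"
        version: master
        clone: yes
        update: yes
```

You then run it against the alternate inventory: ansible-playbook -i ansible/inventories/live ansible/deploy-live.yml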
Then we do all of the stuff we can do without touching the database: our Compass compile, our Grunt and Gulp tasks, all of that other work that doesn't touch a database. Then we rename the web directory to web_old. At this point the site is offline; we don't have a running site at this moment. Fortunately, it doesn't take long to get back online, because we just rename the build directory to the web directory, and Apache and Nginx don't know the difference. After that we do all of the post-deployment tasks that do touch the database: our configuration import, our cache clear, our database update. And then we're finished; our site is online.

There are a number of advantages to this. There's no more gitignore hell, because the Git directory does what Git does best: it mirrors the repo. It doesn't have to do anything else. We no longer have to run git clean -df. Also, a lot of the work is ready before the site goes live. All of those artifacts, the Grunt output, the compiled CSS preprocessor files, all those generated assets, are created before we switch over, so we never end up serving any user a half-built site. There's also zero, quote-unquote, downtime, because the directory move happens on the same disk. On Unix systems, moving a directory within the same disk is a very fast operation; it literally takes milliseconds. And we do it last in our deployment process: we do everything else before we touch a database, minimizing the number of steps that happen after we move the new version over. This way, if something breaks the build, if we have a Git problem, or something is down and we can't reach it, or we can't generate an artifact for whatever reason:
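The go-live swap itself is just two renames on the same filesystem, which is why it's effectively instant. Here's a runnable sketch, using a throwaway /tmp path in place of the real web root:

```shell
#!/bin/sh
set -e
SITE=/tmp/demo-swap
rm -rf "$SITE"                       # start clean so the sketch is repeatable
mkdir -p "$SITE/web" "$SITE/build"
echo "old site" > "$SITE/web/index.html"
echo "new site" > "$SITE/build/index.html"

# The swap: the site is "down" only between these two renames.
mv "$SITE/web" "$SITE/web_old"       # old site offline, kept as a restore point
mv "$SITE/build" "$SITE/web"         # new build live; Apache/Nginx never noticed
```

Because a rename on the same disk only rewrites directory entries, no data is copied, which is where the "zero downtime" claim comes from.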
Your site never went down. It's still up, because we never touched the web directory, only the build directory.

All right, let's add that process to Ansible. First we make sure that our build directory exists, using the file module. It has a state of directory, so we want the file module to make a directory, with a particular file permission mode, owned by a particular group. After that, we copy the web files over from the repo to the build directory using the synchronize module. The synchronize module just runs rsync. We specify the source directory, which is our git_directory; our Drupal site is under the docroot subdirectory in the Git directory. For dest, we specify the destination as a variable. And we say: if another build directory already existed, from, say, a failed build, just go ahead and delete it, we don't care. Now, don't forget the trailing slash, because if you've ever used rsync, you know that if you don't put that trailing slash at the end of the source, it will copy the directory itself too. You want to copy the contents of the directory. Thank you, Mini-Me.

All right, now we can do the go-live. It's just a move command. To do basic shell commands in Ansible, we use the shell module, and the shell module has a few nice features: we can specify that a command creates this file or directory, or removes that file or directory. That way we have a nice check that things actually are what we think they are; we've maintained state. We move the web directory to web_old. Next, we move the build directory to the web directory. Second verse, same as the first. All right, so everything's great, we're back online, and... what is it, Mini-Me? The old directory. Damn it, I forgot about the files directory. When we moved the web directory to web_old, the files directory went with it, right? Isn't that a big problem?
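Those build-and-swap steps as Ansible tasks might look roughly like this. This is a sketch: the variable names, the docroot subdirectory, and the www-data group are assumptions, not the exact TEN7 playbook.

```yaml
- name: Ensure the build directory exists
  file:
    path: "{{ build_directory }}"
    state: directory
    mode: "0755"
    group: www-data

- name: Copy the site from the repo to the build directory
  synchronize:
    # Trailing slash: copy the *contents* of docroot, not the directory itself.
    src: "{{ git_directory }}/docroot/"
    dest: "{{ build_directory }}"
    delete: yes

- name: Take the old site offline
  shell: "mv {{ web_directory }} {{ web_directory }}_old"
  args:
    removes: "{{ web_directory }}"
    creates: "{{ web_directory }}_old"

- name: Put the new build live
  shell: "mv {{ build_directory }} {{ web_directory }}"
  args:
    removes: "{{ build_directory }}"
    creates: "{{ web_directory }}"
```

The creates/removes arguments give the shell module the state check described above: each mv only runs when the directories are in the expected state.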
Now all of our files are gone, we might get permission problems, and all that pain starts coming back to us. How do we fix that? We don't want to move the files directory back on each deploy. In fact, we don't want to move the files directory ever. So here's where our site directory is: it's under public_html. Under public_html we have sites/default like you'd normally expect, but the files directory isn't there. The files directory is actually in the root, outside of the public_html directory. The way we get to it is we create a symlink. The symlink tells the Unix operating system, and thus Apache and Nginx, to go to this other directory whenever that path is accessed. You can create as many symlinks as you want to that files directory, so you don't need to worry about multiple copies of stuff. This way the files directory never moves; it stays in place.

How do you do that? Use our Swiss Army knife, the file module, again. You do a state of link, specify the source directory, which is where we keep the files, and then where to put the link, plus mode and group. The link is now an artifact: we build it as part of our site deployment, before all of those other steps, before we ever touch the database. So the files directory looks like it always existed and was always in the same place.

So what about those post-go-live tasks: config import, database updates, cache rebuilds? We use the shell module again, but this time we specify which directory to execute all of those commands in using chdir. So there you go.

But what happens if you need to roll back after all of this? You do the whole deploy and now you have to roll back, and now your hair is on fire, because you have to go and pull a database backup from who knows how long ago in order to restore things, and it's a mess.
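As tasks, the symlink and the post-go-live commands might look like this. Again a sketch: the paths and group are assumptions, and the Drush commands shown are the Drupal 8 names.

```yaml
- name: Link the persistent files directory into the new build
  file:
    src: "{{ site_root }}/files"
    dest: "{{ build_directory }}/sites/default/files"
    state: link
    group: www-data

- name: Run the post-go-live tasks that touch the database
  shell: |
    drush config-import -y
    drush updatedb -y
    drush cache-rebuild
  args:
    chdir: "{{ web_directory }}"
```

The symlink task runs during the build phase, so the files directory is already in place before the directory swap ever happens.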
So how do we create a restore point on every build? We keep the last site build; we already do that, because we have web_old. We archive the config sync directory, so that we have the exact configuration we expect it to have. And we create a database dump. In order to make sure our backups have unique names, we first generate a timestamp. We use the set_fact module to create a new Ansible variable called timestamp, and we get its content from the date command in Unix. Now we have a variable that holds the current date. Then we do a config sync backup, using the archive module to create a new backup under our backup directory, named with the timestamp. We don't even need to specify the compression method, because the file extension tells Ansible which one to use. Then we back up the database using the mysql_db module. We say: make a dump, here's our login information, dump it into this target with that compression, and it takes care of all of that for us. That's everything we do to create a backup on every single deploy. And we do this as the very first thing: if we can't take a database backup, stop the build. We want a backup every single time, because 10 minutes of panic is better than three hours of panic.

But there's still a problem. You're sitting there, being a nice DevOps person, and someone says you have to go deploy your site, and it's like: I still have to run the damn thing manually. I don't want to run the damn thing manually. I want the server to do it for me. I want to be lazy. And believe me, I've been in this situation before. Your developer sends you a Slack message saying: can you deploy this code to stage?
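A sketch of those backup tasks; the variable names and backup layout are assumptions, and mysql_db needs the Python MySQL library mentioned earlier.

```yaml
- name: Generate a timestamp for this build's backups
  set_fact:
    timestamp: "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"

- name: Archive the config sync directory
  archive:
    path: "{{ web_directory }}/sites/default/files/sync"
    # The .tar.gz extension tells Ansible which compression to use.
    dest: "{{ backup_directory }}/sync-{{ timestamp }}.tar.gz"

- name: Dump the database
  mysql_db:
    state: dump
    name: "{{ db_name }}"
    login_user: "{{ db_user }}"
    login_password: "{{ db_password }}"
    target: "{{ backup_directory }}/db-{{ timestamp }}.sql.gz"
```

Placed at the top of the playbook, a failure in any of these tasks stops the build before anything destructive happens.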
And of course, because you aim to please and have the self-loathing nature of any DevOps person, you say: sure, yeah, give me a few minutes, I'll take care of it. Two hours later, the developer is wondering: what the heck is going on? Where are my changes? They didn't show up anywhere. Did I do something wrong? Because they're a good self-loathing developer as well. And the cycle of confusion continues. I call this Slack-to-deploy. It bottlenecks testing and releasing your code, because it means an ops person has to actually go and do the deploy for you. You don't want that. It makes your developers' lives hell, but it also makes your ops people's lives hell, because it turns ops into a never-ending firefight. In addition to fixing stuff, you're always deploying stuff, or making stuff better so that you can deploy it better the next time. You might as well say that "hair on fire" is a hairstyle, not a job description.

That's why we need something called continuous integration. Continuous integration is a $2 word for a $10 concept. It basically means: commit smaller pieces of code every day, sometimes several times a day, and then deploy them as often as possible. But because only you can prevent HEAD fires, you need automation to make sure that CI actually works. So we need a CI server. A CI server monitors your repo for something that happens, usually a push, and then performs actions in response to those events. What does that look like? Your developer pushes to the Git repo. The CI server receives an event from that repo. It reads some kind of CI rules, files, or database to figure out what it needs to do. Often that means connecting to a remote system to upload some source code somewhere. That's what a CI server does.
There are a lot of different CI servers out there, but the one that I happen to like is called GitLab CI. It is free and open source, and it is integrated with GitLab Community Edition. You can get it a variety of different ways. You can go to gitlab.com and get on the free tier; you get unlimited private repositories as well as CI for free. You can also download it and self-host it yourself; the installation does take a little bit of unique skill to figure out. Or, if you're lazy like me, you'll probably want to run it in Docker so that you don't need to worry about doing all of that server stuff. This is the Docker Compose file that I use to power my ridiculously named integration server. So yes, you too can run GitLab and GitLab CI on your system right now.

All right, so we have our GitLab and GitLab CI server set up. How do we get it to talk to the web server? We already know that GitLab and GitLab CI talk to each other; they have a natural integration because they're sister products. GitLab CI is also going to need some kind of connection information in order to get to the remote systems, and we also need to know what steps to execute. In GitLab's implementation, that comes from the repository itself: in your repository there's a file called .gitlab-ci.yml, and it basically maps a repository event to a series of steps to execute. Here's what a .gitlab-ci.yml file looks like. We have some stages (don't worry about that), we have a job, we have a number of tags which identify what kind of stuff we're going to be doing, and we have our script. This is our entire deployment script, and we execute it when someone pushes to the master branch. This is a real production file that we use at TEN7. That's our entire CI script.
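A file of the shape being described might look like this. The job name, tag, and playbook paths are made up; the real TEN7 file will differ in its details.

```yaml
# .gitlab-ci.yml
stages:
  - deploy

deploy_live:
  stage: deploy
  tags:
    - mysite-live          # must match a tag on a registered runner
  script:
    - ansible-playbook -i ansible/inventories/live ansible/deploy-live.yml
  only:
    - master               # run this job only on pushes to master
```

The script section really is the whole deployment: one ansible-playbook invocation, with everything else living in the playbook.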
Everything else is in an Ansible playbook. If you have multiple environments, you just create another job and point it at a different environment. So we have different inventory files, different playbooks, different jobs for everything. How does this look in your repo? You have your .git directory, you have your ansible directory with your per-environment playbooks, and at the very root of your repository you have the .gitlab-ci.yml, which tells GitLab what to do, and when, with those environment-specific playbooks.

But what about where? Where's the connection information? We don't have that yet. We need something called a runner. A runner is the process that executes the builds. They're typically created for each individual project, and most importantly, they provide the connection information. The runner takes several tags that match up with the .gitlab-ci.yml file, so GitLab knows, for each repository, which runner needs to execute which job. It has the SSH login that's necessary to connect to a remote system, and it gets the rest of what it needs from the .gitlab-ci.yml file. It connects to a remote system over SSH and instantiates a new shell session, and from there it can run any command that we need. It can also clone the repository for you, so that you always get the correct code for that build. Then you can run Ansible, and any other special commands after that.

So this is what the command looks like to create a new GitLab runner. It looks complicated, but it basically says: the URL is where the GitLab server is; the registration token tells it which repository it's linked to; the tags correspond to the tags in the .gitlab-ci.yml; and then there's the connection information.
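With the ssh executor, registering such a runner might look roughly like this. The URL, token, tag, and host are placeholders, and flag names can vary by version, so check gitlab-runner register --help before relying on this.

```shell
gitlab-runner register \
  --url https://gitlab.example.com/ \
  --registration-token TOKEN_FROM_THE_PROJECT \
  --tag-list mysite-live \
  --executor ssh \
  --ssh-host web1.example.com \
  --ssh-user deploy \
  --ssh-identity-file /home/gitlab-runner/.ssh/id_rsa
```

The tag-list here is what ties this runner to jobs in a .gitlab-ci.yml that carry the same tag.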
Now, there are two different methods for how to configure all of this. Normally you might have the GitLab runner connect to the target web server, and then have Ansible run locally there to run your playbook. This is great for IaaS and other services that actually provide you a raw server where you can install custom utilities. But what happens if you're running on shared hosting, where you basically only have Python? Well, you can run it the other way around: the runner can run on the CI server itself, and Ansible reads the inventory file and connects to the shared host instead. This method works a lot better for shared hosts, because the only things that need to be on the remote host are whatever commands you're going to execute, and Python. And Python is in practically every Linux distribution out there, so nine times out of ten it just works.

But we're not done yet. I forgot something, didn't I? You're sitting there going: great, my life as an ops person is wonderful, I can actually do some development stuff now. I'm gonna push code to my repository, the CI does the rest, and I can be a lazy bum. And then you press the deploy button, and suddenly your site is dead. Because you forgot: the database login isn't in the repository, and we just deleted our old live site. We don't have it anymore; we made a new one. So where does it get that? The old axiom still holds true: we never want to put the database login inside our repo. There's also a whole bunch of other stuff we might want to keep out: API keys, cloud identity keys like an AWS IAM key, maybe hash salts (there's a pedantic security argument over whether hash salts belong in your repository or not). All right, so if you keep all of that stuff out of the repository, does that mean every time someone does a push, you have to SSH in and add all of that stuff back? What's the point?
CI should be doing that stuff for me too, but it needs to do it in a secure fashion. How? GitLab provides a feature called secure variables, and it's brilliant. It's a unique namespace per repository: you define a number of key-value pairs per repository, and the values are encrypted on the GitLab server. They're really easy to define and edit, because it's all done through the web UI; you don't need to worry about anything else. But how do these variables get to Ansible? When GitLab CI does a build, it exposes all of these secure variables as environment variables in the build's shell session. Then you chainload those into your Ansible variables. It looks like this: you have your regular Ansible variables, and then you have the secure variables you're grabbing from the environment.

You might be thinking: Tess, that sounds great, but what the hell are you thinking? Environment variables are super mega insecure. We don't want to do that in production. Are you cracked? The thing is, GitLab CI environment variables actually do work here, and the reason is that they're only set during the build process. They're not persistent. They're also only set for the build's shell session, so you would literally have to get to the build while it's running and break into that session to get at them. And because those variables aren't persistent, the attack surface is a lot smaller. So you don't need to worry so much about that.

All right, so our build is taken care of: every time we run the playbook, GitLab CI exposes those environment variables with the database credentials, those credentials become Ansible variables, and we can get to everything. But once we're done building all of this stuff, we still have to get it into settings.php. How do we do that? Where does the site get its logins?
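Chainloading the environment into Ansible variables might look like this. The variable names and the DB_* keys are made up; lookup('env', ...) is the standard Ansible way to read an environment variable.

```yaml
- hosts: all
  vars:
    site_name: mysite
    # Secure variables exposed by GitLab CI as environment variables:
    db_name: "{{ lookup('env', 'DB_NAME') }}"
    db_user: "{{ lookup('env', 'DB_USER') }}"
    db_password: "{{ lookup('env', 'DB_PASSWORD') }}"
```

Because the lookups run on the controller during the build, the secrets exist only for the lifetime of that CI job's session.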
Instead of saving them to the repo, which is a bad, bad, bad idea, what we want to do instead is have Ansible write them for us. And there's a wonderful thing in Ansible called templates. It basically lets you replace placeholders in files with Ansible variables, and it's got a Twig-like syntax. So if you know just enough Twig to be dangerous, congratulations, you can write a template in Ansible. This is what it looks like in the playbook: you use the template module, you specify the source file to use as the template, and the destination file where the result, with the variables replaced, gets written. These can in fact be the same file. So inside of our settings.php, we now have this: we've taken all of the sensitive stuff out and replaced it with Ansible variables. We even have some sensible defaults for some things, so we don't need to worry if we forget to enter something. All right, that's great, but we still have a problem, don't we? Because each environment now has different templated credentials. But what happens if we need to store configuration overrides or Varnish configurations or things that differ between our stage and our live environment? How do we handle that situation? What we did is we extended the settings.local.php concept to environment-specific settings files. Each environment now has its own git-backed settings file, with the database login templated out. So nothing sensitive is stored in the repo, but we still have all those environment configurations in the repo where we can manage and track them. So how do we update our settings.php? We want to break it up into per-environment files, and we're just going to conditionally include the other settings file for each environment. You might have seen this line somewhere in your settings.php; well, it's really easy, we just extended it. We use some kind of condition.
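A minimal sketch of the template module in a playbook; the paths and variable names here are assumptions, not TEN7's actual layout.

```yaml
# A sketch of the template module; paths and variables are hypothetical.
# Inside settings.php.j2, placeholders like {{ drupal_db_pass }}
# are replaced with the Ansible variables at deploy time.
- name: Write settings.php with the real credentials filled in
  template:
    src: templates/settings.php.j2
    dest: "{{ drupal_root }}/sites/default/settings.php"
    mode: "0440"
```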
In TEN7's case, we're probably just going to use the directory path, matching against a name like stage or live. It could be an environment variable, it could be sourced from another file, it could be something else; whatever works for you is what you use, and then we just conditionally include whichever file that is. Now, we want to make sure those files have separate names. So when we update the template task in our Ansible playbook, we want to make sure we include the environment name in the filename, and because we want to be lazy and not copy and paste that twice, we're just going to use a variable for it. So we just have our stage and our live environments, it does everything for us, and it's great. But there's still a problem. You might be sitting here after a few months of doing all this stuff and going, okay, fine, but now all of our playbooks are kind of similar between sites. We've kind of standardized everything, and now I'm copying and pasting them all the time. Is there a way I can make them reusable and extendable so I don't have to copy and paste them? There's something in Ansible called a role. A role allows you to take a playbook, plus some default variables, and package them up so you can reuse them again and again and again. Now, I'm not going to get into how to make a role, because I've bored you to tears already. Instead, I'm going to point you to the section in the Ansible docs on how to create a role. But there's still another problem. As soon as you make roles, now you're just copying and pasting the roles back and forth, and nothing is getting better. You're still doing the same thing again. How do you fix this problem?
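The conditional include could look something like this sketch; deriving the environment name from the directory path is just one of the options mentioned, and the paths and file names here are hypothetical.

```php
<?php
// settings.php — conditionally include per-environment settings.
// Here the environment name comes from the site's directory path;
// it could just as easily come from an environment variable.
$site_path = __DIR__;
if (strpos($site_path, '/stage/') !== FALSE) {
  $environment = 'stage';
}
else {
  $environment = 'live';
}
// Each environment has its own git-backed settings file,
// e.g. settings.stage.php or settings.live.php.
$env_settings = __DIR__ . '/settings.' . $environment . '.php';
if (file_exists($env_settings)) {
  include $env_settings;
}
```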
Can't I just keep all of these roles in one place, so I don't have to copy and paste them back and forth, so I only have to update them in one place? Because, universe forbid, if there's a bug in my role, now I have to go to every single project repository and fix the same line, and that's an error-prone process because I'm doing it manually, and we don't want to do anything manually. We want to update it in one place and have CI do everything for us. What if we could get CI to grab the roles for us and treat part of our CI as an artifact? Insert Inception sound here. There's a thing called Ansible Galaxy. It's a distribution channel for Ansible roles, and you can find it at galaxy.ansible.com. How do you get some roles from Galaxy? You use the ansible-galaxy install command, and it takes one or more org.role names. Then you might think, wow, that's great, but now I'm going to have to write all of this stuff in a playbook. No, you don't. I actually did it for you already. You can find the roles we use at TEN7, for stage, remote shared hosting, and production deployment, online already on Ansible Galaxy, and if you don't like that and want to look at the source code yourself, boom, it's right there on GitHub. You can get it yourself. Go ahead, tell me how bad of a coder I am. Now, in order to install all of this stuff, you'd have to specify ansible-galaxy install with each org.role name, and you might have five or six of these. That becomes a lot of stuff to install with Ansible Galaxy, and you don't want all of that in your GitLab CI file; it makes it too long. You want to put it somewhere else. So we can use something called a requirements file. The requirements file is just a list of roles to install, and each can come from either Ansible Galaxy or, if you need to keep your deployment code proprietary and private, you can point it at a Git repository. This is what the requirements file looks like.
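A requirements file along those lines might look like this; the role names and repository URL are illustrative, not the talk's exact ones.

```yaml
# requirements.yml — roles to install before the build.
# From Ansible Galaxy, as org.role (names are hypothetical):
- src: ten7.drupal-deploy
- src: ten7.drupal-backup

# Or from a private Git repository, if your deployment code
# needs to stay proprietary (URL is hypothetical):
- src: https://git.example.com/ops/private-deploy-role.git
  scm: git
  name: private-deploy-role
```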
It's just going to point to our two roles on Ansible Galaxy, and now we're going to update our .gitlab-ci.yml to pull those roles before it does the build. So, "that's a really, really, really long line, Tess. What does it actually mean?" Well, let's take a look and break it down. We do an ansible-galaxy install. Because we're operating in CI, we want to make sure that on every build we tell Ansible Galaxy, yes, download the role again even if you already have a locally cached version; that's what --force is for. That way, if there's an update, we get the update. This does incur a few seconds of download time on each build, but it's worth it, because now updating is transparent: you just rerun the build. We also specify the requirements file with -r. And then, just because it does sometimes happen, and it's a very bad day when it does: if GitHub is down and your roles are hosted on GitHub, you probably want your CI to fall back to whatever locally cached version of the roles you have. So you use --ignore-errors. This way, even if the download fails, the command won't error out the build; GitLab just keeps going and uses the last version cached on your server. And now you have a happy ops person, because all you have to do is push to deploy and everything's wonderful. I want to take a moment to give some special thanks for this presentation: to the Drupal Association, who was kind enough to give me a grant to actually be here today; to TEN7 for also providing me support; and to my wonderful patrons on Patreon at patreon.com/socketwench. Also, make sure to come to the Friday sprints. We have a first-time workshop from 9 a.m. to 12 p.m. If you've never worked with Drupal before, never done PHP, don't worry, we will help you.
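Putting those pieces together in .gitlab-ci.yml might look like this sketch; the job name, playbook name, and branch are assumptions.

```yaml
# .gitlab-ci.yml — a sketch; job, playbook, and branch names are hypothetical.
deploy_live:
  stage: deploy
  script:
    # --force: always re-download the roles so updates are picked up.
    # -r: read the list of roles from requirements.yml.
    # --ignore-errors: if Galaxy/GitHub is unreachable, don't fail the
    # build; fall back to the locally cached copies of the roles.
    - ansible-galaxy install --force -r requirements.yml --ignore-errors
    - ansible-playbook -i inventory deploy.yml
  only:
    - live
```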
And if you don't want to work with PHP, there is tons of stuff you can do to help and contribute back to the Drupal community, and there will be mentors there all day to assist you. Make sure to give me some feedback on that link or on the DrupalCon Baltimore website, and you can find this presentation on github.io.