My name is Oliver Davies, and this session is on deploying PHP applications using Ansible, Ansible Vault and Ansistrano. We're going to cover a few different things today. First of all, my name is Oliver, as I said. I'm a developer, a software engineer at a company in the UK called Inviqa. This talk is mostly based on my personal experience working with these tools on personal projects and client projects. I do quite a lot in the Drupal space: I'm an Acquia-certified Drupal developer, I maintain various modules on Drupal.org, and I also maintain quite a few Ansible roles, some of which we'll touch on during this talk. Firstly, it's probably good to note when this approach may not be suitable for you. We at Inviqa, and many others, use platform-as-a-service hosting solutions such as the ones on the screen, like Platform.sh and Acquia, which have build tools and deploy steps built in (Platform.sh natively, and Acquia with Pipelines). A lot of what we're going to talk about, you can probably replicate using their native offerings. I tend to use, at least for personal projects, services like DigitalOcean, Linode, Vultr or Rackspace: services where they give you a server, and it's then up to you to configure the server and handle all the application deployment yourself. Or it may be the case, in a client situation, that they don't have the budget for a fully managed platform like Acquia or Platform.sh, or maybe they're using internal infrastructure, like their own server in a rack in the basement, which I've seen. So, we're going to be looking at the three parts of the title. First, a short Ansible crash course for people who may not have used it before. Then, how we can use Ansible Vault to keep secrets and credentials safe. And then, how we can use a tool called Ansistrano to do deployments, as well as how to do deployments with Ansible natively.
So firstly, what is Ansible? The definition from their website is that it's an open source tool that automates software provisioning, which I think is what it's mostly used for: people using it to set up their servers. It can also be used for deploying your applications onto those servers as well. It's a command line tool; it gives you three or four different commands you can run in your terminal. It's written in Python, but you don't need to know or write Python in order to be able to use it. It's mostly configured with YAML, which, as Drupal 8 developers (I assume most of us are using Drupal 8), or through other tools like Jekyll, most people are familiar with, along with Jinja2 templates, which are very similar to Twig. You run commands locally on your machine, or on a dedicated runner machine, and they execute against a remote server, which could be your DigitalOcean or Linode server, or a Vagrant box if you're doing local development. It's used to install software packages: if you're running a web server, you're going to install your Apache, your MySQL, your Redis, et cetera, using it. And as I said, you can also use it to run deployment steps, such as your git clone. One of the reasons why I like Ansible is its batteries-included approach. In comparison to other tools, including some that I've used, there's a composer module in Ansible that we can use to run Composer, so it comes very much with these things built in. That's just one example; the MySQL modules are others I use quite a lot for managing databases. All of these things are included as part of the core package. Some of the key concepts: hosts and inventories mean more or less the same thing; that's how you tell Ansible where your servers are. There are ad-hoc commands that we can run from the command line using the ansible command.
We can then write playbooks, which we'll look at some examples of, which are in YAML. Tasks are individual steps that we run, combined inside a playbook. And then roles are a collection of tasks that we can package up and reuse. So, reasons why I like Ansible: it's a familiar syntax. As I say, it uses YAML, which Drupal 8 uses a lot, and again, Jinja2 is very, very similar to Twig, so we're used to that kind of thing. It's easily readable, so any developer on a team could open up a playbook and figure out what it does, which you really can't say for certain other tools. There are no server dependencies that I'm aware of, except for Python, on the remote server. Compare that with tools such as Puppet, which rely on you installing software packages on the remote server first; that isn't the case in this instance. As long as you know the IP address of the server and you can connect to it over SSH, you can run Ansible commands against it, and therefore it's easy to add to an existing project. And as I said on the last slide, it comes with relevant modules for PHP development, such as composer and the MySQL modules, et cetera. So let's look at an example of a hosts file, an inventory. There are two ways to define these. The first is an INI syntax, where we group our servers, in this case web servers, under a name inside square brackets, and then we list our IP addresses. So this is a Vagrant box in a group that I'm calling webservers. Alternatively, you can use YAML for this; it does the same thing in a slightly different format. And these are really simple examples. You can do ranges, so you can cover IP addresses between 1 and 3, or 1 and 5, or if you're using host names with sequential numbers, you can wildcard them in there.
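To make the two formats concrete, here's a minimal sketch of the same inventory written both ways. The group name and IP address are illustrative, matching the Vagrant example from the slide:

```yaml
# hosts.ini -- the INI syntax: group name in square brackets,
# then the list of hosts underneath.
#
#   [webservers]
#   192.168.33.10

# hosts.yml -- the same inventory expressed in YAML.
all:
  children:
    webservers:
      hosts:
        192.168.33.10:
```

Either file can be passed to Ansible as the inventory; the group name (`webservers` here) is what you reference when running commands.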
There are also ways of building an inventory dynamically, from an external manifest, but I don't really have time to speak to that right now. Commands: this is the most basic command that we can run. We're going to use ansible, we're going to tell it which group of hosts to run against (we can just say all in this case; it could be webservers based on our previous slide), and then we use -m to tell it which Ansible module to run. ping is one that's just going to send a request to the server and look for a response back. This is what that looks like: you see our server group at the top, and this is successful. We do get some facts back; these are things that Ansible has figured out about, or retrieved from, our system. Nothing has changed, so Ansible has not made any updates to our system, but we sent a ping and we got a pong back; most of us are familiar with that concept. We now know that Ansible is able to connect to our server and at least make a connection. This is another example, using a slightly different command. In this case we're actually using the command module, which we can use to run arbitrary commands against a server. So again, ansible all, with -m to specify which module; the module is called command, which is maybe slightly confusing. And we can use -a to pass through arguments. In this case we're just going to run git pull, and we can say change into this directory before running that command, using the chdir argument. So: go to /app, then run git pull. A more realistic example is to use the actual git module that ships with Ansible. We can do that with -m git, so use the git module, and then pass through our values or arguments to the module. This is our repository, a setup that I was playing with for Ansible and Drupal, and there are these key-value pairs.
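As a sketch, the three ad-hoc invocations described above look like this on the command line. The repository URL is illustrative, not the one from the slide:

```shell
# Ping every host in the inventory; makes no changes, just
# confirms Ansible can connect and run a module remotely.
ansible all -m ping

# Run an arbitrary command: cd into /app, then git pull.
ansible all -m command -a "chdir=/app git pull"

# The more realistic version, using the git module instead,
# with its repo= and dest= key-value arguments.
ansible all -m git -a "repo=https://github.com/example/drupal-project.git dest=/app"
```

The git module version is preferable because it's idempotent: Ansible reports "changed" only when the checkout actually moved.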
So, repo= and dest=: these are arguments provided by the module, and you can look up in the documentation what they are. Some of them have defaults; some of them you have to specify values for. In this case, this is the repository URL, and this is the destination, i.e. the path we're going to clone that repository into on the server. Tasks and playbooks are a sort of YAML grouping of commands. We can specify again our hosts, we can specify some variables under a vars key (we're going to store our Git repository string as a variable in this case), and then in our tasks, we've moved that command into a task. We're still using the git module, we're still telling it which repository to use, which destination to go to, et cetera. We're specifying the master version in this case; that's the branch, tag or commit SHA to check out. And in this case, we say update: true. But you'll notice that the Git repo now uses the double curly brace syntax, because it's a variable; it gets substituted from our vars section above. There's a different command for running a playbook: ansible-playbook. We specify the path to the playbook (I put them in an ansible directory in the root of my project), and then I use -i to specify the inventory, the hosts file that we're going to use. In this case, it's just called hosts.yml. And then, I think lastly for this section, roles. Roles are collections of tasks and playbooks. Jeff Geerling, who I guess most people in the Drupal community are familiar with, writes a lot of Ansible roles, including the ones that I use to set up a basic LAMP stack. This is going to install Apache, Composer, MySQL, PHP and the PHP MySQL extension. The ordering does matter, as I've found before; I tend to default to ordering these alphabetically, but that doesn't work. But this is how you use these.
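Pulling those playbook pieces together, a minimal sketch might look like this (the repository URL and paths are illustrative):

```yaml
# ansible/playbook.yml -- the git task moved into a playbook,
# with the repository URL pulled out into a variable.
---
- hosts: webservers
  vars:
    git_repo: https://github.com/example/drupal-project.git
  tasks:
    - name: Clone the repository
      git:
        repo: "{{ git_repo }}"
        dest: /app
        version: master
        update: true
```

You would then run it with something like `ansible-playbook ansible/playbook.yml -i ansible/hosts.yml`.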
Five roles to make a fully functional LAMP server, with Apache, MySQL and Composer, on your server. The best way to do this is to have a requirements file. Very similar to your composer.json file, you have a requirements.yml file specifying which roles you're going to install. You can specify version numbers, and I recommend you do so. We then tell ansible-galaxy to pull these roles down and install them. And once you've got them down, we can just plug them into our playbook: we specify a roles key and then the list of roles. This is one of the places where the order matters. Again, we can see how we can use these to do some provisioning tasks, like creating a database, because Drupal needs a database. We can use the mysql_db module to create a database for us. We can give it a name, mydatabase, and the state is present, so we need the database to exist. Then we're going to create a user using mysql_user: the name is going to be drupal and the password is secret, and we can specify which databases this user should have privileges on. So now we've got that, we can use Ansible to do a basic deployment. In another playbook, this one called deploy.yml, we create our directory: we're going to create /app, and we can say it's going to be a directory rather than a file or a symlink. We upload our application: Ansible has a synchronize module, which is basically a wrapper around rsync, to upload the files from our local machine. And we can use the composer module to do a composer install inside that directory. So this is a really basic deployment script that we could use. There are some disadvantages, though. It's a single point of failure: if our composer install were to fail, then our site would be down, and we'd have no ability to roll back at that point.
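A sketch of those provisioning tasks and the basic deploy playbook described above. The database name, credentials and paths are illustrative, and as the next section explains, you wouldn't leave a real password in plain text like this:

```yaml
# provision.yml -- create the database and user that Drupal needs
---
- hosts: webservers
  tasks:
    - name: Create the database
      mysql_db:
        name: mydatabase
        state: present

    - name: Create the database user
      mysql_user:
        name: drupal
        password: secret   # plain text: this is the problem Vault solves
        priv: "mydatabase.*:ALL"
        state: present

# deploy.yml -- a very basic deployment
---
- hosts: webservers
  tasks:
    - name: Create the app directory
      file:
        path: /app
        state: directory

    - name: Upload the application (synchronize wraps rsync)
      synchronize:
        src: ./
        dest: /app

    - name: Install Composer dependencies
      composer:
        command: install
        working_dir: /app
```

Note that this deploys straight into the live directory, which is exactly the single-point-of-failure problem Ansistrano addresses later.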
We'd have to go and figure out what's going on, do another re-upload, et cetera. And our sensitive data is stored in plain text for everybody to see. Particularly if this is an open source project in a public repository, you don't want your MySQL passwords and database names and everything to be stored in plain text. This is where Ansible Vault comes in. It's bundled with Ansible out of the box. We can use ansible-vault create to create a vault file; I'm going to store it in my ansible directory and call it vault.yml. This is what it looks like: it opens in Vim or Sublime Text or whatever your default editor is, and you type in your values. I'm just moving those values into this vault file, so again, it's just key-value pairs written in YAML. But if you were to open that file in plain text, this is what you would see, which is no good to anybody. We could quite safely put this in a GitHub repository if we wanted to. What I then tend to do is have a sort of middle variables file. I tend to prefix all of my vault variables with vault, so you can see the vault_ something. That makes it quite clear to me that they're coming from the vault rather than from a separate variables file. But for the sake of keeping my playbooks clean, I don't necessarily need to see the word vault everywhere, so I tend to use the normal-looking variable in most places. The great thing then is that I can just substitute out my private, sensitive data with what's coming from the vault, in the same way I did for the Git repository URL earlier. And to edit the vault, it's basically the same command: you just run edit rather than create. It opens up the same window, you make your changes, save it, and it updates your vault for you. So how do we then access the vault? I should have mentioned the vault is password protected, so you enter a password to access it.
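A sketch of that pattern: secrets live in the encrypted vault file with a `vault_` prefix, and a normal variables file maps them to the names the playbooks actually use. The variable names here are illustrative:

```yaml
# ansible/vault.yml -- created with `ansible-vault create ansible/vault.yml`
# (shown decrypted; on disk it's the $ANSIBLE_VAULT ciphertext)
vault_mysql_user: drupal
vault_mysql_password: secret

# ansible/vars.yml -- plain variables that pull their values from the vault,
# so playbooks can reference mysql_password without the vault_ prefix.
mysql_user: "{{ vault_mysql_user }}"
mysql_password: "{{ vault_mysql_password }}"
```

Editing later is `ansible-vault edit ansible/vault.yml`, which decrypts, opens your editor, and re-encrypts on save.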
You store that password in LastPass or something, ideally. Ansible then needs to be told how to get into the vault. You can do this using the --ask-vault-pass option, so it will prompt you for the password on the command line. If you're doing it in a CI/CD pipeline, you can store the password in a file and specify the path to that file as an environment variable. So, you've seen a really basic deployment; how do you do better deployments? This is where Ansistrano comes in. Ansistrano is just another role, or technically two more roles. If the name seems familiar, it's because it's a port of a tool called Capistrano, from Ruby, I think, into Ansible, which is great. So there are some features. Multiple release directories: by default, each release goes in a separate directory. There's the option to have shared paths and files: for Drupal, we have a files directory, where user-uploaded files go, which needs to be shared, so we can do that. It's really flexible and customisable, which we'll look at in a minute. We can use multiple deployment strategies: we saw rsync in the previous example, and we can also pull from a Git repository or an SVN repository. And we can use multi-stage environments, so if you've got a production site and a staging site, it can cater for that use case as well. There is an option to prune old releases: you can say keep the last three releases, or five, or ten, depending on your project and situation, and that will stop the directory getting really, really crazy big and filling up all your disk space. And the second role: one role is to deploy, and one role is to roll back, so you have a separate playbook to roll back to the previous release if you discover a problem. So how do we do it? In our requirements file, we're going to require ansistrano.deploy and ansistrano.rollback.
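So the requirements file would look something like this. The version numbers are illustrative; pin whichever releases you've actually tested with:

```yaml
# ansible/requirements.yml -- roles to fetch from Ansible Galaxy
---
- src: ansistrano.deploy
  version: 2.7.0
- src: ansistrano.rollback
  version: 2.0.0
```

Installed with `ansible-galaxy install -r ansible/requirements.yml`, the same workflow as the LAMP-stack roles earlier.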
And in our playbook, all we need to do under roles is specify the roles that we want to use, the deploy role in this case. As I mentioned, it's customisable, and it provides a number of variables that we can use to configure it. I tend to have project-specific variables and release-specific variables, prefixed with their level or scope, and Ansistrano provides its own, prefixed ansistrano_. In this case, we're going to deploy via Git, so we're going to clone from a Git repository. We're going to deploy to our /var/www directory in this case, we're going to clone from the master branch, and again, there's our Git repository. We then run the same kind of command to deploy with Ansistrano. This is what it looks like on the server. If I'm inside my app directory, I can list the directories. We can see a releases directory, which is where all of our releases live; a shared directory, where our shared stuff lives; and a symlink called current. current is a pointer to the active release. You'll see it points into the releases directory and uses a timestamp value for each release, so they're all unique. Each release is in a separate directory, completely isolated from the others, and if one fails, it just doesn't update the symlink; it would sit there as a failed build. As I mentioned, you have the option to roll back. We can do this using the rollback role, which I store in a playbook called rollback.yml. All you need to do in this case is include the rollback role and, again, tell it where we deployed to, and it will know which release to roll back to if something fails. And again, you just run the same command; the only thing we change is the path to the playbook, rollback.yml rather than deploy.yml. So, a few more minutes left.
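Putting that together, a minimal deploy playbook and its matching rollback playbook might look like this. The paths and repository are illustrative; the `ansistrano_*` variable names are, as I recall, the ones the role documents:

```yaml
# ansible/deploy.yml
---
- hosts: webservers
  vars:
    ansistrano_deploy_via: git              # clone from a Git repository
    ansistrano_deploy_to: /var/www          # releases/, shared/ and current live here
    ansistrano_git_repo: https://github.com/example/drupal-project.git
    ansistrano_git_branch: master
    ansistrano_keep_releases: 3             # prune everything older than the last three
  roles:
    - ansistrano.deploy

# ansible/rollback.yml -- point current back at the previous release
---
- hosts: webservers
  vars:
    ansistrano_deploy_to: /var/www
  roles:
    - ansistrano.rollback
```

Both are run with the usual `ansible-playbook` command; only the playbook path differs.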
Customisation: there are a number of build hooks that you can hook into during an Ansistrano deploy, and there's always a before and an after. Shared is where your files directory, your log directories and your cache directories are linked. There are two symlink steps: one is when the shared symlinks are linked, and one is when the current release symlink is updated. And then there's a cleanup step, so if you want to remove your node_modules directory, or database exports, or testing databases, you could do that there. Apologies that this is quite small, but we can see, again, that we can tell it where our symlink tasks are. If we want to customise these, we add extra YAML files based on which step we want to hook into. I normally have a matching task file per step; these live in my ansible directory, and in those I make the appropriate changes. Ansistrano gives us some additional variables again, such as the current active release. We can't hard-code that, because we don't know it in advance, so we use ansistrano_release_path.stdout, and that gives us the active release path, which we can then use as part of our commands as well. This one is in the after-update-code hook: once Git has pulled down our new code, our new composer.json and lock file, we can then run Composer to install those dependencies, using that variable again. I also store a Drush path: Drush is a dependency of our project in this case, so I store a release-specific Drush path, because it changes with each release. In this case, we're going to run Drush to run our database updates after the shared symlinks have been updated. And once everything has happened, we've linked the final symlinks and have the current one.
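A sketch of how those hooks wire up: the deploy playbook points each hook at a task file, and each task file uses `ansistrano_release_path.stdout` to find the release being deployed. The file names and the Drush location are illustrative:

```yaml
# In the deploy playbook's vars: point hooks at our own task files.
ansistrano_after_update_code_tasks_file: "{{ playbook_dir }}/after-update-code.yml"
ansistrano_after_symlink_shared_tasks_file: "{{ playbook_dir }}/after-symlink-shared.yml"
ansistrano_after_symlink_tasks_file: "{{ playbook_dir }}/after-symlink.yml"

# after-update-code.yml -- new code is down, install its dependencies
- name: Install Composer dependencies
  composer:
    command: install
    working_dir: "{{ ansistrano_release_path.stdout }}"

# after-symlink-shared.yml -- settings file in place, run database updates
- name: Run database updates
  command: "{{ ansistrano_release_path.stdout }}/vendor/bin/drush updatedb -y"

# after-symlink.yml -- the new release is live, clear Drupal's caches
- name: Clear Drupal cache
  command: "{{ ansistrano_release_path.stdout }}/vendor/bin/drush cache-rebuild"
```

The ordering matters: database updates run before the release goes live, so a failure there never takes down the current site, while the cache clear runs after, against the new live release.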
We've released a new version, and after that's happened, we're going to clear our Drupal cache, so that our new version is nice and cache-free. And once we've done that, we've got our website that we can use. I have a quick demo, which we can maybe see running. Let me find my code. There we go. At this point there's no website, so we get a nice page not found error. So now we run the deploy script. It gives us output for every step, and it shows us what's changed, because changed steps are in yellow, typically, and it will say "changed" somewhere. This angle is weird for me seeing the screen, I apologise. You can see exactly which step it's running, and we can see the steps added by ansistrano.deploy on the left-hand side. We can also see certain steps being skipped: the rsync step is skipped, because we're not deploying via rsync. Right now, it's updating the code, deploying it to our server. It's now renaming settings.php; this is a custom step that I've added to copy the default one into the right place. It's going to install our Composer dependencies, then do its cleanup steps once the shared setup is finished, so everything's clean. Another step there fixes our file permissions. Then it installs Drupal. Once it's done that, it creates the symlink to our new release to make it active, and then it prunes the previous release directories, so we only keep a certain number. We see the summary at the end, and now, if we reload the page, we've got our website. Just to prove it's working, we can go ahead and log in and see the Drupal admin.
I'm probably standing a bit to the right because I can't see the screen, so I apologise. That video is actually up on my YouTube channel, so if you want to see it better, or recap, it's on YouTube afterwards. Do we have time for questions? One or two? Yes, okay. Question: are the demo files, the definitions of the build and release, all stored in the repo itself? Yes. The question, repeated for the microphone, was whether all the build scripts and playbooks are stored in the repository. Yes, usually I do that; I've got projects on GitHub that have this setup. The advantage is that any developer on a project can clone the repository and see what the deployment scripts are doing; it's not a separate, hidden repository somewhere else. And because of their readability, I think they also serve as documentation, which is a good thing. I don't need to worry about people seeing my database credentials, because they're stored in the vault, and they just see the encrypted output rather than the actual credentials. Question: say you're in control of building and releasing multiple projects; are there implementations where the definitions are stored in a separate repo that is dedicated to building and releasing the other projects? So the question is whether there are shared pieces I could pull into my project repository that combine some of these steps. There's probably a way to do it: you could split them out into a separate repository and then include them in a certain way. I've done that, I think, with YAML and Ansible before; that would be one way to do it. Usually I just keep them per project, because they're all slightly different anyway. At least, that's been my experience so far. One more? Okay.
Question: can you clarify the difference between the after-symlink-shared and after-symlink hooks? What happens if you run a task after symlink-shared rather than after symlink? It depends how you've got it set up. The shared symlinks are probably going to cover your files directory, any user-uploaded content, and possibly your settings.php file, because you want those to be consistent across each deploy. In many cases you could put a task in either symlink or symlink-shared, because once the symlink exists, you're fine. But if you're doing something like a cache clear, you probably want to do it after the main site is live. For the other direction, a good example might be running a migration on a pre-production environment: you want to run that through Drush after the settings file is in place, but before the deployment is made live, so that if it fails for some reason, or if you're running tests that need the settings file in place, the live site isn't affected. Follow-up: so if you do a config update, where do you put it? A config update, again, I'd probably put after symlink-shared, because then, if the config import were to fail, the site is not live at that point; the site is not live until it's got past the current symlink step. So if you run it there and it fails, it's not going to affect your live site yet. The Ansistrano documentation and examples on this are really good as well, so I'd definitely suggest taking a look at those. Let's see. We've got contribution sprints tomorrow, so please come to the sprints. Any feedback would be great, on the feedback site or to me on Twitter. Thank you very much.