So if you're still coming in and settling down, don't panic. I'm not starting the presentation yet; I just wanted to tell a little story to fill the time. I mentioned a minute ago that we've got a short URL up there pointing to the slides. If you look very, very closely, you'll notice that the domain is g1a.io. I decided a couple of days ago that I needed my own personal URL shortener. Because don't we all? Of course we do. And building a URL shortener is a very technical task. It's very difficult. It involves doing one lookup, and then you have to print two HTTP headers. Very difficult. So rather than doing all of that work, of course what I did was I stood up a Drupal 8 site and installed the Redirect module, and it handled all of that stuff for me. It was great. I literally did that. Sorry? You timed it? Yeah, I think you should time it and measure time to first byte. Because if you have a URL shortener, of course, you should host it on Pantheon, where it's cached with the Pantheon Global Edge and has a very, very short time to first byte all around the globe. That wasn't an advertisement, I'm just saying. I guess it's still sub-second. Now, is that all the way through to Google, or just the redirect? OK. If you're just coming in, I'll point out once again that there's a link to the slide deck at the lower left of the big screen. It'll be there for the first three slides; then you're on your own. Or you can memorize it. It's a2, which is short for A squared, otherwise known as automate your automation. I am doing only one talk this time, and I'm really excited about that, because at DrupalCon Vienna I was doing two talks. The original plan was that I was going to do one talk that was my talk, and there was going to be another talk, which was the Drush 9 talk with Moshe, where I was just going to stand there on the stage and smile while Moshe did his thing. And then suddenly we had this plan to do all of this stuff, and suddenly I had two primary presentations too. It was crazy. I couldn't be in two places at once. All right, everyone. It's about the top of the hour, so I'm going to get started. You have come to the right place if you're looking for Automate Your Automation: Practical DevOps. You can find the slides once again at https://g1a.io/a2, for all of you following along at home. I'm Greg Anderson. I'm an open source contribution engineer at Pantheon. I also dabble in the back end and do various things like that. I am sometimes known in the community for being a contributor to a number of open source projects such as Robo, Drush, and the various consolidation tools. A couple years ago, I was deep in the depths of automation, and I had this problem where sometimes I'd set something up and it would be broken, and I'd set the same thing up again. In a moment of frustration, I tweeted out that my least favorite thing was spending hours fixing OAuth credential token problems so that automated deploys can work. But of course, my favorite thing is automated deploys. So we have to work our way through it. Last year, I gave a presentation on creating a smooth development workflow for PHP, and I talked about some of the services that you can use to make this stuff go better. But it takes time to set it all up. And the observation I had is that all of these great automation tools have automation APIs. So why don't we automate the automation?
Before I get started on that, I just wanted to give an example of one of the problem sets of automation that you can encounter in the industry. This is a real-world situation at Pantheon. We are watching a number of upstreams that contain cool and useful software like Drupal 8 and WordPress. If a new tag appears, then we need to take that new code and get it into our repositories, which are slightly customized for our platform, and from there it can go out and be distributed to all of the sites on the platform. Of course, we do this with automation. When a tag shows up, we have an automation script, and the key part of the concept of this automation script is that when it does its work, it creates a pull request on the repository that's going to receive the new code. CircleCI does its job: it sees the pull request, runs its automation, and then, when things go right, a human can merge it into the branch and have the tag applied. In order to keep track of what's going on, the automation script will also occasionally post comments to the PR, giving instructions on how to use this thing before it goes live, a little update when it makes the test site, and things like that. So by watching the PR, you know what's going on with the automation. You don't have anything else to do to find out what's going on; it's all right there. Although you, personally, might not have to do this same exact task, you probably have other things that you'd like to automate, and this model of creating a pull request and posting comments onto it is a good one to follow for a number of things. So when you're starting off an automation, you go through a very simple process. You identify something to automate. You make it good, because you don't want to automate a bad process; you want to automate a good process. So make sure, before you start writing code, that you know what the good process is. Then you survey the available tools: what can we use to make this thing happen? And then you automate and repeat. Keep going; make things better and better as time goes along. Now, if you've seen any of my talks recently, you've probably seen this XKCD cartoon, because I lead with it just about all the time. It's a funny cartoon: it shows on one axis how much time you can shave off of a task, versus how often you have to do the task, and then you go through that matrix and the XKCD author shows you how much time budget you're allowed to spend on automating your scripts. And why is this a funny cartoon? Well, in the real world, you have people with a big job, like they have to cut down a tree, but their axe is dull, and they say, I can't sharpen my axe because I've got too many trees to cut down. And this is a funny joke because software engineers are kind of the opposite. We've got all these trees to chop down, and we're like, don't worry boss, I've got the sharpest axe on the block, and when I get to chopping those trees, they're all gonna come down really fast. But the point I want to make about this cartoon is that although it's funny because of that, it only grants you a one-to-one ratio of cost savings to benefits, and there are other benefits besides the simple time savings. For one thing, if you automate a task, you're not going to forget how to do it. And you're not going to make mistakes when you do it over and over again. And another really important aspect is that not all time is valued equally.
If you have an hour of dead time at the end of a sprint, the value of that time is much different than the hour immediately after a client has called you up and said, oh my God, we have a critical situation and we need to get another build turned around immediately. So if you've invested time in automating so that that emergency time goes smoothly, you've benefited more than the one-to-one time that went into it. And finally, the more you automate, the more it becomes possible to do. So this table of how long it takes you to do a task shifts as you put work into it. Your efficiency goes up and up, and you have to reevaluate what you're going to automate: where are your bottlenecks? So there are really a lot of benefits to continuously improving. What can be automated? Anything you do: development tasks, testing, deployment, maintenance tasks, updates, just anything that's dragging you down. We can get really excited and say, I'm going to automate everything, just go for it. But then when we look at all of the things we do manually, it can be really discouraging. Like, oh my God, are we gonna automate all of this? It can just take a long time to write these scripts and programs. But if you don't do it, you incur a cost. There's an opportunity cost to not automating. Your repetitive tasks are prone to errors and mistakes. If someone leaves the organization, knowledge about how to do a task might be lost. If the task is done manually, it might evolve over time, and the documentation or playbook might become out of date because the people who knew how to do it didn't take the time to update the documentation or the tools. And finally, when the situation gets worse and worse, where you don't have time to automate because you're spending so much time doing things manually, that can lead to discouragement. So that's what we're going to talk about today: how to tackle this problem with some of the tools that are available to us in the community. Because we're really at an amazing point in software development right now, where there are all of these services like GitHub that are just out there that we can utilize to make our lives more efficient. The first example I'm going to give is scripting with Composer. Composer has a number of facilities for hooking into the Composer processes and adding additional custom scripts. Composer create-project is something that many people in this room have probably done before. If you've ever used drupal-composer/drupal-project, the first step of starting off a Composer-based Drupal install is to make a clone of that project with create-project, and it gets pulled down. The project that you pull down is usually considered to be orphaned from the one you created it from. That project continues to evolve under its maintainer, but a lot of times people don't track it. The expectation is that the things that are going to change in that template project should be in one of the dependencies of the project itself, not in the project that you clone. So usually it's sufficient to just run composer update to stay up to date once you've created your own copy. It is possible to track the parent if you want; you just have to find the URL of the original repository and add another remote, and then you can pull from the upstream.
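Tracking the template that way might look something like this (a minimal sketch; the template URL and branch name here are assumptions, so check the repository you actually cloned from):

```bash
# Add the original template project as a second remote and pull from it.
git remote add upstream https://github.com/drupal-composer/drupal-project.git
git fetch upstream
git merge upstream/8.x   # branch name assumed; use whatever branch the template develops on
```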
Composer actually, if you say that you want to preserve the original VCS files, will leave a remote already added on your local system called composer, and you can do a git remote rename from composer to upstream, or whatever name you want, instead of adding a new one. After Composer completes an operation, it looks inside of your composer.json to see if you have any defined custom script actions to run, and create-project is no different. A post-create-project script is a good way to help customize a project after it's been cloned and orphaned. The drupal-composer/drupal-project template uses this facility to create some directories and set file permissions. The drupal-composer/drupal-scaffold project will also download additional files, although it actually does that as a post-update and post-install hook rather than as part of create-project. And we're going to do some exciting things with this hook later on in this presentation, near the end; just hold on for that. In addition to the hooks for scripts that run after a standard Composer operation, you can also define your own scripts inside of the composer.json. You just give it any arbitrary name that you want and define either other Composer script actions to run or shell commands to execute. In the projects that I make, I like to define a suite of standard actions, so that if someone just downloads one of my projects, they know that they can just run composer test and it's going to run the full suite of tests. And usually I define different commands for the different types of tests that exist in my project: often there's linting, style checking, unit tests, maybe some functional tests if it's a project that uses Behat. On the right of the slide, you can see some quick examples of how you just have a JSON file that defines the script name and then contains the shell code to run.
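The slide itself isn't captured in the transcript, but a scripts section along those lines might look like this (a minimal sketch; the script names and tool choices are assumptions):

```json
{
    "scripts": {
        "lint": "find src -name '*.php' -print0 | xargs -0 -n1 php -l",
        "cs": "phpcs --standard=PSR2 -n src",
        "unit": "phpunit --colors=always",
        "test": [
            "@lint",
            "@unit",
            "@cs"
        ]
    }
}
```

With that in place, `composer test` runs the whole suite, and `composer unit` or `composer cs` runs just one kind of test.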
As I mentioned at the beginning of this presentation, we've got a lot of services available in our ecosphere, and these services have APIs that we can use to customize the services themselves. I'm going to go over a couple of the things that are available: creating pull requests, posting comments, and setting up test credentials for CI services, things of that nature. These are some of the more useful things that you'll do often when you're automating your automation. Of course, there are other APIs where you can follow techniques similar to these and customize other things. So, creating a pull request is a two-step process. You do this manually all the time. The first step is you push a branch up to GitHub, and the second step is creating a PR, which you usually do by visiting the GitHub website. But there are other choices. There's a handy tool called hub that will let you create pull requests from the command line. GitHub has a REST API that you can call directly using Guzzle or any other HTTP client in PHP. And finally, occasionally you will find custom libraries specifically geared for one service; the KnpLabs GitHub API is an example of a very robust library that you can use to do anything that the GitHub API offers. So, pushing a branch to GitHub. When we do this manually, we usually just name the origin and name the branch. If you're not familiar with the -u flag, it tells git to remember the name of the remote that we're using, so subsequent calls to git push go to that same remote again and you don't have to name it. But if you're using a continuous integration service, it's generally less convenient to use SSH authentication. You can, of course, add SSH keys to your CircleCI or Travis CI server, but it's way more convenient to just throw an OAuth authorization token into an environment variable. So you can see here that we can replace the name of the remote with a full URL to the repository. You don't have to set the remote; you can just use it right on the command line. And in the case of GitHub, it's possible to embed an OAuth token right inside the URL. It's a little bit backwards: the token is the username, and the password is a constant string, x-oauth-basic. I don't know why they swapped that. But if you follow that pattern, your git push will go straight through using OAuth, and you don't have to worry about your CircleCI build freezing up because it's asking you whether you're allowed to connect to that IP address. You can fix that with an SSH config, but that's more setup to do. So I recommend using OAuth to push your branches. If you want to use the hub tool, which is useful both for manual tasks on the command line and inside of simple integrations, you can easily install it using brew or apt-get install. Once it's installed on your path, you can just run the pull-request command and give it a comment the same way you give a git commit a comment, and boom, you will have a new pull request. This command operates contextually: it looks at what branch you're on in the current git repository, based on the current working directory. Very convenient, very easy. But if you're doing CI operations, it's less convenient to install a tool. It's not so bad, but if you can avoid it, calling the API directly is another option. This little blob of code I'm not going to go over line by line; I just want to point out that we're just doing a normal Guzzle POST request here, and the important part for the GitHub API, and most of the service APIs, is that you want to use JSON to send and receive data. Most of them expect this; some accept other data types, but I just try to be consistent and use Content-Type: application/json, and then the Guzzle parameters have this extra json element where the actual POST data goes. You also have to figure out how to authenticate: some APIs want a token in the headers, others expect the token as a query parameter on the URL. I showed this pattern even though we don't necessarily need it for GitHub, because I'm going to show you a better way on the next slide, but all of the other services are very similar, and if you don't have a nice library like the KnpLabs GitHub API, then following this pattern is a good way to add additional services to your automation. Oh yes, my little hacky workaround. There's something of a Composer bug: when Composer runs the autoloader for a Composer script, it only installs the autoloader sections for the classes. It doesn't install the autoloader for the files, and the reason it skips these critical things that should be autoloaded is that the makers of Composer figured they couldn't count on whether those files could be loaded more than once, because sometimes PHP include files are not idempotent. So they just sort of threw up their hands and said, we're not fixing this. So in one of my sample programs, I work around this by requiring the files that are included in the autoloader by hand. This is kind of a fragile solution, because if Guzzle changes what it's autoloading, then you have to update your program, which isn't great.
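As a sketch, the hand-require workaround looks something like this (the file paths assume Guzzle 6.x and its companion packages; if Guzzle changes its autoloaded files, these paths break, which is exactly the fragility described above):

```php
<?php

// Manually require the files Composer would normally autoload, since the
// "files" autoloader section is skipped when running as a Composer script.
require_once __DIR__ . '/vendor/guzzlehttp/guzzle/src/functions_include.php';
require_once __DIR__ . '/vendor/guzzlehttp/psr7/src/functions_include.php';
require_once __DIR__ . '/vendor/guzzlehttp/promises/src/functions_include.php';
```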
I think the best workaround is to just avoid using Guzzle in Composer scripts; maybe make a standalone tool. You can exec the standalone tool, and then you don't have to worry about this nonsense. The short URL up at the top links to the issue where the Composer maintainers are discussing why this is the way it is. If you're interested, you can take a look at it. So if you want to create a pull request via the REST API, you just need to give it the proper URL, which in this case is /repos/{owner}/{repo}/pulls, with the name of the repository in the middle. Give it a title and something to put in the body. And if you're adding a pull request from a fork, you put the name of the fork repo, then a colon, followed by the branch name. If you're pushing right up to your main repository, you just leave that part off and name the branch, and off it goes. And base, of course, is the point that you forked from: very often master. If you're using the KnpLabs GitHub API, which I strongly recommend, it looks very similar. The parameters look very much like what goes into the HTTP POST request, but then you just call the GitHub client with the appropriate API, in this case pull request, and the verb we call is create. Give it the parameters, and boom, KnpLabs does all of the work to make sure that that pull request gets created for you. Posting comments, as I showed in the first example, is a very useful way to communicate with other people on your project. If you have a project where you are trying to create a flying machine, and you have a pull request that's trying to get ready for the big demo, your colleagues might have some questions, like: are you sure this thing's going to fly, and do we have anyone who's going to do the cliff jump demo? We'll return to this concept later, because the way you build confidence in these things is through testing and automation. Unfortunately, you cannot post comments with the hub tool; it's just missing that feature. There's a pull request there, so if you feel like hub really should be able to post comments from the command line, maybe you can help out and convince the hub maintainers to do some work there, maybe merge it. But you can add a comment from the REST API, and this looks very much like the pull request creation sample that I showed you a minute ago: it just does a POST, with a body that says what you want to say in the comment. There's also a KnpLabs equivalent to this, which again is just using the API's comments create call, and the parameters are once again analogous to the parameters you would send if you were posting with Guzzle.
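As a rough sketch of what that looks like in code (the org, repo, and branch names and the token variable here are hypothetical, and the authentication constant may differ between versions of knplabs/github-api, so check its documentation):

```php
<?php

require __DIR__ . '/vendor/autoload.php';

// Authenticate with an OAuth token taken from the environment.
$client = new \Github\Client();
$client->authenticate(getenv('GITHUB_TOKEN'), null, \Github\Client::AUTH_HTTP_TOKEN);

// Create the pull request; the parameters mirror the REST API payload.
$pr = $client->api('pull_request')->create('example-org', 'example-repo', [
    'title' => 'Update to the latest upstream tag',
    'head'  => 'example-fork:update-branch', // just 'update-branch' if not from a fork
    'base'  => 'master',
    'body'  => 'Automated update created by our release script.',
]);

// Post a status comment on the new PR (PR comments go through the issue API).
$client->api('issue')->comments()->create('example-org', 'example-repo', $pr['number'], [
    'body' => 'The test site is ready; see the instructions above.',
]);
```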
So the next thing I'm going to talk about is configuring test credentials. This is always really tedious when you're setting up automation yourself: if you have two or three different systems and one's going to talk to another, you have to go navigating through the appropriate part of the user interface to set an environment variable, or what have you. But we can actually do this directly from the command line, using APIs that are provided by CircleCI and Travis. CircleCI has a number of command line tools that you can install, but none of them have any facility for configuring environment variables. Fortunately, Circle provides a REST API, and just like the previous example I showed, there are a lot of different ways you can send these requests, with Guzzle or the built-in curl support that PHP provides, but this example right here just shows how to do it directly with the curl command on the command line. The POST parameters are name and value; in this case I want to make an environment variable called CRED, and I put the credential token into the value. This will set the credential in CircleCI, and any test that runs in Circle will then have that environment variable defined, so you can use it in your tests to authenticate with your service. And you could wrap this in a script, so you didn't have to flip through all of those UI pages every time you made a new project.
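The slide's curl command isn't captured in the transcript, but it would look roughly like this (a sketch against CircleCI's v1.1 API; the org, repo, and token variable are placeholders):

```bash
# Define the CRED environment variable for a project via the CircleCI REST API.
curl -X POST \
  --header "Content-Type: application/json" \
  --data '{"name": "CRED", "value": "my-secret-credential"}' \
  "https://circleci.com/api/v1.1/project/github/example-org/example-repo/envvar?circle-token=${CIRCLE_TOKEN}"
```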
Travis has a really excellent command line tool. You can install it with brew or apt-get, just like the hub command, and it has a very simple command, travis env set: you give it the name of the environment variable and the value of the token, and it sets it. Really easy to do. The downside, of course, is that you have to install one more tool. There is a REST API for Travis. I have not used it, because it doesn't support OAuth, and the authentication steps you have to go through are a little bit convoluted. Certainly there are advantages to doing this, but if you remember that matrix of how often you do a task versus how much time you'll save, I just have never gotten to the point of wanting to invest the time to go through it. So I really recommend that you just use the command line tool for now; but if you're in a situation where you have to install the command line tool a lot of times in a lot of different places, maybe it would be worthwhile to go through the multi-step authentication handshake that Travis makes you do if you want to use its REST API. In the next section, I want to talk a little bit about testing practices. I'm not going to be all meta about automating your automation; we're also going to talk directly about just automation and just testing, because, as I mentioned at the beginning, it's important that when we automate processes, we're automating good processes. One of the really important processes to automate is deployment. When you deploy something, you want to have very high confidence that the deploy is going to go correctly. And the way we become confident that the deploy is going to go correctly is that we have tests, and the tests show us that the code is working correctly. The best thing you can do to have good tests is to write good code. The way to write testable code is very simple; it's a two-step process: you pass a value to a function, and the function returns a value. If the function has side effects, then it gets a lot harder to test. So if you can keep writing your code like this, you'll find it's a lot easier to write your tests. Here's an example I wanted to show you in PHPUnit. PHPUnit has a feature where a test, here one called testExample, can have an annotation called @dataProvider. When there's a @dataProvider annotation, the annotation itself names another function. You can see up here we've defined that function, and the data provider function does nothing other than return an array of arrays, where each of the items represents one execution of your test. PHPUnit is going to call testExample over and over again, once for each of the array items returned by the data provider. The first element of the array lines up with the first parameter of your function, the second item of the array is the second parameter, and of course the third item in the array goes into the third parameter. So in this example, you can see that our test is instantiating our model, which is called Example; the model has a constructor that takes just one value, and then there's a simple method on this Example called multiply, which multiplies two numbers together. You can see that values go in, a value comes out, and we assert that the value we get is exactly what we intend the function to produce. If you can keep writing code like that, such that it's very, very easy for you to write tests with data providers, then you are winning at testing, because you know that your code is testable and the results are reliable; and if you discover new permutations or code paths through your code that need to be tested, it's easy to add more tests just by adding more parameters with the appropriate expected results. It gets a little bit more complicated if your code will sometimes throw an exception, but you can catch those with PHPUnit as well.
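The slide's code isn't in the transcript, but a data-driven test along the lines described would look something like this (a minimal sketch; the Example class is reconstructed from the description in the talk):

```php
<?php

use PHPUnit\Framework\TestCase;

// The model under test, as described: one constructor value, one multiply method.
class Example
{
    private $value;

    public function __construct($value)
    {
        $this->value = $value;
    }

    public function multiply($multiplier)
    {
        return $this->value * $multiplier;
    }
}

class ExampleTest extends TestCase
{
    /**
     * Each inner array is one run of testExample:
     * [constructor value, multiplier, expected result].
     */
    public function exampleTestValues()
    {
        return [
            [2, 3, 6],
            [4, 5, 20],
            [7, 0, 0],
        ];
    }

    /**
     * @dataProvider exampleTestValues
     */
    public function testExample($value, $multiplier, $expected)
    {
        $example = new Example($value);
        $this->assertEquals($expected, $example->multiply($multiplier));
    }
}
```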
Sometimes, however, you can't write code that's perfectly testable. There may be side effects, or there may be some large system that you can't easily bring up in your integration test environment. In those instances you have two choices: you can do functional testing, or you can mock up some tests. Functional testing uses an entire working system to do a test. So, for example, if you have a website, you will run the website under a web server, use curl or something equivalent to send it a web request, and check that the web request comes back correctly, given that the whole environment is set up exactly the way it should be in production. Behat is one tool that we often use to write these kinds of tests. It's usually very easy to write functional tests with a tool like Behat, but the problem with functional testing is that it's sometimes harder to maintain. It can be a little fragile: you change the way the system runs and your expectations might be slightly different, so false failures can creep in, and then you have to say, oh, there's a failure, it's not really a failure, we have to update the test fixtures so the false failure goes away. Functional testing also sometimes takes longer to run than other kinds of testing. Some people don't like functional testing as much, and so they will sometimes attempt to use mocks to fill the gaps and test code that is less than fully testable. A mock replaces an actual system with an object that just provides a value that you set up in the fixture. It's useful for providing values and collecting results, but like functional testing, mocks can be kind of hard to maintain, and they have the opposite problem: sometimes the actual system that you've mocked away changes, but your mock hasn't changed at all, so your tests are still passing even though the actual system doesn't work anymore. A false pass is really a much worse failure scenario than a false negative. So really, the point I'm making with mocks is that you should avoid using them except in cases where you're just using them to provide values or collect results, because any place where you've mocked away the code, you should assume that the portion you've mocked away is not tested. One of the examples I saw was someone who wanted to get their test coverage up, and they had a function that was really hard to test: it used four objects, and all of them were hard to test, so they just mocked all four objects, and the mocks just checked that the parameter values were what was in the code. Some people call this double accounting or something like that, but really all you're doing is testing that you implemented the code the way you implemented it, which is a tautology, and it doesn't really help you. So I'm kind of down on mocks. Use them where you have to, but beware when you're doing that cliff jump demo: if you're relying on the mocks to tell you whether your flying machine is gonna fly, that's not a lot of reliability. And we like that reliability, because we can automate deployment if we have a high degree of confidence. Next I wanted to talk about leveraging Docker. CircleCI 2.0 and Travis both allow you to easily use Docker containers, and if you haven't used Docker containers, it sometimes seems a little intimidating. It may seem easier to just use the traditional testing setup, where you don't have to build a container and you can just use one that the testing provider has already given to you, but I want to show you that with a little bit of automation, it's really, really simple to use a Docker container, every bit as simple as using the built-in mechanism provided by the services. With automation, we can automatically rebuild our Docker image whenever the source repository changes, and even better than that, we don't need to set this automation up; it's already set up for us if we use certain services. So this slide shows you how to make a custom Docker image. I'm just gonna show you this briefly, in case you've never done it before. A Docker image starts with a FROM line, which says: here is another Docker image that's going to be our base. So we find something like drupaldocker/php:7.1-cli, which has a lot of the prerequisites that you need for a Drupal site, and we say, OK, I would like to add some other things on top of this standard image, like Drush. All you need to do is add a sequence of RUN commands, and each RUN command is just a sequence of shell commands to install some things. There are a couple of declarations before that, where you define the name of your working directory and mount it in the container, but basically you can see that the steps for setting up a Docker image are very similar to the tedious, slightly slow installing you might do every time if you're using an old traditional Travis build or something of that nature.
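A Dockerfile of the sort described might look like this (a sketch; the base image tag, the assumption that the base ships Composer, and the Drush install commands are all guesses to adapt to your own stack):

```dockerfile
# Start from a base image that already has the PHP prerequisites for Drupal.
FROM drupaldocker/php:7.1-cli

# Declare and mount the working directory for the build.
WORKDIR /app
VOLUME /app

# Each RUN line layers more tooling on top of the base image.
RUN composer global require drush/drush \
    && ln -s /root/.composer/vendor/bin/drush /usr/local/bin/drush
```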
So I like to use a service called Quay, and this screenshot shows the process of defining a new Docker image in Quay. One of the options is to build the Docker image from GitHub, and the neat thing about it is that if you select the option that says link to a GitHub repository push, that means any time you push a new Dockerfile to your GitHub repository, Quay is gonna grab that repository, build the whole image for you, and publish it. So then, when it comes time for you to use the image, for example in CircleCI 2.0, you can see in the defaults section there's a docker element, and it identifies an image. If we were using Docker Hub for providing Docker containers, you would just give the last two items here, the organization and the name; but if you're using a different registry like Quay, it just turns into a three-part designator. Very, very easy, and when you do this, Circle will download the whole container, the last one that was built, and other than that it works just like the traditional method where you have to install everything every time, except it's much, much faster. It's also more stable, because if some of the dependencies that you're installing change, like if you install the latest version of something and a bug ships in one of the tools that you install, then all of your tests break immediately; whereas if you've built a Docker container where you've cached all of the tools you need, then your tests are only at risk of breaking due to tool changes when you rebuild the container, which you can do at times other than when you're making changes to your project. Finally, you'll notice at the end of this first line I have 1.x highlighted in green. A lot of people just say that they want to pull from master, so you always get the very, very latest version of your Docker image, but I recommend following semver semantics for your Docker images as well: make a 1.x branch and a 2.x branch. Then, if you ever change the way your Docker container initializes the system in a way that's not compatible with your existing scripts, you can bump the major version up, and if you have multiple projects using the same image, the ones you haven't touched in a while can keep using the old image, and the ones that you're updating to get the new functionality can change to pull in 2.x. That'll make your life a lot more sane when you're managing your tests.
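In config form, the image reference and tag pinning described here look roughly like this (a sketch; the Quay organization, image name, and job layout are placeholders):

```yaml
# .circleci/config.yml
version: 2
jobs:
  build:
    docker:
      # Three-part designator: registry / organization / image, pinned to the 1.x tag.
      - image: quay.io/example-org/php-test-image:1.x
    steps:
      - checkout
      - run: composer install && composer test
```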
Testing dependencies: it's really useful to be able to test all of your dependencies. We're getting into this era of Composer now where sometimes programs work with more than one version of some of their dependencies. Drupal, for example, used to ship with Symfony 2, and then it recently upgraded to Symfony 3, much to the dismay of many scripts and composer update processes, as Composer would get confused by all of the changes that happened all at once. But if you're writing some program that likes to work with multiple versions of Drupal, like Drush for example, then sometimes your composer.json might say: I work with Symfony 2 and Symfony 3. If you're in a situation like this, it's helpful to be able to do permutation testing on your dependencies, to prove that the things you claim to support really are supported. If it's not tested, how do you really know that it works? Another really interesting thing to know is that when you have a lot of project dependencies, where your constraints might accept a range of versions, sometimes if you run your composer update command with a different version of PHP, you might get different results. For example, if you run a composer update with PHP 7.1, some of the dependencies that you pull in might say that their minimum PHP version is 7.1, whereas if you run the same composer update on a different system using PHP 5.6, that version wouldn't be eligible, and you'd get a different set of dependencies. So it's a really good idea, inside of your composer.json, to define a config section. The config section has a lot of different purposes, but one of the things you can put in it is the platform section, and inside of platform there's a place to declare your PHP version. What we're saying here is: I want to pretend that I have PHP 5.6.33, no matter what version of PHP I'm actually running this under. And we'll see in a minute that on Travis we're gonna run our tests on a lot of different PHP versions, so we want to make sure that this composer file has dependencies that are gonna work on all of them.
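That declaration looks like this in composer.json (the exact patch version shown is just an example):

```json
{
    "config": {
        "platform": {
            "php": "5.6.33"
        }
    }
}
```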
So, to make it easier to test permutations of dependencies, I wrote a simple project called composer-test-scenarios. Those of you who are familiar with testing in Drupal core may be aware that recently Drupal core had to do magic at composer update time to dynamically upgrade PHPUnit to version 6, because PHPUnit 5 doesn't work with PHP 7.2 and they wanted to support PHP 7.2; the downside is that PHPUnit 6 doesn't support PHP version 5. So you can get into this bind where, if you want to support both PHP 5 and PHP 7.2, you can't use just one version of PHPUnit. Drupal's solution was to write a script that would dynamically alter the way the update was happening, which I thought was a little complicated. I take a different tack: what I do instead is commit multiple composer.lock files to my project, and each of these composer.lock files has a different set of dependencies. I put these composer.lock files into a directory called scenarios, and then I can install the set of dependencies that I want to test very easily, and I'll show you how this works. At composer update time, we define another one of these really handy-dandy post-update commands, and the post-update command runs a number of create-scenario commands. All create-scenario does is take a number of parameters: the first one is the name of your scenario (here I'm making a scenario called phpunit5), and then I list the dependencies I want in this composer.lock file that are different from my standard composer.json file. In this case I'm bumping us down to PHPUnit 5, and we can also modify the platform PHP, or other things like that. Take a look at the documentation for the project if you want to see how it runs. Using Composer scenarios in your tests is very easy. What you see on the slide right now is a Travis test matrix, and we're testing multiple times: on PHP versions 7.2, 7.1, 7.0, and 5.6. For PHP 7.1 and 7.0, we just run unmodified; this is going to use the composer.json and composer.lock at the root of the project. But for PHP 7.2, we're going to do highest/lowest testing. I'm setting an environment variable here called dependencies, and I'm setting it to highest. If you look down at the last line, where the install command is running the composer scenario script, the last parameter is dependencies. When the composer scenario script sees the highest parameter, it's going to do a composer update to bring all of your dependencies to the highest version. This is useful so that you can catch breakage when your dependencies come out with new versions that may not be as backward compatible as semver claims they should be. Down at the bottom, for the PHP 5.6 row of the matrix, I set the scenario to phpunit5 and dependencies to lowest, and this has two effects. One is that it's going to use the composer.lock file I defined previously, on the previous slide, named phpunit5; it's also going to run a composer update with prefer-lowest, so that we can test that, when our composer.json claims we work with version 1.0 of some package, it really does work with 1.0. If we're actually depending on a new feature or a bug fix in 1.0.1, the lowest test will fail, and then we can modify our composer.json so it accurately describes the minimum versions that we need. You can also use Composer scenarios in an ad hoc fashion: just run composer scenario on the command line, give it the name of the scenario that you want, and then run your tests, and away it goes. When you're done, you can just run composer install again, and that will bring you back to the lock file that's in the root. The interesting thing about this is that the way composer-test-scenarios works is very compatible with all of the assumptions that Composer already makes. So if you run a composer info command to see all of the dependencies and versions that are currently installed, that will accurately reflect the last scenario you selected on the command line. So take a look at that project; it's extremely useful if you get into the situation of wanting to test with both PHPUnit 5 and PHPUnit 6, or any of the other numerous ways that you might fall into this category. Deployment. I'm gonna show a little bit about how to do Travis release steps and self-updates. GitHub has this really neat feature where you can attach files to releases, and Travis has a really neat feature where you can add some statements to the end of your .travis.yml file to automatically use this API to attach build results from your test to the release. So we're gonna set this up so it only runs on tags, and we'll update the name of the file that we provide. We're using the Travis command line tool: we run travis setup releases, it prompts you for some credentials, asks you which file to upload, and then it modifies your .travis.yml file. You can see here that it's added a section called deploy, and it put in an api_key, which is an encrypted version of the credentials that you provided. So it now has the access rights to do the upload to GitHub. But there are two lines here in red that are not included by the tool, so when you're doing this manually, you have to add them. Setting skip_cleanup to true tells Travis not to delete your files before it uploads them, and adding on: tags: true means that we're going to do this deployment whenever we're pushing a tag. If we don't have anything in the on section except for the default repo, it never pushes, so we need at least one rule, and the most useful one is the rule tags: true.
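The resulting deploy section, with the two hand-added lines, looks roughly like this (a sketch; the file name is a placeholder, and the api_key is whatever `travis setup releases` encrypted for you):

```yaml
deploy:
  provider: releases
  api_key:
    secure: "<encrypted token generated by travis setup releases>"
  file: example-tool.phar
  # The two lines the tool does not add for you:
  skip_cleanup: true   # keep build artifacts so there is something to upload
  on:
    tags: true         # deploy only when a tag is pushed
```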
Also, it's very easy to write self-updating phars if you use Robo. If you take a look at the Robo framework at robo.li, it's a simple wrapper around the Symfony Console component, and the sample code here in the lower right shows that if you tell Robo the name of your GitHub repository and organization, it's going to add a self:update command, so that any time you've attached the phar to your GitHub releases, Robo is able to download it with the self:update command. So all of the users of your tool will have an easy command line self-update, and you don't have to write that code; it's all packaged for you. Maintenance tasks. The world changes. It's said that software rots. Sad but true; you would think that it never would. We're going to work on automating composer update with a perpetual motion machine. Nowadays I'm doing this with a really cool project called dependencies.io. Dependencies.io is an online service provider very similar to the other ones, like Travis and Circle, but rather than running tests, this service checks your composer.lock file, and if you have anything that's out of date, it's going to automatically add a pull request to your project. I can show you one here on the screen that's already been merged, which says: aha, in this particular project, I've noticed that one of the dependencies you're using updated from version 2.4.0 to 2.4.1. And then, when I saw that the tests were green, I just merged it, and now the dependencies in my composer.lock file are all up to date. So I've now automated this; I didn't have to do it myself. I just clicked through on dependencies.io, and boom. If you want to come back and take a look at this slide later: I'm not currently using this script, but I used it in the past, and what this little ball of bash does is, at the end of every test, it checks to see whether the only change made in all of the commits in that pull request was to the composer.lock file, and nothing but the composer.lock file. If so, it automatically merges the pull request, using an authentication token that I supplied to the project. So with dependencies.io and this little ball of bash, I wouldn't even have to go and look at GitHub and merge the pull requests; I would just know that my dependencies were up to date. This might be something good to do for a project that does not auto-deploy itself, like a library that other projects use, to avoid any possible shenanigans. Keeping license info updated: in my talk last year, I talked about a whole bunch of interesting services, one of which I really liked, called VersionEye. VersionEye would look at all of the licenses of all of your dependencies, and it would make this nice little table telling you what the licenses of all of your dependencies were, and then you could have a badge on your homepage, so the users of your library would be confident that not only did you choose the MIT license, but all of the dependencies you're using also had a compatible license. Unfortunately, VersionEye closed their doors; they couldn't make their business model work.
So now I have a new script called dependency-licenses, and all it does is use a built-in Composer command called composer licenses: the dependency-licenses command runs it and appends the output of that command onto the end of your LICENSE file, and while it's in there, it also updates your copyright year if it's out of date. Then I point the badge on my README at the LICENSE file, and anyone who comes in and sees my project can just click on that, scroll to the bottom, and they will see all of my dependency licenses. I've just saved them some time. It's nifty. Pulling it all together: we've got a whole bunch of tools in our tool basket now, so what can we do with them? Well, actually, we can do something really, really spiffy. First I wanna point out that I have a new GitHub org. There are only a few projects in it, but my personal GitHub org was getting too full of projects that were created by automation, or just scratch projects that no one would ever want to see, and it was hiding the interesting stuff. So inside the g1a org you'll see a project called starter, and starter is a neat little project that will help you start a new standalone PHP library. If you're writing a Drupal module and you wanna factor some of that code out into a standalone library, so you can use it in other places, it's a fair bit of work to move that code somewhere else and set up all of these services. So what the starter project does is this: if you go to the effort to define and export a GitHub OAuth token, an AppVeyor OAuth token, and a Scrutinizer OAuth token, these last two being optional (I'll describe them in a minute), and then you run composer create-project on my starter as usual, you give your project a new name, and composer create-project does its thing: it downloads the project, and at the end it runs a post-create-project script. That's where my starter project comes in. What does it do?
It creates a GitHub repository. It enables Travis testing and starts running your first test. It enables AppVeyor testing, which is a Windows unit testing service. It also enables static analysis on the Scrutinizer tool, and it sets up Coveralls, Packagist, and dependencies.io, although unfortunately those projects don't have an API to spin them up, so you actually have to go and click through to turn them on; but they're all set up, all you have to do is click, and you don't have to write your YAML configuration files. It also throws in a little command line tool with a phar builder, so if you're building a library and you want to exercise your APIs ad hoc on the command line, and not just test them with your PHPUnit tests, it's really easy for you to do that; I'll show you how that works in a minute. And it's hooked into those things I showed you before: it's using Robo, so it has a self:update command, should you decide you want to publish this command line tool, and it'll auto-deploy the phar every time you tag a release, assuming you follow the setup instructions. Speaking of which, it makes a README file for you, with instructions in your new project's README about the additional steps you should take to finish the setup. It has GitHub contributing and issue templates, and it has an optimized composer.json. It has a data-driven unit test example that looks like the previous slide, where I showed you how to do good unit tests with data providers, and it has a sample test matrix set up to test PHP versions 5.6 through 7.2. It uses the composer-test-scenarios project, so it runs PHPUnit 5 for the 5.6 tests and PHPUnit 6 for the 7.2 tests. It has built-in PSR-2 and linting functions. It lists the dependency license information, automatically updates your copyright year, and finally, it has a release script: all you have to do is change the version number in a file named VERSION and run composer release, and it'll do the tag, which ends up with your command line tool being pushed up to GitHub. This slide here shows you the beginning of the README that's created by this little project, and you can see there's a little table there. If you defined all of the OAuth tokens that were recommended, you'll see that your GitHub repository was configured, your Travis testing was configured, your AppVeyor testing was configured, and your static analysis was configured. The services below that, which have no APIs, have handy-dandy links: you just click on that table right in your README, it brings you right to the right page in the authentication flow for the service you want to configure, and you can just click to turn on your new project and get the automation going, just like that. If you did not provide an authentication token for AppVeyor or Scrutinizer, then the Windows testing and static analysis lines won't say done; they'll have a link that you can click on to go and just click, click, and start your static analysis and your Windows testing running. The simple command line tool uses Robo, and Robo has a very simple, expressive way of creating command lines. All you do is write simple PHP code: you have a normal function with a normal name; it takes regular PHP parameters, which are the command arguments; the command name is declared by an @command annotation in your PHP doc block comment; and the first few lines of your doc block comment are the help text. This particular example just creates a new Example model, runs the multiply function, and prints out the result. Easy as that.
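Reconstructing the slide's example from that description, a command like this might look roughly as follows (a sketch; the class and namespace names are assumptions, and the exact discovery mechanism depends on how the starter wires up its Robo runner):

```php
<?php

// A RoboFile-style command class; Robo exposes public methods as commands.
class RoboFile extends \Robo\Tasks
{
    /**
     * Multiply two numbers together and print the result.
     *
     * @command multiply
     */
    public function multiply($a, $b)
    {
        // Hypothetical model class, as described on the slide.
        $example = new \Example\Example($a);
        $this->say((string) $example->multiply($b));
    }
}
```

Running something like `example-tool multiply 2 3` would then print 6, with the doc block text appearing as the command's help.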
So you can see, if you're using the starter project to build a PHP library, it's easy to just generate the models that you need and, you know, run them from the command line. Finally, I also want to say that if you're not making a PHP library, but you want to make an entire Drupal site with Behat tests, there is something called the Terminus Build Tools plugin; this slide has a link to the documentation. This will make a Drupal site for you on Pantheon and set up Behat tests. With a free Pantheon for Agencies account, you can run these tests and you don't have to pay for Pantheon services, and the sites that are created with this automation tool can easily be exported to be hosted on any platform that you want to run them on. You just define your GitHub OAuth token and your Circle OAuth token, and you run one terminus build:project:create command. You'll end up with a simple README which has a link to your CircleCI tests. Circle will create a new multidev on Pantheon for every pull request and run the Behat tests, and when you merge the GitHub PR, those code changes end up in the dev environment on Pantheon. So this project was sort of the progenitor of all this automation, and everything that I've shown here today are the techniques that went into this tool; the starter sample is another example of how we can utilize these techniques in a similar way to do other sorts of projects. And speaking of which, I forgot to mention: if you don't like my starter because it doesn't do quite the right thing, just clone the repository on GitHub and change it however you want. The part that spins up the services is only like 500 lines of sample code; it's easy for you to customize it to make it do something else, so you can easily automate whatever sort of automation you need to write, not necessarily what I wrote. So with that, this session is finished, but we're not finished, because there are always more things to automate. I hope that by automating your automation, it will be easier for you to do exactly that. Thanks for coming today. I'd like to remind you that we have contribution sprints on Friday: there's a mentored core sprint, a first-time sprinters workshop, and a general sprint. Check out the Drupal sprint hashtag if you want to know more. And finally, please remember to visit the evaluation pages for all of the sessions that you attend and fill out some surveys, so we know what you thought. Thanks very much for coming to DrupalCon, and I hope you have a great time. What was that? What was your short URL again?
Oh, let's go to the beginning. It's g1a.io, yeah, A squared. There it is. And I tweeted that out too, so if you just look at my Twitter feed, you should see it right at the bottom. Appreciate it. You're welcome; I hope it was suitably named. Thank you. One question I have: github.poor, not github.com? Ha, ha, it's all right, so many domains. Curiously, I got the domain wrong somewhere. Yeah, I made some last-minute changes to squeeze some extra things in, so clearly not every URL was sanity checked, but I can change it after the talk for people who go back and see it. Do I use Docker locally? I use it a lot for Circle; I don't use it very much on my local machine. And I love it for Circle, because, like I said, I just set it up on Quay, and if I want to make a change, I just push it up to GitHub, and I run it on Circle, and boom. I love it, right? It's faster and more convenient. What about local development? What do you mean, what's the difference there? Well, yeah, you can try Lando. I'll just answer your direct question about what I do. Many years ago, not even that many, I used to do all of the VM stuff, and I didn't do Docker, because back then, if you wanted to do Docker, you would run Docker for Mac on top of the VM, and it would be slow on top of slow. That's no longer the case: it's just slow, instead of slow on top of slow. But a while ago I gave up all of my VMs, because if I'm not running lots of VMs, then my disk doesn't fill, and my laptop isn't burning my lap, and my RAM isn't full. Other than those limitations, in the past I've done workflows that use VMs, and I was really happy with them. I had a laptop with a terabyte of disk, and that was a lot of disk, and it was always full, and I was always managing which VM I was going to throw away to get rid of another 100 megabytes, so that I could fit another VM on my terabyte drive. So I've been really happy with just using the built-in tools here and testing stuff in the cloud as much as possible. But I know a lot of people like VMs, and that works too.