Well, good afternoon everyone. I hope you're having an excellent Wednesday at DrupalCon Vienna. We're going to talk a little bit about automating your automation today. I'm Greg Anderson; you may have seen my name in the issue queues of Drush, Robo, Consolidation, cgr, and other build-type tools. I'm a platform engineer at Pantheon and also an open source contribution engineer; Pantheon sponsors a lot of my time to do this work. So today we're going to talk about automation, automating your automation, and taking it to another level. I'll start with a question: what are you going to automate? What can you automate? To do a good automation, you want to identify something that you do repetitively. You want to start off by doing it well, so you don't take a bad practice and repeat it a lot of times; survey the available tools; automate it; and then start over, because you can always improve the process. We want to build all the things, because we don't want tedium in our lives. In this presentation we're going to mostly talk about development tasks, testing, deployment, and maintenance, by which I mean software updates. It's really exciting, because we have a lot of repetitive tasks in our lives and we set out to automate all the things. But it's always a good idea to consider the cost, and any time I talk about automation I like to show this xkcd cartoon. You don't even have to read the whole thing; he just breaks it down in a matrix: how many times do you do this task, and how long does it take
you to do the task? Then he gives you a time budget: if you do the task this often and it takes this long, you're not allowed to spend any more time than this on automating it. It's very practical and straightforward, but it's also really funny, because the typical parable is the opposite. There's a man in the forest with his axe, trying to chop down a tree, and it's really hard because his axe isn't sharp enough, so it's taking him a really long time. People ask, "Well, why don't you stop and sharpen your axe?" "I can't, because I have so many trees to chop down." This joke is the reverse: it's the engineer sitting there saying, "I can't start chopping down that tree yet, my axe isn't sharp enough," and he makes it sharper and sharper. If you spend all your time on dev tools, your real product doesn't get any attention, so you need a happy balance in there. But the happy balance isn't just the break-even point shown in that chart, because the time you put into automation is going to pay off more than one to one. Not all time is created equal. If you get into a crunch situation, where you have a bug in the field and you need to redeploy, and you, or someone else, have put a lot of time into your automation system, so you have confidence that you can test and roll out in an automated fashion, then you can respond to those emergencies a lot faster, and a faster response time is worth an investment. Furthermore, if you're not planning on working on your project all by yourself from now until the end of time, you may occasionally have new people join your project, and when there's automation in place, it's easier to bring people on. In fact, one of the key signs of a project that's reaching unmaintainability is that you can't bring on new people. That's when it's really time to refactor
the code, make it better, and work on some automation. Furthermore, as you make small improvements, your process will improve over time, because you can keep building: you automate one thing, you make it a little bit better, and next time you can improve on that. The strategy I like to take is, if I have some big task that I'm doing over and over again, I don't go to the xkcd chart and stop everything I'm doing for five days until I have good automation. Usually I do a little automation and use my new automation to finish the task, and the task comes out better. I do spend some time debugging my automation, and it takes me longer than if I'd just done it all manually, but the next time it's a little better and more reliable, and soon I reach the point where that task is fully automated, because I spend less and less time on the repetitive parts and get more and more value out of the repetition. The result, if you have a really good automation system, is that it becomes possible to do more. In that xkcd chart, the task that you used to do once a week, if it's automated, you might just do it every day, because it doesn't cost you anything anymore.
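The time-budget arithmetic behind that chart is just multiplication, and it's worth making concrete. This is a back-of-envelope sketch; the five-minute daily chore is a made-up example, not one from the talk:

```shell
# xkcd "Is It Worth the Time?" style budget: over a fixed horizon, you can
# justify spending up to (time per run) x (number of runs) on automation.
minutes_per_run=5      # hypothetical daily chore
runs_per_year=365
budget_minutes=$((minutes_per_run * runs_per_year))
echo "automation budget: ${budget_minutes} minutes (~$((budget_minutes / 60)) hours per year)"
```

And as the talk points out, this understates the payoff, since crunch-time minutes are worth more than ordinary ones, and an automated once-a-week task happily becomes a daily one.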
It can just happen in the background, and that'll give you a much more robust and reliable system. Sometimes, when we're faced with the long road of automation, it just becomes discouraging; like the "automate all the things" meme, we've got a lot of things to automate. But if you fall into discouragement and don't set out on the road of automation, then you're going to incur some costs that are not directly measurable. Your manual process is more prone to error and mistakes. There's a risk of loss of knowledge: you might forget, or people might leave the organization, and manual processes are very vulnerable to that. And just as increased automation gets better and better over time, the longer you put it off, the more your system, as it grows, is going to become more complex, more bespoke, and more prone to mistakes. So in this presentation I'm going to talk a little bit about the tools of the trade, things you can use to help get your automation in gear. We have service APIs available from a number of the web applications that we all use and love, like GitHub, CircleCI, and Travis CI. These have APIs that let you do things like creating pull requests, posting comments on a pull request, configuring your credentials so that your tests will start up, and various webhooks. Automating the creation of pull requests on GitHub can be a really valuable addition to your automation toolkit, because GitHub is already set up to be automated with a testing system like CircleCI. So if you have a process where you need to do something, a really good way to spin it off is to have a little script that does the automation and, as its end step, creates a pull request, because then your CI is going to spin up and test the output of the automation script you just wrote. There are a couple of easy ways to create a pull request on GitHub: there is the hub tool, and GitHub also offers a REST API, which of course is
what the hub tool uses under the hood. Before you actually create the pull request on GitHub, you have to get the code there, and from the command line this is something you do all the time: it's just `git push`, and if you add in `-u origin`, the next time you push, git will remember where that branch was going. We all do this all the time, but if you're scripting it, you have the added challenge of authentication. If you're an end user pushing to GitHub, usually you have a user account and you're authenticating with an SSH key, but GitHub has a really nice feature that allows you to push with an OAuth token, and I'll show you in just a second how to generate those. The push command doesn't have to push to an origin that you've already set: if you provide a full URL to a repository as the target of the push command, then your branch will be pushed up to that repository, and you don't have to waste time in the script creating a remote just to delete it or keep it around. That URL can also use the x-oauth-basic form, where you put your GitHub token right on the line that does the push (for example, `git push https://<token>:x-oauth-basic@github.com/<org>/<repo>.git <branch>`). This is great for scripts, because it keeps everything self-contained. I'm noticing some people in the audience taking notes, which reminds me that I forgot to tell you something really important: I was going to tweet out the URL to my slides so you could follow along. I promise that at the end of this presentation I will tweet out the URL to my slides, so you don't have to write down all of these nitty-gritty details about how to do a GitHub OAuth token push; I'm going to give you a bunch of these, and you don't want to have to memorize all of it. Installing and using the hub tool is really easy: if you go to the hub project on GitHub, it has a big README with really good instructions, and they support the various
installers for macOS, Windows, and Debian, so with just one line you can get it installed. To create a pull request, there's just one simple command, `hub pull-request`: you give it a message and, boom, you have a pull request. It gets the target of the pull request from the branch information of whatever repository is at the current working directory. It's really easy. If for some reason you're writing a tool and you don't want it to depend on an external tool like hub, you can also go to the REST API, and I've splatted a little bit of code here that shows it's just a very simple POST. If you use the Guzzle library, those are easy to send; you just need to send it to the right URL. The headers specify things like what data type you want your data back in (application/json is usually the most convenient for scripts), GitHub likes it if you identify yourself in the User-Agent, and your OAuth token goes in the Authorization header. From there you post it away, and before you know it you will have a pull request on your repository. Here are some additional details about the specific parameters you use when creating a pull request. Down in the head parameter, the third one, where it's highlighted, it says forked-repo with a colon. You can leave that part off, including the colon, if you just want to create a pull request off of a branch in the same repository, but if you forked the repository, then you use the colon form to specify where the pull request is coming from. Now, once you have your lovely pull request, you may have occasion to want to post comments to it. The hub tool unfortunately doesn't have a command that posts comments to a pull request, but you can always use the REST API, and if you're excited about the concept of a hub command that posts a comment, on the slides here
there's a URL of a pull request on the hub tool that has the beginnings of a post-comment command. When you're reviewing someone's pull request, you might have interesting comments about their engineering decisions, and if you're writing a tool that posts pull requests or posts comments to pull requests, you might want to report the status of the build, or link to assets that you create, things of that nature. It's very easy to do this using the same GitHub API method that I showed a couple of slides ago; there's just a different URI that adds a comment. You attach the comment to the SHA hash of the commit you want it on, you give it a body with the text of the comment you'd like, and it'll show right up on the page. Starting up tests is something we do a lot: every time you create a new project, you need to get your tests written, and once they're running locally you're going to want to make a circle.yml or a .travis.yml, and after that you have to go and click on the buttons and find your repository. But we can automate that, because CircleCI and Travis CI both have facilities for doing this. CircleCI doesn't have a tool to configure test credentials.
It does have a number of tools, but none of them have this particular command. Travis CI has a very, very complete tool with lots and lots of commands in it, including one to configure test credentials. The CircleCI REST API is very similar to the APIs we just saw for GitHub. Here I'm showing how to do it with the curl command-line tool: we use the `-X` flag to say that we want to POST, this particular endpoint wants a Content-Type header to tell it what kind of result you want sent back, and the key/value pair for the credential you're setting goes in the body. Now, in both CircleCI and Travis there's a page where you can set environment variables, and that's what these credentials are: environment variables that will show up in your tests. So if you have a test that needs to do something like post a comment to GitHub on success, then you're going to want to configure one of these environment variables, setting GITHUB_TOKEN to the token value, and then it'll be available in your tests. If you've never used this feature, it's also interesting to know that, for the purpose of security, these environment variables are only set for tests on pull requests originating from the main repository. If someone forks your repository and writes some code to dump the environment variables to the log, the environment variables won't be set, so your credentials aren't exposed. The side effect is that your tests might not run correctly on community pull requests if they absolutely require credentials. Another thing to note, if you're just getting started with this API: in the CircleCI URLs in their web user interface, they shorten github to gh everywhere, but in the REST API it's spelled out as github, so if you mix those up you'll get 404s, which is a little confusing. Travis has a CLI tool which, as I mentioned before, is very functional; if you use Travis at all, that's one of the ones
I really recommend for PHP libraries. You should install this tool and take a look at all of the other things it does besides what I'm showing you today. The command to set an environment variable is simply `travis env set`, so in one line you have the equivalent of that curl POST I showed you on the last slide. This will inject the GITHUB_TOKEN environment variable from our local machine into the GITHUB_TOKEN environment variable on Travis. There is also a web API for Travis, but it's really pretty complicated, because they never fully implemented OAuth, so there are a lot of steps you have to do to authenticate: you start with your GitHub credentials, and then you ask Travis for credentials. If you really feel the need to do that, the docs are there, and you can step through all of that work, but I've always just used the tool, because the web API felt like a little too much work for my use cases, looking at my xkcd graph of how much time I can spend on this stuff. Maybe they'll provide OAuth like the other guys some day. Spinning up new projects is another thing that sometimes happens a lot; it depends on what kind of work you're in. Sometimes you're working on just one project forever; other times you might be at an agency, spinning up projects all the time. Composer provides a command called create-project, and if you've done any work with Composer-managed Drupal sites before, you're probably familiar with it, because the canonical project, drupal-composer/drupal-project, recommends using create-project for this purpose. And once you create a new project, there are also post-create scripts that Composer will run for you. Composer create-project will copy a template project that you specify, and it'll rename the composer.json components for you, so that you have a brand-new project. As I mentioned, this is being used by a lot of people to great success in drupal-composer/drupal-project, but you can also leverage this
to your own benefit. If you have a certain set of things that you always do to a project, every time, you could consider writing some sort of generator script that always adds those things in, which is a little time-consuming. But the really easy thing you can do, if you've made your own template manually once, is to put that template on GitHub and register it with Packagist. From there it's very easy to use composer create-project, and everything from the old project will come into your new project. You can set up your tests and have some sample test suites, so everything's all scaffolded out for you; you don't have to spend a lot of time with generators. The general model with composer create-project is that the new project is orphaned from the template: if the template gets any changes, all of the downstreams usually continue using whatever features they got at the time of creation. But it is possible to pull in updates, and all you need to do is go back to the GitHub project page for your template project; the little button that says Clone has a URL in it. You copy that, and then you can say `git remote add` to add another remote (I named it upstream here) and paste in the URL you got from GitHub. Thereafter, if you say `git pull upstream master`, anything that happened in the parent will come down into your project, and you may or may not have conflicts, just as with any other git pull or merge. Generally, though, the more convenient way to do this, if there's something in the template project that you have a habit of changing a lot, is to move it into another project that you again register with Packagist and include in the template project as one of your requirements. Then you can just take your derived projects and run composer update on them, and all of the components that might have modifications will come from these factored-out pieces. When you
use this model of derivative template projects, it's helpful to make a number of small, but not too small, projects to hold these various things. Now, Composer is also really neat in that it's not only a mechanism for pulling down a lot of code that you can manually run later: Composer itself will also run code after you do installs, updates, and other commands. drupal-composer/drupal-project is one example of a template project that uses this technique; it uses a project called drupal-scaffold. With Drupal 8, if you download a tarball from drupal.org, you get all of Drupal in one big directory, but in the Composer-managed version, only the core directory appears as an actual project, and that project is called drupal/core. The other files, the ones that aren't inside the core directory, which is just a really minimal number of things like index.php and your example .htaccess, are not part of the Composer-managed project. So what drupal-scaffold does is go out, download those files individually from drupal.org, and drop them into your project root.
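Stepping back to the template model for a moment: the upstream-update workflow from a couple of slides ago can be sketched end to end with plain git. All the paths and names here are placeholders; in real use you'd run `composer create-project` against a template registered on Packagist, and the upstream URL would come from the GitHub Clone button.

```shell
set -e
demo=$(mktemp -d)

# Stand-in for the template project on GitHub.
git init -q "$demo/template"
git -C "$demo/template" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "template: initial scaffolding"

# Derive a project from it (composer create-project copies and renames;
# a bare clone is enough to show the remote mechanics).
git clone -q "$demo/template" "$demo/myproject"

# Wire the template back up as an extra remote called "upstream".
git -C "$demo/myproject" remote add upstream "$demo/template"

# The template later gains an improvement...
git -C "$demo/template" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "template: improve CI scripts"

# ...and the derived project pulls it in (in real life this merge can conflict).
branch=$(git -C "$demo/myproject" rev-parse --abbrev-ref HEAD)
git -C "$demo/myproject" pull -q upstream "$branch"
git -C "$demo/myproject" rev-list --count HEAD
```

The factored-out alternative the talk recommends, moving frequently-changed pieces into their own Composer package, avoids exactly the merge conflicts this `git pull` can produce. Back to drupal-scaffold and those files in the project root.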
That's because Composer doesn't like to put files into the project root; it's just a design decision for how Composer works. The way drupal-scaffold operates is to hook into one of these post-command hooks. So if you have any similar process, where after you've created a project with composer create-project you have additional customizations you'd like to make, variables you'd like renamed that composer create-project itself doesn't touch, you can easily hook in with one of these scripts to increase your automation. Now I want to take a step back from automating your automation and talk just a little bit about automation and basic test practices. As I mentioned at the beginning, if you want your automation to be really strong, you want to start off with good practices, and the best thing you can do to make your project testable is, clearly enough, to write testable code. Sometimes people are confused about how to write testable code, because of all these servers and untestable things and side effects, but it all comes down to a really simple paradigm: if you want your code to be tested, you should pass values into functions, and your functions should return values. If you do that, then you can very easily write a unit test that checks them; it's the best way to test. So when you're designing your code, you should make as much of it as possible follow this model. Now, suppose you want to test something that's not testable; a canonical example might be the PHP exit function. Once you call exit, it exits everything, including PHP, so you can't verify whether exit was called correctly. The right way to test that is to factor the code into two functions: one of them does all of the logic it needs to do and just returns the exit code as a value, and the other function
calls your testable function and then calls exit with the resulting status code. By splitting your non-testable functions apart from your testable functions, you get as much of your code as possible into this clean and pristine state. While you're at it, I also recommend reducing the complexity of each function. If you have too many levels of nesting, split that function up, and keep splitting until you only have a cyclomatic complexity of one or two, which means really reducing the number of ifs you have. If you want to become really optimized about this refactoring, I'd recommend you Google an article called "else is evil." I don't literally believe that else is evil, but the author of that blog made the case and argued that you shouldn't use else, and I'd recommend you take a little read of it, because it's eye-opening. I've actually found value in following his advice, and I'd rephrase it to say: don't use else any place you can avoid it. That keeps your code nice and clean. But not 100% of your code is necessarily going to be testable, so what do you do with the parts that aren't? The best thing to do for non-unit-testable code is to write functional tests. A functional test tests the entire environment. If we're building Drupal sites, this is actually easy to do, because there's a tool called Behat: you just need to spin up a website, and then Behat will let you pass requests to the web server, and it's written in a form
that's user-readable, or allegedly user-friendly. The advantage of functional tests is that they're usually really easy to write; the disadvantage is that they're sometimes hard to maintain, because you might write a test that characterizes the behavior of your website, and depending on how fragile that description is, normal design changes will cause those tests to break, and then you'll have to update them. So you want to be careful, when writing your tests, to really think about what the desired behavior is: don't try to describe visual aspects of the screen very much, but actually test for operations. The other problem with functional testing is that it takes longer to run. An alternative to functional testing is mocks. A mock is a technique for replacing an actual system with a piece of code that behaves like that system: you remove the system you're having trouble testing and replace it with something that says, if I get this parameter in, then I would expect the real system to pass this value out. This can be really useful. I recommend using mocks if you want to insert values into a function that you're testing, or you can also use mocks to pull values out of functions. But the problem with mocks is that they, too, can be hard to maintain, because a false pass can creep in: if you've completely abstracted away some subsystem, and that subsystem changes, and you pull in that change with an update, your mocks are going to keep passing, and that's bad, because when we're automating updates we want to be confident in our tests. So I really find that mocks are a danger to reliability; sometimes you have to use them. It's also best to avoid testing the implementation. I sometimes see code
I sometimes see code That goes in and says I'm going to call this routine and then I assert that somewhere deep in this function And some other function is going to be called And the problem with that if you're testing the implementation is that someone might rewrite that function in a way that's still compatible But the test will fail Because something in consequential changed In the worst case I've seen some people writing tests where they have a function and it just calls a bc D e and then the unit test over there it mocks a and it mocks b and it mocks c and it mocks d and in the end It hasn't tested anything. It's just declared that the function was implemented in the way that the function was implemented and That's that's not helpful So, you know avoid looking at your test coverage code score as a badge of honor because sometimes if you try to push that too High you'll end up cheating and actually hurt yourself so docker is a really cool tool a lot of people are using it and I wanted to Throw in some words about docker because I've recently started using the new circle to oh which leverages docker quite extensively and one thing I've discovered with Docker and try and circle to is that it's really easy to make your own Docker image and it's a good way to manage your test scripts Especially if you have this system I was just describing if you remember in the composer create project example if you have a Example template project and it includes tests some of the tests that it might include might Include a circle yaml that does a certain number of steps And I actually have a number of projects that are like this and the tests may evolve over time So you have this problem of you know, how do you? 
distribute changes to the scripts downstream? The technique I was showing before, adding another remote and just doing a git pull, will have a tendency to create a lot of conflicts and a lot of confusion, because usually when you clone the project, one thing you customize very heavily is your unit tests and your unit test scripts. You're probably familiar with this technique: in your circle.yml you can have multi-line bash commands, and they can get as long as you want, but if one gets longer than about a line, it's a good idea to put that script somewhere else, in a bash script that contains the aggregate of the functions you want to run, and then call that bash script instead. So what I've started doing is taking these little scripts and storing them inside of the Docker container. Now, when I pull down my testing Docker container, it has the scripts I need to set up my environment for that container, and if I change something about the Docker container, then I can change the scripts so that they still provide the same environment these projects are expecting. That becomes a form of a contract: just like the parameters you pass into a function and the values that are returned from it are a contract you can describe with semver, these scripts that set up environment variables, and contain a certain collection of credentials and things you need for your tests, can also form a contract. And so we can semver-version our Docker containers.
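Concretely, the tagging scheme might look like this. The image name is hypothetical, and the actual `docker build`/`push` calls are shown as comments, since they need a Docker daemon and registry access:

```shell
# Derive the tags for a semver-versioned test image.
version=1.4.2
major=${version%%.*}                       # strips everything after the first dot -> "1"
image=quay.io/example-org/build-tools-ci   # hypothetical image coordinates
echo "exact tag:  $image:$version"
echo "series tag: $image:$major.x"

# In CI you would then run:
#   docker build -t "$image:$version" .
#   docker tag  "$image:$version" "$image:$major.x"
#   docker push "$image:$version" && docker push "$image:$major.x"
# Downstream projects pull "$image:1.x" and never see a breaking 2.x
# until they opt in.
```

The series tag is the contract: patch and minor updates move the 1.x tag forward, breaking changes start a new 2.x tag.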
So when I pull in a Docker container, I don't pull in latest; I pull in the 1.x tag of that container, and if I ever need to make a breaking change in the way my test scripts work, I start maintaining a 2.x of that Docker image, and all of my old projects can continue to pull the old scripts until they're updated. How do you put this all together? At the beginning of your CircleCI configuration file there's a configuration parameter called docker, and it takes a parameter called image. In the example here I have an actual image that I use in some of my projects for testing, and the first component, the one I've highlighted, is quay.io. Quay.io is a free service where you can store Docker images. You're also able to use Docker Hub, which is what most people use, and if you're using Docker Hub, that first component just disappears and you have only one slash instead of two. There's not a lot of difference between Quay and Docker Hub; mostly I'm using Quay because my company uses it on the commercial side, so I use it on the free side as well, just for consistency. But I will show you one thing that's a little bit neat (I'm going to do my slides out of order): when you're making a new Docker image on Quay and you register it with GitHub, there's a little screen that asks where you want this Docker image to come from, and one of the options is to link it to a GitHub repository push. If you select that option, then at the time you create your Docker image, Quay sets up the automation so that any time you make a change to the parent GitHub repository, Quay automatically rebuilds that image. Docker Hub does exactly the same thing, but you have to go and set it up; it's a couple more clicks. So there's nothing wrong with using Docker Hub; it's a standard.
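Put together, the top of a CircleCI 2.0 config along these lines would look roughly like this; the image name is a placeholder, not the actual one from the slide:

```yaml
version: 2
jobs:
  build:
    docker:
      # Registry host first, then namespace, then repository, then series tag.
      - image: quay.io/example-org/build-tools-ci:1.x
    steps:
      - checkout
```

With Docker Hub the registry host is dropped, leaving a single slash: `image: example-org/build-tools-ci:1.x`.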
Most of you will probably use it, but I just wanted to show you that little bit of streamlining. As for the Docker image itself, you can just give it a FROM line where you inherit some other Docker container that a really smart person, who spends a lot of time optimizing images, might be offering; I'm using the Drupal Docker image, which provides PHP 7.1. Then WORKDIR gives me a directory where all of my work assets are going to be placed, and the ADD instruction says that I'm going to put things from the local machine's current working directory into my WORKDIR. This is where I put all of the scripts that I'm going to be using in my subprojects, and then they can just know that at the very root of the file system is this folder, called build-tools-ci, with a bunch of scripts in it, and they just run them. The other huge advantage of using Docker for testing is that it increases the speed and the reliability of your tests. As you can see down at the bottom, there is a series of statements that are just like bash statements, except they start with the word RUN, which is what Docker uses to say "do this thing." In a typical CircleCI 1.0 configuration file, you'll install a bunch of things that you need, and it copies them down every single time; this takes time, and it takes bandwidth. But the other problem is that, occasionally, the tools that you use to do your tests might, hopefully never, but sometimes, accidentally have a bug introduced into them, and it's a real bother if you accidentally introduce a bug into one of your own test tools and break all of the tests for your downstreams. To help protect the stability of your tests: if you bake in the version of the tool that was most recent at the time you made the Docker image, and you tested that Docker image, you know that you won't be taking as many updates to that tool as you would if you were just asking Composer for ^1, the latest stable tag. Then you have to
make the conscious effort to update your Docker container and test it, and then you know that that version of the tool is good and you won't be introducing any problems downstream in your tests. That's why I'm liking this new-found practice of using Docker in tests: it makes them lots more stable. Now, a brief aside about something I thought I might spend some time on in this presentation, but there's so much to automate that I had to prune things down, so I'm just mentioning that this is a thing. If you have an IDE like PhpStorm, it has code generators, and if you've made a class, you can tell PhpStorm to generate a PHPUnit test, and then you've got your scaffolding all set up; you just add in your asserts, and it makes things go a lot better. So I recommend surveying the tools and giving them a try. Maybe you'll find them helpful, maybe you won't; it depends on how fast you can type PHPUnit tests in your sleep. I tend not to automate them very much myself, because I've written a lot of them, but when you're just getting started, the tools are really helpful, and they help you get an idea of what to test when you line up your tests with your classes. Moving right along: deployment is another task we commonly like to automate. One example is releases; the Travis tool will help you automatically set up your GitHub releases, and I'll show you that in a second. And then, once your releases are released, the users of your tools need to get them, so we can provide self-updates. The Travis tool that I showed you earlier has many, many commands; one of them is called setup releases. On this slide, in the red there,
I show you passing in a token, which will authenticate you. If you leave that part out, then the yellow lines below will be displayed, where it asks for your credentials — so you can either provide an OAuth token if you have one around, or, if you don't, you can just type in your credentials. Then you give it the name of the tool you want to upload; the name of this tool should be some phar file that you've built in the test, before getting to the releases section. What happens here is that after your test builds the phar, the commands this tool inserts into your .travis.yml will upload that phar to GitHub. So you don't have to write any of that configuration yourself — you just run this tool, and then every time your release test passes, that is, when you tag a semver version of your tool, your phar will show up in the Releases section on GitHub for people to download, automatically.

Now I'd like to give a little shout-out to this PHP task runner called Robo.
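For reference, what `travis setup releases` ends up writing into your .travis.yml looks roughly like this — the phar name here is a placeholder, and the api_key value is the encrypted form of your token:

```yaml
# Sketch of a Travis "deploy to GitHub Releases" section;
# mytool.phar stands in for whatever phar your build produces.
deploy:
  provider: releases
  api_key:
    secure: "...encrypted OAuth token..."
  file: mytool.phar
  skip_cleanup: true   # keep the phar the build just produced
  on:
    tags: true         # only upload when the build is for a tag
```

The `on: tags: true` part is what ties the upload to your semver tags.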
It's pretty nifty. Some of you may have attended the Drush 9 session recently; Drush 9 has been rewritten to be based on top of the Robo PHP task runner, which also provides a framework for writing CLI commands. In addition to that, Pantheon uses Robo as the basis for its Terminus automation tools, grasmash from Acquia has rewritten the BLT tool to be based on top of Robo, and nerdstein has a build tool called BLD that has also recently been ported over to run on top of Robo. I'm not going to go into everything you can do with Robo, because it does a lot — but it makes it really easy to write tools that are very short. You just write a simple PHP function that does some operations, and Robo will pass in the arguments and options from the command line; it makes it very simple, and it's a real Symfony Console application.

The little rectangle on this slide shows how you set up a Robo application; the instructions for that are at robo.li/framework. The interesting thing is the highlighted line down at the bottom: if you pass in your GitHub organization and project name, then Robo will automatically add a self-update command to your application, and when you run self-update it will replace your Robo phar with your latest release — the one that Travis pushed up for you from the process you set up with the travis tool on the previous slide. So that's pretty nifty: a lot less work to do something that used to take a while to code yourself; the tools will now do it for you.

And finally, maintenance. Composer has sort of changed the way the PHP world does its business. In a way, updates are easier, because you just run `composer update` and everything happens. But on the flip side, suddenly this manual task of running `composer update`, and making sure the dependencies still do what they're supposed to, has become a point of tedium. As I mentioned earlier, it's really, really helpful to have made robust tests, so that after
the `composer update` runs, you can create a PR, and your automated tests should show that the dependencies are still working. But we're not done — it's repetitive just to run the automation. So let's try automating the automation, by using an automated composer update procedure.

We're going to start this off with Travis cron. It's a fairly new feature — a few months old, six or twelve or so, I lose track — but in the Travis settings you can now just go in and turn on cron, and it'll start running your tasks repetitively, like this perpetual-motion machine that Leonardo da Vinci designed. And what we're going to do in Travis cron is use a tool called composer-lock-update. Composer-lock-update is a nifty little tool written by Daniel Bachhuber, the author of WP-CLI, which is a Drush-like tool for WordPress. And if we want to be really, really wild, and we really trust our tests, and we know that we're only testing dependency changes, we can also automate the merge of this PR back into master — ideally not releasing straight to your users, but at least on your dev branch you can keep your dependencies up to date at whatever frequency you've set up in Travis cron.

The way this looks is that Travis cron execs the composer-lock-update tool; composer-lock-update uses the hub tool to create a GitHub pull request for you; the pull request uses GitHub's built-in features to send a REST request to Travis; Travis runs your normal automated tests; and when they pass, it — I labeled that arrow wrong, it's not REST there, it just execs code — merges the branch, and GitHub does the rest. The pull request is only created by composer-lock-update if there are actually updates available, and the merge only happens if the tests pass.

So, Travis cron: as I mentioned, in the settings page there's this new area that says cron jobs.
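Before the settings details, here is the whole loop sketched end to end, in pseudocode (names approximate):

```
every day / week / month (Travis cron fires):
    run composer-lock-update (clu)
    if `composer update` changed composer.lock:
        clu uses `hub` to push a branch and open a GitHub pull request
        GitHub notifies Travis, which runs the normal test suite
        if the tests pass (and only composer.lock changed):
            optionally merge the pull request back into master
```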
You just identify your branch — often master — and how often you want this to run: daily, weekly, or monthly. Then hit Add. The interesting thing is that as soon as you add it, it schedules the job to run in about a minute. So if you wanted to test a task that runs on a daily basis, you could try to ad-hoc test it on your local machine a bunch of times before setting it up on Travis, but likely, no matter how much you test it locally, the real system might not work exactly right the first time, and you might have to iterate several times. All you really need to do, if your job fails once it's in Travis, is delete it and schedule it again; it'll run again in a minute, and you can keep iterating until your automation is doing the right thing.

For this to work, you also need to add a GitHub token to your Travis settings. The way you do that is to go to github.com, Settings, Tokens, and click on "Generate new token." From there you get the new-personal-access-token page. You can write in a description that reminds you what you're using this token for, and you should give it the "repo" scopes. There are some other scopes you might also want — if you're planning on using your automation to delete repositories, there's a delete-repository scope — but the other scopes you probably don't ever need to give to automation, because they're things like adding users to groups. Once you create the GitHub token, it'll print a great big long number; you just copy that number into Travis through the web user interface, or you can run the travis CLI, set GITHUB_TOKEN, and paste in the value you grabbed from the GitHub settings page.

In my previous presentation, which I referenced in my session description, I talked about Travis and different techniques you can use, including highest/lowest testing. In highest/lowest testing, if you set an environment variable DEPENDENCIES=highest, then you ignore the composer.lock and bring in the latest
dependencies; and if the environment variable DEPENDENCIES=lowest is set, then you ignore the composer.lock and run `composer update` with `--prefer-lowest`. This lets you run tests that span the full range of dependency versions your composer.json claims to work with, and it's a good practice. If you're already doing that, you can just insert one more environment variable into one of your matrix entries and set it to 1 — because if you're running multiple parallel tests on Travis, you only want one of them to do your cron automation; it doesn't make sense to do it three times, for all of your different PHP versions and dependency levels.

Next: Travis is going to set an environment variable called TRAVIS_EVENT_TYPE. So in this little script, we first check whether we're in the right parallel test — the one that has our post-build-actions environment variable set — and then we check whether this is a cron job. If those things aren't both true, we exit. We also expect that we have a GitHub token defined, as I described before. Once we get past that point, we just run through and install some tools; the last step is down at the bottom.
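The gate at the top of that script boils down to a couple of comparisons. Here's a small sketch — the POST_BUILD_ACTIONS variable name is my own convention, while TRAVIS_EVENT_TYPE is the variable Travis sets:

```shell
# Decide whether this job should run the cron automation steps.
# $1: the POST_BUILD_ACTIONS matrix flag (only one parallel job sets it)
# $2: TRAVIS_EVENT_TYPE ("cron", "push", "pull_request", or "api")
# $3: the GitHub token, which must be non-empty to create a PR
should_run_automation() {
  [ "$1" = "1" ] && [ "$2" = "cron" ] && [ -n "$3" ]
}

# Only the designated job, on a cron build, with a token, qualifies.
should_run_automation 1 cron sometoken && echo "run clu" || echo "skip"
should_run_automation 1 push sometoken && echo "run clu" || echo "skip"
```

In the real script the same check just guards an early `exit 0`, so push and pull-request builds fall through to the normal tests untouched.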
We're going to run `clu`, which is composer-lock-update, and it runs through all the steps it needs to create a pull request for you.

Now, there's an interesting thing about this. You may recall from a couple of slides ago that I decided to do my post-build actions on PHP 7.1. And there's an interesting thing happening out in the Composer world: people are making a certain category of non-semver-compatible changes to their projects without increasing their major version. Specifically, when PHP 5.5 went end of life, a lot of projects decided to stop supporting it. So the dependency versions you get at different PHP versions can change around over time, because a version that used to support PHP 5.5 might at some point freeze, while you keep getting newer versions as you go up. Conversely, sometimes when a project wants to support PHP 7.1, it'll come out with a brand-new version that does, and that version no longer supports some of the older PHP versions. The problem this creates is that if you build your composer.lock file with PHP 7.1, sometimes some of the dependencies you pull in won't work with PHP 5.5 or even 5.6 — and if you purport to work with those versions, you actually won't, because your phar will contain libraries that are too modern.

If you don't need an end-of-life PHP and you're starting with 5.6, it's usually pretty safe to always build your composer.lock file with PHP 5.6; that will usually also resolve to components that support PHP 7 and 7.1, as long as those versions are still active and supported at the time you make the lock file. But it's not necessary to actually have PHP 5.6 on your system to do this: you can insert, in the `config` section of your composer.json, a `platform` section, and if that says platform PHP 5.6, then even if I'm using PHP 7.1 to build the lock file, the solver
will run as if I were running PHP 5.6. So I put this in the composer.json of the project that I'm running composer-lock-update on, and that way, when Travis builds my composer.lock file using PHP 7.1, it'll resolve the dependencies based on PHP 5.6.

The actual result is that when clu creates a pull request, it tells you what it did. You can see that the pull request is attributed to the user who provided the GitHub token — you may have made a special user for that, or it might just be your own user. But the commit itself isn't by the user that created the pull request: the composer-lock-update user is the actual author of that commit. So once that commit gets merged back to the master branch, when you list your log, you'll see that user in the log list.

There's also a comment in your pull request. The first part of the comment shows you which dependencies were updated in this pull request, and after that, clu runs another tool, from Symfony: the security checker. This tool checks your composer.lock file — the one from before it was updated — and if there are any security vulnerabilities in any of the versions in that composer.lock, they'll be printed out in the report here. So when you look at your pull requests, you'll see these comments advising you whether there are any security vulnerabilities in the version set being tested.
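In composer.json, the platform pin described above looks like this — the exact patch version here is just an example:

```json
{
  "config": {
    "platform": {
      "php": "5.6.0"
    }
  }
}
```

With this in place, `composer update` solves dependencies as if PHP 5.6.0 were installed, regardless of the PHP version that's actually running Composer.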
So if you're not automatically merging your dependency updates — or even if you are — you can put a higher priority on shipping versions when this tool reports that there's a security advisory. At the moment, Drupal module security updates are not part of this tool; it only handles Composer packages. But I've heard mumblings in the Drupal community that those are going to be integrated, so in time, if you're using Composer to manage a Drupal site, you might also see your Drupal security advisories in here, and that'd be pretty nifty.

And finally, if your tests pass, we can go ahead and check everything that's in the commit; if there's only a composer.lock in the PR, then we can say, okay, this is probably safe to merge, and just auto-merge the PR. It's sort of up to you whether you want to adopt this workflow or not, depending on what kind of dependencies you have and what kind of testing you have.

So that's neat — but that was a lot of slides, and that was a lot of work. If you're thinking to yourself, "hmm, how many hours is it going to take me to set up all of that stuff before I can start getting my payback?" — well, I've got news for you, because we are going to go to the next level and automate the automation of your automation. I have another tool that you can run that will automatically configure Travis to update the composer.lock file of your project with composer-lock-update. And it almost works — well, it works, but it's not fully automated. The tool is called ci. All you need to do is download the ci phar from the Consolidation ci project, change your terminal's working directory to your PHP project, make sure that you have an environment variable named GITHUB_TOKEN defined, and then run `ci travis clu`. This is going to make a bunch of changes to your project.
It's going to write things to your .travis.yml — and it'll create a .travis.yml for you if you don't have one yet. So you just inspect what it did, and if it looks rational, you push it up to GitHub. And then you suffer one brief moment of regret, because the travis tool does not yet have a way to automate turning on cron jobs, so you have to run on over and click one button. Like I said, Mr. Spacely made me push the button three times today. But after you do that, you can be amazed, because this tool is just going to crank along, and every time one of your dependencies releases a new version, you'll see a pull request show up — and if you've enabled the auto-merge, you'll see that pull request go away. Or, eventually — well, actually, right now this tool is not auto-closing the pull request; the pull request stays open. That's just because my git operations were not completely optimized in the auto-merge step. But watch this space for updates: we will get that pull request merged without having to use an API call to close it. If the very commit that's in the pull request ends up on the master branch, GitHub will close it automatically; I just have a bug where it's cherry-picking and creating a new hash, so it doesn't close.

So: finished, but not finished. We're finished with this presentation, but we're never finished automating — so keep iterating, and don't fall into despair; just do a little bit of automating all the time. Before I open the floor for questions, I'd like to remind everyone that we're having contribution sprints on Friday. The mentored core sprint is in Stolz 2, there are first-time-sprinter workshops in Lehar 1 and Lehar 2 — this room and that room — and the general sprint is in that great hallway they call the mall, with tables on either side. If you liked the session, or if you didn't like this session, please take the survey and let me know. With that, I'm going to do a brief time check, and — ha ha — you have two minutes to ask questions. No —
actually, you don't: it's the last session of the day, so you can keep me here all night asking questions. If anyone has any questions, come on up to the microphone; otherwise, enjoy your evening.

I'm sorry, I didn't hear you — could you come up? Share the presentation? That is an excellent thing. As soon as I'm done answering questions, I'm going to go to Twitter and tweet out the URL that I planned to tweet before the beginning of this presentation, so everyone can get the URLs and notes from these slides. Yes?

"If you run composer updates on your Drupal website, and your core also gets updated, you might find that the .htaccess files you've written — or other files, like .editorconfig, it doesn't matter — get overwritten. What do you think is the best approach? Because if you think about automation, if you deploy this to a production platform, for example, you could really run into serious issues. What is the best approach to handle these kinds of files?"

Yeah, conflicts are the bane of automation, because in the case of composer-lock-update, if it can't cleanly merge, then it's not going to make the pull request. This goes back to what I said at the beginning of the presentation: before you automate, you want to make sure that your process is clean. I'm going to answer this question just for the one example you gave, your .htaccess. If you just take your .htaccess and edit it willy-nilly, that's likely to produce conflicts. Sometimes you have to change what's already there, but hopefully, usually, you don't — Drupal put it there for a reason. A lot of times, what you're putting there
can be appended onto the end of the file, and in that case, what I would suggest you do in your automation is either back out your .htaccess changes at the beginning, or — if you're using git — do a pull that prefers theirs, so that the Drupal version will overwrite whatever is there, and then, as a post-git-pull step, just run another script that takes your customizations and appends them onto the end.

There's also a project called cweagans/composer-patches, where you can automatically apply patches to projects, so if you have customizations that might cause conflicts, you can carry them as patches. It's not completely foolproof, because patches can fail to apply, but it'll potentially be a little more stable if you always apply onto a base system. And for your customizations — just like with the .htaccess — try to put your custom code, well, not right at the bottom, because if your code is exactly at the bottom, then a new function at the bottom of the file is more likely to create a merge conflict; put it maybe somewhere near the top, in a stable location. Or write a sed script that injects your changes, so they're not even seen at the time of the merge. Smooth that out, and then your automation will get rolling. But this is kind of a leading-edge concern, because usually, if you're only partially automated, you're going to be faster if you just let the conflict happen and fix it. It's when you start getting down to those more fine-grained things that this is what I'd recommend, to get to that hundred percent.

Okay, all right. Well, if you're too shy to come up to the microphone, I'm still going to be here after the talk, and I'll be in the sprint room on Friday and Saturday — so I hope to see some of you there.