Okay. Cool. Hello. Good morning, everyone. Did lots of people end up making it to the keynote this morning? Yeah? Was anyone else as surprised as I was to see Drupal's governance and community model being used as a model for all sorts of other open source communities? I just could not believe that was actually happening, so it was pretty cool to watch it play out. For those of you who don't know me, my name is Andrew Berry, and I'm a senior architect at Lullabot. And with me on stage is... Hi, everybody. I'm Juampy. Juampynr is my Twitter username, up there on the slide. I just tweeted the slides, so if you want to open the links as we go through them, feel free to open them there. We work at Lullabot, which is a strategy, design, and development company. We were one of the first Drupal-focused agencies to be founded, years and years ago now, and both of us have been at Lullabot for quite some time. Through many of the projects we've worked on there, we've had different opportunities to bring continuous integration into our work. And across the different companies and teams we've worked with, we've come down to one set of principles: continuous integration is all about enabling your team to deliver more software, better software, and software that lives and is useful for a longer period of time. But every time you and your team bring continuous integration into a project, you really need to focus on managing the risk to your individual sprints, the risk to the whole product, and any financial risks to your team and your overall company budget.
When I looked up continuous integration on Wikipedia, I realized the definition is very wide, so I want to set some boundaries for this session. Every time I've implemented continuous integration in a team, it was to make the peer review process as easy as possible. Normally, when I work within a development team, everybody who wants to make changes to the code creates a pull request, and that pull request is reviewed and merged or discussed by others. There are a few things in this process, especially for the people doing the reviewing, that I believe can be automated, and I've seen other people joining this effort as well: things like running PHPUnit tests and verifying that they pass; running Behat tests and making sure that they pass; checking coding standards; generating a code coverage report. These are checks that, when I'm looking at a pull request, if I can get automatic feedback that they're passing, then this green merge button that I'm showing here — I want to make it as big as possible. I want to make it as easy and as fast as possible for somebody to focus on looking at the code, making sure it meets the ticket requirements, making sure all the checks are fine, and merging it. I started with Jenkins CI years ago. I remember giving a talk about how I was using Drupal with Jenkins for things like running cron or deploying code automatically to the development environment when it was merged into master. Then I started doing the same for other sorts of projects. It was Andrew who introduced me to CircleCI; the client we were working with at the time was already using it, and we decided to move a few things from the Jenkins CI implementation we had into CircleCI. I loved it, and we'll see in detail why. And then lately I wanted to compare CircleCI with Travis CI. I've used Travis CI in open source, but I'd never used it with a full Drupal site.
So I took a Drupal 8 project and added Travis CI exactly the same way I did with Circle. We'll see a comparison between them in a little bit. Yeah, and something we want to make sure is clear is that just because we chose these three tools for this topic doesn't mean they're necessarily the absolute best tools out there. When we're working with a new client and getting processes in place, we always like to say that the best continuous integration tool is the one you already have. Maybe your team is using GitLab internally on a hosted GitLab instance — look into its CI tools, because it has a great suite of them. If you're on Bitbucket, look into Bitbucket Pipelines. It's often a lot easier to bring CI into your organization if you're not having to set up new business relationships with a CI provider, or get provisioning for something you're hosting yourself. So here are some of the tools we have used to implement the jobs: Drush, usually for running database updates, installing Drupal, and importing configuration; Robo, a task runner in PHP — I must admit I love this one, because I'm terrible at writing shell scripts, and with Robo I was able to do the same things within a PHP class, and I know Drush uses it as well; Docker, for spinning up and tearing down environments; and then Behat for behavioral testing. So here's a summary of what we found out over the past few years. If a team asked me nowadays, "Hey, we're looking to implement continuous integration for a Drupal site project — what tool should we use, because there are many out there?" — from the ones I've tried, I would suggest the team go for either CircleCI or Travis CI. I've written a couple of articles on the Lullabot blog with just my personal experience about them; you can do the same. Travis CI may require a little bit more setup, but it also gives you more flexibility. So it depends on the team.
Also, if for some reason you cannot use Travis CI or CircleCI as a third party, they both offer behind-the-firewall services, so they can be installed inside your network. Have a look at that. So let's start with Jenkins. What we see here in the background is a typical Jenkins dashboard with jobs and their status. Let me move on. With Jenkins, everything goes through the UI — it's a web interface. This is a shell step where we were running, in this case, Behat tests, and we created a Docker image based on the official Drupal image. For this particular client, we were managing modules, not full Drupal sites. So we would run a Docker command that would build a container, pass the module into that container, and then run a script that was part of the repository. We did it that way so we could at least put the scripts under version control. For a newcomer, Jenkins is really useful, because you install plugins via the web interface, you create new jobs via the web interface, and you define steps and conditions there. If you have never done CI, it's really easy. It also has some cons that we'll see in a bit. So here is the script we were running. What we're doing here is starting Apache, then running a Robo task to install Drupal, and then — this was maybe a couple of years ago — using PhantomJS; we've since moved on to headless Chrome. And finally we were running the Behat tests, providing a configuration file. The setup for Jenkins is: you take a server, you install Jenkins on it, and then you go through the process of installing a few plugins so you can start creating your jobs. These two plugins are the ones we use the most to connect Jenkins with a repository and watch for repository changes. After that, it's a matter of going through the web interface to create the jobs so they match your continuous integration strategy.
Some of the pros: it's free and completely customizable. You just need somewhere to install it, and there you go. Also, because you're starting from scratch, you can be very specific with your implementation. For example, I started messing around with the commit status messages here. If the code coverage job passed, I would also print the coverage statistics. If one of the other jobs failed, I could say right there how many tests were failing, for example. What I wanted with this was to provide feedback inline, so it wasn't necessary to click on the details to view the full report — again, making that merge button as big as possible and making it as easy as possible to move on. Also, Jenkins has been around for ages. I read the other day that Jenkins is seven or eight years old, but it was spun out from Hudson, another project which is already 14 years old. There are lots of documentation, articles, and recipes. There's a high probability that for whatever you're trying to do with Jenkins, there is already either a plugin or an article somebody has written that explains how to do it. Also, because you're in full control of the infrastructure, not only can you spin up a Docker environment to run tests in, you can also persist that environment if you need to. I remember we worked with a QA team that wanted to mix and match the modules we maintained to set up a Drupal site, à la simplytest.me, and do some testing. What I did was take the Behat job we had and clone it. Instead of running the Behat test suite on top of the environment, I assigned a port to it and just made it available. Then there was another Jenkins job that would watch for merge or close events from GitHub and destroy the environment. Some of the cons: yes, it's free, but it takes a lot of time to set up, and it's not easy to take a certain setup and replicate it anywhere else, especially when you're moving on to different projects.
That takes a lot of time. I also had to monitor disk space and memory. Some jobs would generate logs that wouldn't be purged. I would find zombie Docker containers running there, and that was sometimes problematic. And because Jenkins is so oriented to the web interface — even though that's a good way to start — once your jobs get a little more complex, you run into the problem where you want to make changes to a job, and it's impossible to make them without breaking the job for a while. That would affect development. Something else with Jenkins: as the self-hosted option, you're responsible for maintaining it. After having used Jenkins on many, many different projects — I think the first time I used it was probably 2010 — we wanted to look back and get an idea of how much we were actually spending to maintain and use Jenkins, and how that compares to the other options out there. For these numbers, we were looking at a team of eight developers, and in this case it wasn't a single site; it was a bunch of different Drupal modules as individual repositories. When we look at those jobs from before, if you were to run them one after the other — not in parallel — they come to about 18 minutes of total CPU time. They're obviously not all the same length: unit tests run really fast, and Behat tests are probably the slowest to run. We're also assuming that each developer on average does about one pull request a day, but that they're testing more than one commit, because they might push up a commit, open a pull request, and then realize they broke a test, do some fixes, and push another commit to be tested.
We're also assuming the team runs some of their tests locally — meaning they're not just pushing up work-in-progress commits for the sake of testing; they do know, one way or another, how to run those tests locally, which reduces the load on your Jenkins server. Most of the time, when we have worked with clients to set up Jenkins, or when we've come into a client that already has Jenkins, they've basically got something like this: an EC2 instance used as a VPS, meaning they're not using Amazon's APIs to manage the system; they're just treating it like any other virtual server on the internet. The main reason we find for this, when we talk to our clients and to ourselves, is that it's the easiest to understand and set up. But the server is always there, being paid for — on nights, on weekends, on holidays. And it doesn't help when you have activity spikes, like at the end of a sprint. Maybe you had a long weekend, everyone was out Friday for a holiday, and then you come back on Monday at the end of a sprint, and you're like, oh, we've got to go, go, go, get all these pull requests merged and tested. If you've just got a single server, it's not scaling up and down to handle that. And when we talk to some teams, the answer is just, "Oh, well, just autoscale. Doesn't Amazon make that easy?" Not really. When we look at most of the enterprise Drupal 8 websites we're working with, unless they're using a completely managed service provider, they do not have autoscaling for their production Drupal websites. So if they don't have autoscaling at that level, what are the chances your Drupal-focused development team is going to set up a fully autoscaled Jenkins server for something which is not directly related to production?
So you might think autoscaling is your way out of it, but unless you're already doing that throughout your organization, it's probably not going to happen. In this case, this was an Amazon EC2 instance, and we were spending about $300 a month on it. In terms of sizing, composer update uses a ton of memory. It used to be way, way worse back in the day — I'd see it take four or six gigs of memory to run composer update. Since the Composer packages are now hosted on Drupal.org and the repositories are split between Drupal 7 and Drupal 8, it's significantly better, but even then we're still looking at around half a gig to a gig of memory. For a server like this, maybe you're able to get up to 48 concurrent jobs if you're using 24 gigs of RAM for Jenkins — but we always found that the CPU limits were hit first. Of course, you could try a different-sized instance and tune that, but that's more time you've got to spend tuning and rebuilding and seeing how it works. Maybe you get it tuned for less memory, and then all of a sudden something changes in some library you're using and you need a lot more memory per job — and you don't know that until your jobs are failing and someone spends half a day trying to figure out what's going on. So I actually went into our time tracking system and pulled out all of Juampy's hours for a project. He logged 116 hours on entries tagged with Jenkins, continuous integration, or just the word "continuous" — and this was a project with no migration, so there were no "continuous migration" entries being tagged. Take that at the rate for an employee you're paying, or the rate you're paying your contractors, and you're literally looking at hundreds to thousands of dollars of employee time a month on top of your hosting bills.
And in this case, this doesn't even include the client hours: the clients we were working with had dedicated DevOps and infrastructure engineers who were providing the underlying Amazon hosting and so on. So the realistic costs were a lot higher. When it comes down to it, for teams at a conference like this — your team probably knows Drupal and PHP and JavaScript, and maybe the people in this room know Jenkins — but what happens when you're working on critical features and you don't have time to maintain Jenkins during those moments? What happens when you move on to a different team or a different project? Who's going to maintain those instances? The knowledge pool for Jenkins is not very wide within our programming community. So it really comes down to this: the costs of hosting Jenkins are directly related to the expertise of your DevOps staff. You could pay someone to be a Jenkins expert and an Amazon expert, to handle autoscaling and really make sure everything is running smoothly, but you may actually end up spending even more money, because Jenkins experts and cloud experts are very expensive. You've really got to look at those numbers for your specific organization to see which way makes the most sense if you're going to host Jenkins internally. Andrew did the initial effort of taking one of the modules we were maintaining and porting some of the Jenkins jobs we had to CircleCI. I saw the set of changes, and he told me, this is great, you should try it. I took another module and followed his approach. And I must say I was very impressed. In minutes I was seeing the jobs running in the CircleCI dashboard and their status being reported back on the pull request. I'm talking minutes. For me, setting up and fully configuring a Jenkins system for a team would normally take from hours to days, depending on the team's infrastructure and the CI demands. So that already looked promising.
Here is an excerpt of the CircleCI YAML file. This file lives at the root of the repository, and it defines everything that CircleCI should do when you make changes to the repository. We define a workflow with four jobs: one for running Behat tests, one for PHPUnit tests, one for code coverage, and one for checking coding standards. The reason we split the code coverage report out of the PHPUnit tests job is that we benefited from CircleCI's workflows. There you can set dependencies, so a particular job only runs if a condition is met. Code coverage takes a lot of time and uses a lot of memory, but PHPUnit tests can run really fast, right? We wanted to give faster feedback to developers, so we made the code coverage job depend on the PHPUnit one: only if the PHPUnit tests pass do we generate a code coverage report. Let's go quickly through the syntax. We benefited here from YAML aliases — a way to reuse steps, or variables, if you want to call it that. We're checking out the code. We're defining the cache strategy here, which we used just to cache some of the Composer files so builds would run faster. And here is the meat of the job: at this point we had moved most of the implementation to Robo, so Robo defines a set of tasks, one per job. This was great, because when I took this to Travis I could reuse most of it. Then, finally, we're storing test results so CircleCI can present them in the web interface, and we're also storing some artifacts so that, for example, the code coverage report can be browsed. The setup: if you already have a CircleCI config YAML file, you add it to the repository. If not, you go straight to step two, which is authenticating with either GitHub or Bitbucket, and then you allow CircleCI to watch for repository changes.
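To make the structure being described concrete, here is a minimal sketch of what such a `.circleci/config.yml` might look like. This is not the project's actual file: the image name, Robo task names, and paths are illustrative assumptions; only the overall shape (YAML aliases, Composer caching, Robo tasks, stored artifacts, and a workflow where coverage requires PHPUnit) reflects what's on the slide.

```yaml
# Hypothetical .circleci/config.yml sketch (CircleCI 2.x syntax).
# Image, task names, and paths are illustrative, not the real project's.
version: 2

defaults: &defaults            # YAML alias: executor shared by every job
  docker:
    - image: juampynr/drupal8ci:latest   # assumed custom Drupal image
  working_directory: /var/www/html

jobs:
  phpunit:
    <<: *defaults
    steps:
      - checkout
      - restore_cache:
          keys:
            - composer-{{ checksum "composer.lock" }}   # cache Composer files
      - run: composer install --no-interaction
      - save_cache:
          key: composer-{{ checksum "composer.lock" }}
          paths:
            - vendor
      - run: vendor/bin/robo job:run-unit-tests   # real logic lives in Robo
      - store_test_results:
          path: build/tests
  behat:
    <<: *defaults
    steps:
      - checkout
      - run: vendor/bin/robo job:run-behat-tests
  code_standards:
    <<: *defaults
    steps:
      - checkout
      - run: vendor/bin/robo job:check-coding-standards
  code_coverage:
    <<: *defaults
    steps:
      - checkout
      - run: vendor/bin/robo job:generate-coverage-report
      - store_artifacts:
          path: build/coverage   # browsable HTML coverage report

workflows:
  version: 2
  test:
    jobs:
      - phpunit
      - behat
      - code_standards
      - code_coverage:
          requires:
            - phpunit   # coverage only runs once unit tests pass
```

Keeping the job bodies as one-line Robo calls is the design choice that later made the move to Travis CI cheap: the CI file only orchestrates, while the logic stays in version-controlled PHP.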
CircleCI may either analyze your repository and suggest a config YAML file for you to start with, or you can just click the technology you're using and use a template. Let's see some of the pros and cons of CircleCI. As I said, you can define workflows: a set of jobs that will run in parallel. Depending on the concurrency level of your plan, they may actually run in parallel, or they'll run one after another. And it uses — well, has anybody here ever used Docker Compose? Right, so I wonder if this looks more or less familiar to any of you. This is also part of the CircleCI config file. Here I'm defining the mix of images I want to use for my jobs. In this case I created a Drupal 8 project with the Composer drupal-project template, and this is an extension of the official Drupal image where I'm just removing Drupal core from it, because I'm going to use a full project instead. I'm also adding Xdebug and Composer to it. Then we're also using Chrome via Selenium, because we want to run Behat tests, and finally MariaDB as the database server. Yeah, one of the things I really love about Circle's setup is that if you've used Docker Compose before, you have to manually set up the links between containers, which is actually really painful when you're dealing with Behat tests, because you've basically got two co-dependent containers for networking. Circle basically makes it so every image in the job behaves as if it's on a single server: you're talking to MySQL over localhost, you're talking to Selenium over localhost. You don't have to think about it, which is a pretty neat feature. You also get SSH access to jobs. This is great for post-mortem debugging. If you've ever done CI, it's frustrating when a job fails and you think, oh, it must be because of this — it's a small change, push, wait, and it fails again.
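The image mix being described might look like this in the config. The image names and versions here are assumptions for illustration; the point is that CircleCI puts all of a job's containers in one network namespace, so the Drupal container reaches MariaDB and Selenium over localhost with no explicit links.

```yaml
# Hypothetical executor section of a CircleCI job; image names and tags
# are illustrative. All containers share localhost within the job.
jobs:
  behat:
    docker:
      - image: juampynr/drupal8ci:latest       # assumed: official Drupal image minus core, plus Xdebug and Composer
      - image: selenium/standalone-chrome:3.11 # Chrome for Behat via Selenium
      - image: mariadb:10.3                    # database server
        environment:
          MYSQL_ALLOW_EMPTY_PASSWORD: 1
```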
And then not only are you polluting the branch you're working on, you're also wasting a lot of time. Being able to jump into the environment after a job has finished, so you can run some commands and figure out what's wrong, will save you a lot of time. Also, CircleCI offers a command line interface. Because it uses Docker to build an environment and run whatever you want it to run, you can run the same thing locally: you download the command line interface and you run circleci build. In the screenshot, what we see below the command is the CircleCI CLI pulling the images down to my local machine and then running the jobs. This is great, because I can jump into the containers in my local environment and do my own debugging. It helps me, for example, when I want to verify that jobs are passing before even creating a pull request — so when I do, I'm pretty sure the jobs are going to pass. Some of the cons: well, it's not free. It's software as a service, and I would say the free tier is good for getting a taste of how it works. For example, I maintain the Drupal Camp Spain website, I added CircleCI to it, and the free tier is perfect for our needs. But if you need more concurrency, you may need to upgrade. And because you're not in full control of the infrastructure, you may need to adjust your continuous integration strategy to what CircleCI offers you. Yeah, so when we looked at the cost for CircleCI, it's really interesting, because it did take us about 50 hours to set up CircleCI across 10 different repositories. But keep in mind that we had a ton of Jenkins experience going into this. We basically went from "we literally have no idea what we're doing with Circle" to done — meaning we learned the service, set it up, made it work, and made it repeatable across multiple repositories — in about 50 hours.
And now that we've done it once, it would be way, way less. For setting up similar projects right now — if you said, hey, Andrew, could you set up 10 repositories? — it would probably take me half a day, or a day at most. What was really interesting is that with Jenkins, the hours spent maintaining it were fairly even month to month across those 15 months. It wasn't just an initial sprint of setting it up followed by minimal maintenance; there was a pretty decent amount of maintenance throughout that period. What we found with Circle is that once we finished setting it up, there was basically no maintenance. You could pretty much ignore it, which was kind of fun. The only time you're really forced to do any sort of maintenance is for a software issue, like when you're doing Drupal core upgrades. If your module is being tested on Drupal 8.4, you're going to need to update the containers so they have Drupal 8.5 and make sure everything passes. That can mean PHPUnit configuration changes, those sorts of things. But that's not Circle's responsibility, and it's going to be the same regardless of whether you're using Jenkins, Circle, Travis, or anything else. From a pricing perspective, you can go all the way down to free. The way their limits work is that the one-job concurrency limit is for private repositories; if you're dealing with a public repository, you actually get up to four concurrent jobs for free, which is great for open source, or just for work you don't have to keep private. For private repositories, because you're paying month to month, maybe you have a really large team today and you want your tests to always run really fast and give feedback as quickly as possible — so you go all the way up to eight containers, and each container costs $50 a month.
And then your project is done, your team moves on to something completely different, and the site is still there. You can scale all the way down to one or two containers and your jobs will still work; you don't have to change anything about your integration. It just means that when someone does make a pull request, those jobs run one after the other instead of all at the same time. So you have a lot of flexibility there, which is pretty nice. As I said before, I've used Travis in the past with open source projects, but I thought, how about a Drupal 8 project? That would also give me a chance to compare CircleCI and Travis CI. They are very similar: you can do the same things, and the setup process is the same. You may need to do a little more setup with Travis CI, but you'll end up with exactly what you get with Circle, and you may also get more flexibility for your continuous integration, which can be a good thing. Travis CI has been out there for seven years. There's a lot of documentation online about it. The Symfony project, for example, uses it, so it's very popular, and it's very probable that somebody on your team knows it already, which is also a good thing. Here's an excerpt of the Travis YAML file I added to that Drupal project. We're defining the cache strategy, again caching some of the Composer files. Travis has a different approach than Circle: Circle lets you define the mix of images you want to use, while Travis has two base environments, Trusty and Precise, with a lot of tools already built in and available for you to enable. Out of those tools, I needed the Docker service, because in this case — especially for running the Behat tests, where there's a database, a web browser, Chrome running as well — I wanted to define my environment using Docker Compose. So I took the docker4drupal template, and that's what I'm using here within this Robo task.
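The Travis setup just described might be sketched like this. Again, this is an illustrative assumption rather than the project's actual `.travis.yml`: the PHP version, cache directory, and Robo task name are placeholders; the shape (Composer caching, the Docker service enabled so a Robo task can bring up a docker4drupal-style Compose environment) matches what's being described.

```yaml
# Hypothetical .travis.yml sketch; versions, paths, and task names
# are illustrative.
language: php
php:
  - 7.1

sudo: required
services:
  - docker            # enables Docker so the Behat job can use Docker Compose

cache:
  directories:
    - $HOME/.composer/cache   # reuse Composer downloads between builds

install:
  - composer install --no-interaction

script:
  - vendor/bin/robo job:run-behat-tests   # Robo brings the environment up via docker-compose
```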
What we're doing here is leveraging the build matrix to define a JOB variable, and each of its values is passed to a Robo command. This turns into three different jobs that run in parallel when somebody makes changes. Again, all the implementation of each of these jobs is within the Robo tasks. If you're curious, by the way, this link takes you to the actual implementation, so you can see the full details. The setup is identical to CircleCI: you add a Travis YAML file to your repository — and if you don't have one, you go to step two, authenticate, and use a template that Travis provides — and then you allow Travis CI to watch for repository changes, and there you go. Somebody makes changes to the repository, Travis triggers the jobs and reports the status on the pull request. Some of the pros: I found a lot of useful documentation as I was implementing the CI for Travis, which is also a good thing. One of the cons is that Travis doesn't have the concept of artifacts the way CircleCI does, and in my particular case I wanted to host a coverage report somewhere. I looked at the documentation, and Travis suggests you either upload the files to an S3 bucket at the end of the job, or use this tool, which ended up being really nice: Coveralls. You authenticate with your GitHub account against Coveralls, and then you run a command at the end of the Travis YAML file. That command automatically finds the coverage report and uploads it to the Coveralls website, and it also integrates with your pull requests: it will tell you whether coverage is going up or down in that particular pull request, which I think is great. Also, Travis CI supports Docker, but it needs a little more setup. I ended up finding that this actually gives me more flexibility for this particular implementation.
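The matrix pattern being described might be sketched as follows. The JOB values, the Robo task naming, and the Coveralls uploader command are assumptions for illustration (the `php-coveralls` binary here stands in for whichever Coveralls client the project actually used):

```yaml
# Hypothetical excerpt of .travis.yml showing an env matrix; each row
# becomes one parallel job. Values and commands are illustrative.
env:
  matrix:
    - JOB=run-unit-tests
    - JOB=run-behat-tests
    - JOB=check-coding-standards

script:
  - vendor/bin/robo job:$JOB   # one Travis job per matrix row; logic lives in Robo

after_success:
  # Assumed Coveralls upload step, run only for the job that produces coverage.
  - if [ "$JOB" = "run-unit-tests" ]; then vendor/bin/php-coveralls -v; fi
```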
I was using Docker Compose for running the Behat test job, but I was using the Precise environment in Travis directly for things like checking coding standards, because that's all I needed. Also, Travis doesn't offer a command line interface for running builds locally, but you can overcome this by structuring your implementation the right way: because we moved most of the logic to Robo, and we were using Docker Compose to build the environment, I could run the tasks locally, and that was fine. I'd get pretty much the same output and results I'd get on Travis, so I realized this isn't actually a problem. Something else to keep in mind is that keeping as much of your test logic as possible outside of the CI configuration makes it a lot easier to do things like running it locally. Following that model gives you a lot of flexibility. So, you've looked at all these services, you're excited about one of them — or maybe more than one — and you really want to bring these tools into your team. Probably the first conversation that's going to come up is not actually about the cost or the finances, because that's usually not a big deal, but: what's the cost to the team? What's the cost to your workflows? What's the cost to the deliveries you need to make? When you start talking to your developers, you're totally going to hear them say that tests are too hard — that they've tried writing tests before, spent a week writing them, didn't get anywhere, and ended up throwing them out so they could get their tickets done. You're totally going to talk to QA folks, and they're going to say, sure, it's great that you're writing all of these tests — Behat regression tests, for example — but wait a minute: it's our job not to trust the development team. Just because you're writing tests doesn't mean you wrote the right tests. How does this actually make our lives any easier or better?
You're going to talk to your project managers, and they're going to say, sure, test coverage sounds great, but when I worked with teams on a couple of previous projects, I discovered they were spending all their time writing tests to prove a bug was fixed rather than fixing the bugs that were bringing the site down. When you talk to your product owners, they're just going to say: we need to deliver. These are the words every product owner will say — I don't care how you do it, but you have to deliver these features by that date, because we have this contract. So before you start telling your team, hey, we need to do all of this continuous integration and testing and so on, you really need to evaluate your project to see if it's a good fit for continuous integration, because not every project is. One of the first questions I like to ask myself and the team is: the code you're writing — do you expect it to last for six months, or for many, many years? If you're building a Drupal 8 marketing website for a one-time event that's going to launch, be shown, and then be turned into static HTML when it's done, it's probably not worth writing Behat tests for it — unless there's a serious economic reason; maybe you're selling tickets or something like that through the same site. If you're going to stop maintaining the site fairly soon because it has a limited shelf life, testing isn't necessarily worth the investment. But for a long-term project — especially anything internal to business processes, or API integrations — if you come into a team and discover that the CMS you're replacing has been around for 12 years, odds are your code is going to be around for 12 years as well. We also like to look at this not just from a code perspective, but from a business perspective.
Some businesses are in a growth phase: they create one product, one website, one feature, and that feature stays there and stays useful, but they move on to something new, adding to and building out their portfolio of whatever it is they're doing. Other businesses are in an industry under rapid change, and they know that 6 or 12 or 18 months down the line, what they're doing today is probably going to be completely irrelevant; you're going to have replaced 95% of the code you're writing. This does play a little bit into the longevity of the code, but the point is slightly different: you might write a great API integration for some mail service and test it really well, and that integration is only going to be used for 8 months, because at a business level those requirements are always changing. We also like to talk about how the code you're writing is going to be updated and distributed. Maybe you're building a single website and you have complete control over how it's updated: you need to fix a bug, update a module, or move to a new version of Drupal core, you just do it. Or maybe you're writing software that's being distributed to teams you don't even talk to. In those cases it's a lot more important, when you do have support issues and need to communicate with those teams, to be able to say: look at our test suite, look at our coverage, look at all of the Behat tests we're providing as part of this software, to show that what you're delivering is what they're expecting of you. And one edge case for this is what I like to call the one-time code dump. Sometimes you will work with teams you're not really connected to, but for business reasons you will write them some code: a Drupal module, a Drupal theme, whatever needs to be done.
And then you're basically going to give it to them and never see that code again. In those cases you have to be really careful about how you integrate continuous integration, because unless it's a multi-year project, how do you know that the other team is using the same tooling you are, or that they even care about continuous integration? It's really expensive to run a full continuous integration workflow on a piece of code for four or six months just to hand it to another team and have them turn it all off. That's a lot of wasted investment. Then there's the environment your code is running in. If you're building a single site, you control the PHP version, the Drupal version, all the modules, all the libraries, everything. If you are distributing your code, maybe you need to support PHP 5.6 through 7.2, and maybe you need to make sure you work on more than one version of Drupal. As soon as you start adding these variables to what your team needs to support, the business case for continuous integration makes a lot more sense, because you don't want every developer switching between four releases of PHP to make sure they didn't accidentally use some PHP 7.1 feature in code that needs to run on PHP 5.6. QA planning is really important as well. In terms of team structure, we have found the most success comes when your QA team has some development and PHP experience. If you can bring developers into your QA team to not just help with the test plans but actually work on the automated tests, it really helps you leverage the continuous integration tooling beyond just your development team. It means your QA team can start to become confident in the tests you're working on. So when you get back to those conversations with your developers and they tell you that tests are too hard, remind them that tests are really tools, and that the APIs you use in writing tests teach you the underlying systems.
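The multi-version support just described, testing distributed code against every PHP release it claims to support, is exactly what a CI build matrix is for. As a minimal sketch, assuming a Travis-style config and a plain PHPUnit script:

```yaml
# .travis.yml fragment (illustrative): run the same test script against
# every PHP version the distributed code claims to support, so nobody
# has to juggle four local PHP installs.
language: php
php:
  - "5.6"
  - "7.0"
  - "7.1"
  - "7.2"
script: vendor/bin/phpunit
```

With this in place, a PHP 7.1-only feature like nullable type hints fails the 5.6 build on the very first push, instead of surfacing months later on someone else's server.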
You know, you're mocking test data, you're making sure that your code is well architected and able to be tested in the first place, you're making sure that tests are repeatable over time and not just today. These are all good things for a stable, quality code base. And when it comes down to it, "tests are too hard" is not an argument about skills; it's an argument about time. If you're spending what seems like an inordinate amount of time working on tests, then maybe it's time to step back, stop working on tests, and fix your underlying code base. When your QA team says, we don't trust the tests you're writing and we're not saving any time by doing this, make sure your QA team becomes completely integrated with your development team. They need to be in the same meetings and the same retrospectives. They need to be part of the team, just as important as development and design and project management. And they need to be able to say, hey, you wrote a Behat test and that's great, but what you wrote is not actually what we want to test. They need to be able to look at what the developers are writing and weigh in on it. As a developer, you need to be able to say, hey, I think I've got some good Behat tests here, can you take a look and use your QA skills to come up with new test cases? When your project managers are worried about the time spent on tests, talk to them about how important it is to be tactical about what to test. You don't need to test literally every single line of code on a Drupal site. You're not going to bother duplicating tests for Drupal core, and you're not necessarily going to test every little theme feature. You're going to focus on the features that are the most important from a stability and maintenance perspective, or that are directly related to your site's ability to generate income. And it's really important to trust your developers.
When your developers say, I want to spend the next morning finishing up the test cases on something, what they're telling you is that they want to spend the next morning making sure they didn't break anything. So give them the time to do that, so that you're discovering those bugs during the development phase and not the release phase. Also, track your hours. It's really helpful to be able to go back and say, I feel like we're spending too much time on testing or continuous integration, and if you're tracking your hours effectively, you can get the data to find out whether that's just a feeling or whether it's reality. And when your product owners say, I don't care what you do, we just need to deliver, talk to them about how continuous integration is a necessary step toward continuous delivery. You cannot continuously deliver a product without a continuous integration system underlying that philosophy. Think about how many people in this room have had to deploy a hot fix to production on a Friday night. Delivery of that sort of hot fix is a lot faster and easier if you have good continuous integration in place. I remember one time, early in my time at Lullabot, there was a hot fix I had to deploy involving a settings.php change, and I deleted a semicolon by mistake. That meant that instead of one page on the website breaking, every single page was throwing a PHP error. That was not good. If we had had continuous integration in place, the system would have told me right off the bat: hey, I can't run your Behat tests because literally every page is broken, and we wouldn't have made things worse in production. Hot fixes are stressful, and anything you can do to reduce that stress is going to make your team happier and more effective. So that's a lot to think about when you need to discuss bringing continuous integration into your team.
And so I'd like to condense this down into some talking points that you can bring to your team. First of all, for the developers on your team, talk about how you will shorten the testing cycle: instead of writing code, merging it, waiting a day or two for QA to get back to you, and then dealing with whatever fixes you need to make, you're going to get immediate feedback. In terms of managing the sprint risk we talked about at the beginning, time-box the effort you put into setting up your initial continuous integration strategy. Assume it's going to take you at least an hour; maybe less if you already have experience with a tool and have done it before, but realistically not less than an hour. If you've never done anything like this before, it's totally reasonable, even with a pre-built hosted service, to assume you're going to spend half a sprint learning the system and getting it up and running. And if it's taking you longer than that, if you find that you've spent a week trying to set up testing and it's not working or you don't feel like you're making progress, that's a really good time to step back and think about why. Usually that's a case where you need to look at your team composition, your team processes, and your code, and figure out where the friction is coming from. For QA and project managers, this really changes the job description for a lot of QA folks. A lot of QA teams spend a third or half of their time doing regression testing by hand. Once you have, say, a full Behat suite covering your website, QA's focus shifts away from doing the regression testing, because you're writing code to automate those regression tests, and toward writing new test cases, which, when it comes down to it, is what every QA team I've ever worked with is best at.
You get the best results when you can say to your QA team: you have a week to do nothing but think about new and insidious ways to break my code. That leads to a higher quality product. To help manage the product risk, especially if you are delivering something to other teams or organizations, the metrics you get from continuous integration, whether that's the change in code coverage over time or how many parts of your system, individual modules say, now have full test coverage, are metrics you get to share. The code quality reports you can get by running tools like PHPMD or PHPCS, or by using the various hosted services out there, give your technical project managers, who may not know a lot about Drupal or PHP or even the web depending on their background, something to look at and say: we know this is well-architected code because it's following these best practices in terms of code structure and architecture. For product owners and project managers, it's really important to share how, once you have good continuous integration in place, you can stop babysitting the code that you know is eventually going to break. You can set up a job that runs once a week or once a night to test against the latest minor and point releases of everything that comes out, and you'll get an email telling you: hey, guess what, our module broke and we need to go fix it. It also means that when developers are brought into a code base they've never looked at, that maybe no one on the team has looked at, they've got tests, reports, and all this metadata about the health of the project that they can look at and start contributing right away.
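The nightly job just mentioned can be expressed directly in CI configuration. Here is a minimal sketch of a CircleCI 2.0 scheduled workflow; the workflow name, job name, cron time, and branch are illustrative assumptions:

```yaml
# Fragment of a CircleCI 2.0 config.yml (illustrative): a scheduled
# workflow that rebuilds against the latest upstream releases every
# night, so a breaking minor or point release of core, a module, or a
# PHP library is caught without anyone pushing a commit.
workflows:
  version: 2
  nightly:
    triggers:
      - schedule:
          cron: "0 4 * * *"   # every night at 04:00 UTC
          filters:
            branches:
              only:
                - master
    jobs:
      - update-dependencies-and-test
```

The referenced `update-dependencies-and-test` job would be defined elsewhere in the same config, typically updating dependencies to their latest allowed releases before running the normal test suite.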
Then, in terms of financial risk, the monthly costs are relatively small compared to what you spend on a team of five or ten developers, and they're also flexible, which is really the most important bit. You're not locking yourself into an expensive two-thousand-dollar-a-month contract with any of these services. Here are a few links for you to dive deeper if you'd like. If you're looking to implement continuous integration with CircleCI or Travis CI, we have written an installer that will place a lot of files in your repository for running the kinds of jobs we talked about, which should give you a foundation; we wanted to speed up the setup process as much as possible. Andrew also maintains another installer for a standalone module, so if you have a Drupal 8 module on GitHub, you can use an installer to set up continuous integration for it as well. And here's another link, to the articles I've written about my personal experience with CircleCI and Travis CI. If you've never joined a sprint, I encourage you to do it; it's a lot of fun, and you get to sit next to the people who drive initiatives in the Drupal community. And please, if you enjoyed this session, fill out the survey and give us your feedback. After this session we'll be at the Lullabot booth, I think it's number 100, so if you don't have a chance to ask a question now, or you just want to chat with us, feel free to join us there. So we'll start now with the Q&A. I'm speaking from a position of mostly ignorance, I haven't used CI before, but I'm envisioning how I would use it and I'm getting excited about the idea. When you have a website that has LDAP authentication, how do you test it? How do you have a robot test something that needs a person's login to get into the website? Yeah, so there's a couple of different ways you could approach that.
If you're building on Drupal 8, I'm going to split this into two cases. If the authentication is against Drupal itself, it's just a Drupal user account, and you're writing a Behat test, then the Drupal Extension, which pretty much any Drupal Behat suite uses, lets you do something like create a new user with the given roles and then automatically log in as that user. So when you push a commit, CircleCI will spin everything up, create or import the database, and then create that mock user that you log in as. That way you're not encoding passwords or anything like that in your configuration, and you don't have to worry about that breaking. If it's an external service, then you want to mock those HTTP calls. If it's an OAuth login or something like that, in Drupal 8 it's actually pretty straightforward to, not hack, but hijack the HTTP client and say: hey, instead of actually executing this against a live service, return this JSON that I've got in a file. You can even do that with Behat tests, so you're not dependent on that external service even being up. If it's Drupal 7, it's pretty much impossible to do that, so you need a test account or something. Great session, but for us the biggest issue is CSS: a developer will change something, it works on the page they test, and then we realize it actually screws up all this other stuff. So do you have any recommendations for visual regression testing? We've done some work with it in the past, but personally I haven't done any visual regression testing. I know there are tools out there, but I cannot recommend any in particular. Have you, Andrew? Tugboat supports visual regression testing. Tugboat is a Lullabot tool and service that basically does something like this, except it gives you an interactive website instead of just running your tests and so on.
So it does do visual regression testing between what's on master, say, and what's in your branch. Obviously it's not going to tell you whether the changes are what you wanted; it's just going to highlight them and ask, is this what you intended? But that can be a really good tool for those sorts of situations. Ben is the Tugboat maintainer. Ben, can you tell us what technology Tugboat is using to create the reports? So, phantom.js and, sorry, what was the other thing? All right, and some image overlaying library, probably on npm. So feel free to go and ask Ben if you need more details. It's funny he should ask that, because I submitted two sessions about doing visual regression testing in Behat, doing screenshot comparisons and things like that, and both were not accepted. So I'm a little bit bitter. Next year. Yeah, hopefully next one. One thing I want to point out: if you want really easy continuous integration setups for your project, Acquia BLT, which is a really nice and not Acquia-specific tool set for building, launching, and testing your site or application, has the setup built in already and will do all of your build steps and things like that. With just one command you can say: initialize this project for Travis, and you get Travis set up right away. It's really, really helpful for that, and they are always looking for people to try integrating other services, like CircleCI and other build platforms, so contribute if you can. Thanks. We'll give it a go. Thanks. I'm curious whether you have put any thought toward how to abstract this above the site repository level.
I imagine you have many clients and many sites, and you implement your best practice at that given moment, but we're always learning, and six months later my CircleCI configuration probably looks a little bit different than it did. We've recently solved this a little bit for deployments by using Ansible roles in a separate repository that all the sites can share. I don't know if this is a one-to-one comparison, but is that interesting at all, does it make any sense? Yeah, so what we do right now with our installers: there are certain files, like the CircleCI configuration file, that have to exist on a per-repository basis, and you actually want that, because it means you can easily make little tweaks over time that are specific to your project. We've worked on systems before where everything was controlled, say, in Jenkins, and it was really painful when all of a sudden one module needed one new PHP extension and that affected every test out there. So what our installers do is, if you want to update, you re-run the installer, and because everything lands in your individual repository, it's up to you as a developer to say: oh, I want this change; no, I don't want this change; oh, this is a hack I added to fix some bugs and I no longer need it. You basically use git add --interactive and do as you see fit. For things like the CircleCI files, we currently put the RoboFile and so on into the Docker images, and those are tagged with version numbers, so you know they aren't going to change out from under you, and because that tag is in your CI configuration file, you know it's locked down. What was the other thing we do? No, maybe that's it. You got anything on that?
Yeah, well, first of all, if any of the code you mentioned is public, we would like to see it, so if you can tweet a link to it, I would really love to have a look. In my personal experience, every CI implementation is different, so once I'm done with a particular client or project, I let the team just continue evolving it. What I found lately, when I did the transition from CircleCI to Travis CI, is that I got to a point where everything was extracted into Robo tasks, and I kept the platform-specific stuff from CircleCI and Travis out of them. So what I want to do now is create a new repository that contains only the RoboFile, and then the CircleCI and Travis CI installers would use that, because maybe later I want to move to GitLab and reuse those same tasks. And that's about as far as we can go, because beyond that you may break things in other projects while making changes to a central implementation. But let's see what you have; anyway, that's great, thanks. Thanks for your talk. Just speaking to the visual regression testing tools, we're using one called backtrack.io, which is a third-party service that can integrate with basically any of your repositories. Okay, thank you. All right, that's it, so thanks everybody for coming, see you around. Yes, join us at the booth if you have more questions.