to get things started. Is there a mic in this room? Can the people at the back hear me? Do I need to speak up? Hello? Yeah, there is a mic in this room. There we go. Okay. So, hello. My name is Andrew Berry and I'm here to talk about continuous integration. Who here has heard the words "continuous integration" before? Oh, exciting! Who here has implemented continuous integration on a project? Oh, even more exciting. Okay. Who here has implemented continuous integration on a project, used it a year later, and it was still running? I've been there too, many, many times. This presentation and this topic are about how to get to the point where continuous integration isn't just something you do at the beginning of a project, or until you launch the website to your end users in production, but something that just becomes part of your workflow, like using pull requests in GitHub or Composer on Drupal 8 projects, those sorts of things. I work at Lullabot. We are a full-service design, strategy, and development agency with a long history of working with Drupal. I've been there for many, many years now, and I've worked on many different projects, some with zero continuous integration and some with tons of continuous integration. And throughout all of those projects, even the ones where we weren't using continuous integration, I think we have come to a consensus about what it brings to our teams, which is this: when it's properly implemented, it enables us to deliver more software, software that is of a higher quality, and to continue to deliver that software over a longer period of time without worrying about the software decaying into fragility. But for this to actually happen, you have to manage sprint, product, and financial risk. If you ignore any one of those three risks, your continuous integration is at serious risk of failing. So, we started doing continuous integration because we were really selfish.
We were doing a lot of peer reviews on pull requests in GitHub for private projects, and we really didn't like having to manually type in comments for common things which computers can tell us, whether that's as basic as code formatting rules, or as basic as "does the homepage on the site actually load if I merge this pull request in?" What we really wanted to focus on were processes that could be automated while the team worked on tickets, on the work that actually matters. We have tried way too many continuous integration tools. Jenkins is really well known; previously it was known as Hudson, before the Oracle apocalypse when they bought Sun and everything fell apart. We've used it on AWS and elsewhere. We've also used CircleCI, which is a sort of continuous-integration-as-an-app, hosted in the cloud. Travis CI, same idea. And the truth is, this is an exploding market right now; there are all kinds of other continuous integration tools which you can bring into your organization. We like to say to our clients: use the one you already have. If your company is using GitLab internally to store all of your projects, use GitLab's built-in continuous integration tools. Don't worry about trying to set up something else. You may find that it's really just a case of using the tools you already have versus investigating a vendor or a software product and setting it all up. Within those continuous integration environments, we've settled on these tools. Drush, because we're dealing with Drupal all the time. Robo, which Drush 9 is actually built on now; it's a command-line task runner similar to Grunt or Gulp from the JavaScript world, but in PHP, which is great because if you're a PHP developer, you don't have to write as much bash, which makes everyone happy.
Docker, because while Docker still has some serious issues running in production, especially with Drupal websites, it is amazing for continuous integration and testing environments, because all of the cleanup, all of the tear-down activities, are basically "delete the containers and I'm done." And finally, we use Behat, especially on projects that have full QA teams, because it allows us to write tests, run by the continuous integration service, that the QA teams can look at and agree: hey, this is actually what you're supposed to be testing and what the software is supposed to be doing. And I like to jump ahead a little, because this is kind of the conclusion of the whole deck: if you don't have any continuous integration software today, investigate CircleCI or Travis. Travis is a little bit more complex to set up, but it also has a little bit more flexibility. CircleCI is also a really great tool; that's what we're using on the project I'm on now, and I've been really, really happy with it. And if you need to keep everything in-house (maybe you've got really restrictive firewalls between your website and the internal APIs and services you need to use), there are behind-the-firewall versions of Circle and Travis, but they're pretty expensive, and they're much more oriented towards organizations with multiple teams, not individual teams. Jenkins is great in that case, when you do need to keep something behind the firewall. So let's talk about Jenkins. We've been using Jenkins, I mean, I've been using it at least since 2009, 2010, for a variety of tasks. It's really great for its flexibility, in that at its core you can basically just put shell scripts inside of it to do whatever you want. But that flexibility comes at a cost, in that it can take days to go from a vanilla server to a fully working, integrated Jenkins server.
Even on projects I'm on this year, where the people have been using Jenkins for a long time, know how to use it, and understand it, it's on the order of days to get it fully up and working the way you want from scratch, as compared to a much shorter amount of time with some of the other services I'll talk about later. The other thing to keep in mind about Jenkins is that it was one of the first web UIs for continuous integration out there, which means there are lots of plugins for it, lots of community support, lots of documentation, but it doesn't necessarily follow all of today's best practices directly, such as keeping your job configuration in Git rather than in Jenkins' database somewhere. What does Jenkins look like? As I said, at its core Jenkins gives you the ability to run commands, and it comes out of the box with a ton of plugins which are really focused on Java development. Jenkins is a Java product, written by Java developers for Java developers, and as PHP and JavaScript developers those don't necessarily apply to us. But in this case I'm showing an example we had with a client, where we had Docker installed on the Jenkins server, and every time a pull request was opened, in this case, it would run the JavaScript tests, so that was Behat, basically. It would spin up the Docker container, take the code that came from GitHub, run the tests, and report back. Something we really, really recommend is that when you're using Jenkins, you keep these shell scripts, or anything in Jenkins, as thin as possible. I've dealt with Jenkins installations where there are hundreds of lines of bash code in those text fields, and it is really painful to maintain. It's so easy to make mistakes, and it's also really hard to deal with the fact that maybe you've got to change those scripts for your development branch, but you don't want to change them yet for production, because you haven't merged to production yet.
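As a minimal sketch of that "thin" pattern (the script path and step names here are invented for illustration), the Jenkins "Execute shell" field would contain only a one-line call like `./scripts/ci/test.sh`, with all the actual logic versioned in the repository:

```shell
#!/usr/bin/env bash
# scripts/ci/test.sh -- hypothetical thin CI entry point, kept in Git so it
# can differ per branch. The Jenkins job itself only calls this script.
set -euo pipefail

run_step() {
  # Run one named step; set -e stops the build on the first failure.
  local name="$1"; shift
  echo "--- ${name}"
  "$@"
}

# In a real project these would be composer/phpunit/behat invocations;
# "true" stands in here so the sketch runs on its own.
run_step "install dependencies" true
run_step "unit tests" true
echo "all steps passed"
```

Because the script lives in the repository, a change to the build process rides along with the pull request that needs it, instead of being edited live in the Jenkins UI.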
So keeping these as thin as possible is really useful. Oh, there we go. And here's an example of all of those scripts that were being called by Jenkins. If you look on the left-hand side, you can see we had code coverage reports, which was basically running the unit tests and telling us whether we were actually testing our code or not; code sniffer, which basically runs the Drupal coding standards; installing Drupal, which is really needed for Behat; tests.sh, which is basically the unit tests without running the code coverage reports, because as soon as you turn on code coverage it takes a lot longer to run, and you want that very quick, instant feedback, that glorious green check box in GitHub; and then test.js, which in this case was Behat with PhantomJS. You don't need to use Phantom anymore. There was a while when Phantom was not really well maintained and had a lot of problems running within Docker. As of Drupal 8.5, you can work directly against Chrome, or against Selenium, without needing some other tool in the middle. So that's definitely worth taking a look at, because it will cover both Behat and any JavaScript tests you have written in Drupal. So what does it take to set up Jenkins? You can basically go to the Jenkins website and follow their directions for installing it, as standard as any other sort of Linux package. The worst of it is that you might have to figure out where you're going to get Java from; if you're using Ubuntu, for example, there's OpenJDK, which you can get through apt. When you do this installation, I would recommend taking a look at what they call their Blue Ocean user interface. That's a complete reimagining of the Jenkins user interface, following best practices that other tools have brought to the limelight, including keeping your jobs in each Git repository that you're working with instead of out of band.
And it has this really neat workflow where you can make changes through the UI, and then it will open a pull request to your repository with those changes that you made. So you can see the jobs run and know that you're good to go before you merge them. The downside is that there are a good number of Jenkins plugins, which you might just assume you'd be able to use, that haven't been updated to work with that user interface yet. In fact, even for a ton of Jenkins management itself, you have to revert back to the classic interface. Once you've got that set up, you have to figure out how Jenkins is going to talk to GitHub, or whatever source code repository you're using. There is the generic webhook trigger, which is nice because it works with basically any webhook from any version control provider you have. There's also the GitHub pull request builder, which has polling functionality. If you're setting up Jenkins, chances are you're doing it because you have to keep it behind the firewall, in which case webhooks and POST requests from your GitHub repository, or whatever it is you're using, aren't going to get through that firewall to tell Jenkins that there's work to be done. So you might have to fall back on a five-minute polling interval to see if anything has been opened in your source code repository that needs to be tested. And then there's setting up the jobs. As far as the advantages of Jenkins go, it's free and customizable. It is the best example I can think of of "here are all the tools, here is all the rope, make your choices." And you can adapt it to your workflow really well. For example, my colleague added something so that, if you look at the code coverage line, it actually shows the stats in GitHub. So you don't have to click through to the reports to see whether coverage is going up or down. A lot of the other tools that are more turnkey don't give you all that customizability that you just get with Jenkins.
And as I mentioned, there are lots and lots of plugins and documentation available for Jenkins, though you do have to deal with the split between the old and the new user interfaces, which can make things a little bit harder to understand if you're learning it for the first time. Something else which is great about Jenkins is that, because it literally lets you do anything you want, you don't necessarily have to tear down environments at the end of a job. A lot of the cloud services out there really don't want to expose their services to interactive commands which aren't coming through a Git repository, because if you do something like, you know, turn on a mail server, set it up as an open relay, and start sending spam everywhere, at least they have an audit trail of where that came from. So in this case we wanted to have testing environments that were interactive, so that QA and stakeholders could click through and work with them, which was possible, though very complicated, with Jenkins.
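As a rough sketch of those per-pull-request environments (the container name, image, and port scheme here are all invented for illustration), the idea is one container per pull request, with teardown reduced to deleting the container; a "plan" mode just prints the commands so the sketch can be dry-run:

```shell
#!/usr/bin/env bash
# Hypothetical per-pull-request review environment on a Jenkins host.
set -euo pipefail

PR_NUMBER="${1:-123}"            # pull request being built
ACTION="${2:-plan}"              # plan | up | down
NAME="pr-${PR_NUMBER}-web"       # one container per PR
PORT=$((8000 + PR_NUMBER))       # each PR gets a predictable host port

up_cmd="docker run -d --name ${NAME} -p ${PORT}:80 myproject/web:latest"
down_cmd="docker rm -f ${NAME}"  # cleanup really is just deleting the container

case "$ACTION" in
  plan) printf '%s\n%s\n' "$up_cmd" "$down_cmd" ;;  # dry run: show the commands
  up)   $up_cmd ;;
  down) $down_cmd ;;
esac
```

QA and stakeholders would then click through the environment on its port while the pull request is open, and the `down` step runs when it's merged or closed.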
If you're not dealing with a firewall that's locking you down, I would definitely look at a hosted service for this use case instead of wiring it up yourself. There's Lullabot's Tugboat, there's Probo.CI, and there are a couple of other services out there which will, every time you file a pull request, give you an interactive testing environment with the results of that pull request. But if your organization isn't going to be compatible with using any of those services, then Jenkins at least gives you a way out. But: lots of setup, lots of tweaking, server maintenance, security patches, and it's tricky to manage your jobs over time. It's more of a full-time job than you might like, and while the software is free, time and hosting are not necessarily free. I was actually able to pull some data from one of the projects we were on to get an idea of how much it was costing us to host Jenkins, because one of the things many of our clients would say to us is, basically: "Well, Jenkins is free, we have internal servers already, why would we spend $50 or $100 a month on some external service when we don't have to?" Well, this team had eight developers working full-time and ten-plus repositories; this was a project where every repository was an individual Drupal module that other internal teams could work against. There were four jobs per pull request, basically mirroring what I showed earlier, and in the real world that was about 18 minutes of CPU time: if you were to run those jobs one after the other, in serial, it would take about 18 minutes to get through them all. On average we were looking at about eight pull requests a day, because in Git you're not necessarily pushing with every single commit you make; we were looking at about 24 total commits on an average day. And this also assumed that the teams were testing locally and running tests locally; if they were working on something in the user interface, they had Behat set up locally, so they weren't just pushing code to see if it would pass, which would of course reduce the amount of work that your server has to do for your day-to-day development. In this case, this was an EC2 instance, basically set up by itself as a virtual private server, in that paradigm at least, and this is the easiest way to set up a Jenkins server, and the easiest to understand. It does mean it's always there, adding to your bill; you're paying that per-minute cost. There are some downsides to this, though. Like, what happens when everyone has a three-day weekend? You go out for a national holiday, and then you come back on Tuesday, and everyone's doing this mad dash to get the last sprint's work in. All of a sudden, instead of eight pull requests a day, you're seeing 16 or 20 or 30 pull requests from that backlog of work getting opened, and then your server falls over because it's run out of memory. And your team might say, "Well, it's fine, we're going to be in the cloud, that's where we autoscale." Who here has auto scaling set up on their production Drupal websites, where the number of EC2 instances or containers or something like that scales itself based off of the traffic? Yeah, okay. So if you don't have auto scaling set up on your production Drupal website, what are the chances you have the knowledge and the expertise and the time to set it up for your Jenkins server when your team tells you "we're going to do this"? Looking at what this was: an 8-core, 32-gig server at 40 cents an hour. Again, with Jenkins you can split it up into multiple nodes that are managed by Jenkins, but that's more time for your ops team or your developers to manage, and I've never come into a client project where they already had Jenkins and it was set up that way; it's always one big vertically scaled server. We were looking at about half a gig of memory as a minimum for each job. Composer can take a decent amount of memory, and this depends on whether you're building a site or building modules: building sites, Composer will use a lot less memory, but you might just have more work, more code, to evaluate.
If you're testing individual modules, you're probably running composer update, which means it's resolving dependencies and doing much more work, and that takes memory. And as far as concurrency goes, we found that about 10 concurrent jobs was about what this server could actually handle in real time. So, I pulled my colleague Juampy's hours from our time tracking system to see how much time he spent on this, and over 15 months he spent 160 hours. This was a project with no migrations, so I knew that every time he used the word "integration" in a time entry, it was related to continuous integration and not, you know, Drupal 7 to Drupal 8 migrations. So we're literally looking at hundreds or thousands of dollars in engineering costs to maintain this sort of server. And that didn't include the fact that we had a full-time ops engineer on the team with the client, whose hours we weren't even seeing, but who was working on it as well, so it was really higher than this. As far as understanding goes: your team probably knows Drupal and JavaScript and, you know, Jenkins, but what happens after the initial setup? What happens when you hire new people onto your team? Are you going to hire people who specifically know both Jenkins and Drupal backend development? The costs of hosting Jenkins efficiently are really related to the expertise of your DevOps staff and how much you pay them. You might think you could find someone who knows Jenkins and hire them, but they're going to be expensive, and probably more expensive than the costs of an unoptimized hosting architecture like I just went over. So that led us to CircleCI, which is a hosted service, as I mentioned before. There's no user interface for creating jobs; it's all written in YAML, which as a developer I really, really like. It's reproducible, it's peer reviewable, it's relatively straightforward to understand, and there's lots of great documentation for it. In this case, this is showing PHPUnit tests running, and you can see in the command line that we're using Robo to do all of the real heavy lifting of making sure the unit tests are set up correctly, with the right PHPUnit configuration and so on. The setup is fairly straightforward. We usually start by adding an existing Circle configuration to a repository, because we have ones which are meant to work with Drupal, but you could also go straight to step two: authenticate your GitHub account, allow it to watch the repository, and then it will give you a template. It'll probably say, "Oh, this is a PHP project, go ahead and use this template," and that will work well for most PHP projects that aren't Drupal or WordPress, because Drupal and WordPress are not most PHP projects. There are a lot of really neat things that you get out of treating your CI as an app. First of all, parallel processing is something you configure in the YAML file, and it's really easy to manage. You can say "I don't want to run Behat tests unless the unit tests pass" very easily. You can also say "I want to run composer install first," and then fan that out into all the different tests that need to be run as concurrent jobs, so that you're not running Composer commands four times for every commit, wasting time. I really, really like how workflows work with CircleCI. CircleCI uses Docker under the hood, and everything works through Docker, so you specify the set of images that you want to use. In this case, we've got a base image which has PHP and Apache and so on (you can go take a look at that); we're pulling in Selenium, which we use for Behat tests (the neat thing is that the debug variant includes a VNC server, so if you actually want to see why your test is failing, you can VNC in and figure out what's going on that way); and then MariaDB for the database. If you've used Docker Compose or anything like that before, it's very easy to wrap your head around; and likewise, if you wrap your head around this, Docker Compose will make a lot more sense. CircleCI gives you SSH access to jobs when they fail, which is great for debugging. You can just say "rebuild this with SSH," and at the end of running the job it keeps the containers up and running; you copy and paste the SSH command and you can do what you need to do. They even allow port forwarding, which is how we can use VNC to actually connect to the Chrome instances running for Selenium. Or you can even do something like forward port 80, so you can access a live environment from your local web browser. A lot of flexibility there. They also have a local command line tool which allows you to run the jobs locally, and as long as your jobs don't depend on anything on the internet and you already have the Docker images pulled locally, you can run the jobs completely offline. If you look at the CircleCI command line tool, it's 100 lines of bash; it's really straightforward. You can run the jobs locally if you want just by using the Docker commands that it spits out, which is really useful. And because they're running in Docker, if your laptop has the RAM and the CPU behind it, nothing stops you from running multiple jobs at the same time. I've totally had a job running 10 minutes of Behat tests while developing on another ticket, because you can do it, it's easy, and you don't have to lock up your whole local environment just because you want to run tests. There are some downsides. The free service for private repositories only allows one concurrent job at a time; you can define how your jobs could run in parallel, but it will simply run them in serial. And it's not as customizable as a Jenkins setup. If you do the whole free-for-open-source thing, you get four containers and a ton of minutes; they only really track minutes for abuse purposes. Once you're paying for it, it's unlimited minutes, no matter how much you're paying. What we did find was that we went, in 50 hours of developer and devops time, from "we literally have no idea what we're talking about when it comes to Circle" to completely done. For maintenance, we basically only have to touch the work we've done when there are Drupal core updates. So in that case, the team is building modules, and they want to make sure everything works with Drupal 8.5 or 8.6 when it comes out; you have to update those base Docker images to have the newer version of Drupal, but we'd have to do that work anyway, even if we were using Jenkins or some other service. The other thing which we really like about it is that they have a very friendly billing system. You can be on the free plan; we found that $250 a month was about as much as we needed, which I think is five parallel jobs at a time, and you can go up to 16 parallel jobs if you have a lot going on in your organization. And the other thing which is really neat is that you can share your plan with other organizations. For example, right now I'm on a project where Lullabot has a paid subscription to CircleCI, and we have granted our clients, totally separate GitHub organizations, access to use our slots. As an agency, that's really flexible, because when someone's no longer your client, they just get downgraded to the free plan and you can use those slots for whatever your new project is. So, up next: Travis CI. And this is going to look pretty familiar. It's YAML, and it's using Docker; you don't have to use Docker with Travis, you have to opt into it, but again it's really useful because it makes it a little bit easier to manage dependencies for your jobs. In this case, the example is showing what they call a matrix, which is how they handle concurrency in jobs. You basically have an array of variables, and Travis runs the same job with every combination of those variables; on a more complex project, you can have an N-by-N grid of all of the variable combinations that get run. This is really useful if you're doing a project where you need to support, say, multiple versions of PHP, because you can have different environment variables which control which PHP version your project is being tested against. But again, we're using Robo, which makes it really easy for us to keep as much of the logic as possible in a RoboFile. So if we want to move from Jenkins to Circle, from Circle to Travis, to whatever it is we're supposed to be working with, we're not locked into a given CI provider, and most of the logic is PHP, which any PHP developer can pick up. As far as the setup goes, it's the same sort of pattern as you've seen with a lot of these web services: you add your Travis configuration file to your repository, you authenticate with GitHub, and then it will watch for changes, look at your default branch, and see what's available to do. As far as advantages go, there is really, really great documentation for Travis. It's an older application than CircleCI, and CircleCI just went through a major rewrite where they basically threw out their old platform and rebuilt it on Docker, which is great if you started when they launched that new platform; but if you're on the old platform, you've got a migration you have to do within the next couple of months, I think.
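A build matrix like the one described might be sketched like this (a hypothetical .travis.yml; the environment variable name and the Robo task are invented for illustration):

```yaml
# Hypothetical .travis.yml sketch: Travis runs one job per combination of
# PHP version and DRUPAL_CORE, so this matrix expands to 2 x 2 = 4 jobs.
language: php

php:
  - "7.1"
  - "7.2"

env:
  - DRUPAL_CORE=8.5.x
  - DRUPAL_CORE=8.6.x

install:
  - composer install --no-interaction

script:
  # Keep the real logic in Robo so it stays portable across CI providers.
  - vendor/bin/robo test
```

Each of the four jobs sees a different pairing of PHP version and DRUPAL_CORE value, which is how you'd catch, say, a PHP 7.2 deprecation notice that only shows up against one core branch.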
Travis CI has been used by lots of big open source projects; if you've ever contributed to Symfony, they use it for all their pull requests, so you can take a look there and see how it works. It's even easier than CircleCI as far as finding documentation goes. But there are some limitations. It has no concept of artifacts, so you can't store any sort of results from your job, whether that's code coverage reports or databases after a migration or anything like that. You need to use third-party services as soon as you have artifacts, whether that's a service specifically for displaying code coverage reports in a web browser, or an S3 bucket or something like that if you need to upload something more persistent. We actually use artifacts quite a bit with Circle for Behat, where at every step it takes a screenshot of the web browser; then, when a test fails, you can go back and see what the state of the web browser was when that step failed. Usually it's an exception that I totally forgot to handle when I was writing the code, or something like that. And there's no command line interface to run the jobs locally, so you are stuck opening a test branch, pushing commits, and hoping they pass. Now, there are going to be a lot of people who are worried about the cost of any sort of continuous integration. If you're a developer, you're probably thinking, "Yeah, you know, tests are really hard to write and I don't have time to write those." I've even said that a few times. You've probably talked to QA leads who have said, "Well, I don't care if you write a huge regression test suite; my whole job is to not trust developers, so why would I trust that you can write a regression test suite that actually does the right thing?" Your project manager might be worried that you'll spend all this time fixing tests and infrastructure instead of fixing bugs. And your product owners are just going to be saying, "This is all tertiary to what I actually care about, which is delivering the product we're building on the dates we're supposed to deliver it." And so that's why it's really important to evaluate a project for CI before you just go and blindly implement it. Ask yourself: how long is this code going to be used for? We have one team where they've thrown out their entire project four times over six months, because it's basically a startup inside another enterprise, and they keep changing, literally, what the company is going to be doing. This is expected; this is them finding their business plan and their business model. But having everything 100% unit tested when you're likely to be throwing it out three months from now is not a great use of your time. Determine what environment your business is in. Is your business growing, writing code and then using that same code for five or ten years? Or are you writing code, using it for a year, throwing it out, and starting something new? Figuring out if you're replacing code versus growing code is really important. Determining the update model for your code is important too. If you're just building a website, then you have full control: you can fix bugs, you can deploy them to production, you can do what you want. If you're working with a team who's distributing code to other groups or departments within your organization, or even outside of your organization, and you have no control over updates, then you want to be really sure that when you tag version 1.0, everything is passing in your tests, because chances are no one will ever update past 1.0 unless there's, like, a literal SQL injection kind of vulnerability in it. That's how a lot of these big organizations work: they take the code that you first tell them to use, and then they just leave it forever. The other thing is, if you're handing off code, realize that the team you're handing it off to might not maintain the test suites that come with that code. So if you know you're writing code and then giving it to some other team, and there's going to be a transition there, talk to that team and find out what their plans are, what they want to do. A lot of them will say, "If you give us the tests and they're working, we will just keep maintaining them," and that's fine. And a lot of others will say, "We have half a developer working on this very intermittently, and it's just not something we're going to be able to maintain." Knowing where your code is running is really important in this evaluation. If your code is running on multiple PHP versions, multiple web servers, multiple operating systems, then continuous integration really helps prevent your QA team from breaking under the load, because then you don't have to manually test to make sure you didn't accidentally use a PHP 7 feature in code that needs to work on PHP 5.6, or that you didn't introduce code that throws warnings or notices on PHP 7.2 because some feature you've used has been deprecated. And planning with your QA team is really important. We have found the most successful model is where at least some of the QA team has a background as a developer, so that they can get their hands dirty, work with Behat tests, write Behat steps, and work on the automation side. That way they can trust that the tests are doing what they're supposed to be doing, and they don't need to double-regress everything that you're delivering. So when your developers say tests are too hard, remember that tests are tools and APIs that teach you the underlying systems; if your tests are too hard, probably your code base is too hard. And if your QA team doesn't trust your tests, then they need to be completely integrated with your tests. They're basically saying "we don't trust developers," which is part of their job, but it also shouldn't stay that way. You want them in your scrums and your demos, you want them to be seeing the continuous integration work that you're doing, so that they understand, when you write a new Behat test, that they can say, "Yes, this Behat test does what the business is expecting it to do." Talk to your project managers about how you want to be tactical about what you test, how you won't test everything. In the same sense, trust your developers when they say, "I want to write tests before merging this pull request," because that's them saying, "I'm not 100% confident that I don't have glaring bugs in this code, and I want to make sure I don't before we hit that button." And as developers, track your hours, so that if your project managers raise concerns about the amount of time you're spending on testing and continuous integration and so on, you at least have data to go back to them with and say, "Well, I actually only spent 10 hours working on tests; the other 10 hours were all in meetings," or whatever it is. Maybe you do find you're working too much on tests, and that's something you address; but having the data is really important when you have those conversations. And for your product owners, discuss how continuous integration is a necessary requirement for continuous delivery. Most product owners would really like to have continuous delivery as a core feature of their organizations, and if they're not coming from a technical background, they may not have a direct understanding of how continuous integration enables them to deliver products that are more reliable, on a shorter timeline. So having that discussion, respecting their need to deliver, can really get some buy-in on continuous delivery. So, a few more talking points for when you end up having these long meetings or email threads about whether to bring this into your organization. For developers, talk about managing delivery risk by reducing the testing cycle: knowing that when there's a hotfix, you create the fix, you wait for the builds to pass, and you can merge it without having to wait 24 hours for QA to get back to you.
Time-box the initial setup. I'd expect it to take no less than an hour, and I'd be surprised if it takes you more than a week, even with Jenkins. If it does take more than a week, that says your code base is probably not testable and not in a state where it's reproducible in testing environments, and it probably means your new developers are having a really tough time getting a local setup going. So it's worth stepping back, asking yourself why this project is so hard to get going on new infrastructure, solving that, and then coming back to continuous integration. For QA and project managers, talk about how QA gets to focus on writing new and insidious test cases that really break your expectations, instead of regression testing. A lot of QA teams end up spending half or three quarters of their time during a sprint doing nothing but regression tests, but that's not where they're at their best. The best QA teams are great at coming up with those oddball ways that users are going to break your software, and when they have that freedom, you're really helping them elevate your testing to the next level. Also talk about how, once you have continuous integration set up, you'll have metrics you can share about code quality. You can use tools like PHP_CodeSniffer and PhpMetrics to give you at least some baselines to determine whether code is getting better or worse over time. And for PMs and product owners, talk about how, with a good continuous integration workflow, code can be mothballed and revitalized. You can pull a team off of a project, and if everything is passing, you know you can assign a totally different team back to that project six months or a year later and know that it was left in a good state. Maybe all you need to do is update Drupal, update the libraries you're using, then run your tests and make sure everything still works. It makes those minor releases, especially with Drupal 8, a lot easier to manage.
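As a sketch of the kind of CI job that produces those code-quality baselines, here is a hypothetical pipeline fragment. It assumes a CircleCI-style config, that `drupal/coder` (which provides the Drupal standard for PHP_CodeSniffer) is a Composer dev dependency, and that custom code lives in `web/modules/custom`; the image name and paths are placeholders.

```yaml
# Hypothetical CI job: fail the build on Drupal coding-standard violations.
# phpcs exits non-zero when it finds any, so the CI tool marks the job failed.
version: 2
jobs:
  code_quality:
    docker:
      - image: php:7.2-cli
    steps:
      - checkout
      - run: composer install --no-interaction
      - run: vendor/bin/phpcs --standard=Drupal,DrupalPractice web/modules/custom
```

Running the same check on every pull request is what turns "is the code getting better or worse?" from an opinion into a trend line you can show to PMs.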
Also talk about how the monthly costs are small and flexible. Even $250 a month for a team of five or ten people is not that much compared to the human costs you're already paying, and because most of the services, except for, say, Jenkins, bill month to month, you literally just drag a slider for how much you want to spend. If budgets are tight and you're not actively working on a project, just drop it back down to the free tier and wait until you need to put more back into it. So, if this is something that has really excited you and you're now ready to bring it in, take a look at these slides. I've linked to them from the session page, and these are all links you can click on. We have installers both for Drupal 8 websites, meaning "I'm building a single website and I want to run tests against it," and a separate installer for individual Drupal 8 modules, because the setup steps are a little different: you have to get Drupal, install a fresh database for a lot of the tests, and so on. Also, my colleague Juampy, who I gave this session with at DrupalCon, has been writing a really great series on CI tools for Drupal 8 projects, so be sure to check that out. I think we're pretty much at time, but I can take a few questions if anyone has any.

[Audience] Have you had much chance to work with GitLab CI, and how does it compare to the other ones?
Yes. So, the question was whether we've had a chance to work with GitLab CI, and the answer is: sort of. One of the things we found hard about a lot of the Docker-based CI tools, and this is an issue both with Jenkins and with GitLab, is that you can't have circular dependencies between services. With Behat in particular, Behat runs on your PHP container, then it goes out to Selenium, where the web browser is, and then Selenium runs Chrome, which needs to go back to the PHP container. There's no way in GitLab to describe that relationship, so Selenium basically can't talk to your web server. That's where we stopped. We've had other teams who are not using Behat use GitLab, and it's fine, but not to the nth degree as far as all the different types of tests; you can really just run unit tests, basically.

All right, well, I'll be up here, so if anyone has any questions, feel free to come on up. Thank you so much for coming out, and I'm looking forward to the rest of the day. Thank you.
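To make the circular-dependency problem concrete, here is a hypothetical `.gitlab-ci.yml` fragment, a sketch under the assumptions that the site is served inside the job container and Behat drives Selenium; the image, port, and paths are placeholders.

```yaml
# Hypothetical GitLab CI job illustrating the one-way service link.
# The job container CAN reach the selenium service by its alias, but
# Chrome inside that service has NO route back to a web server running
# inside the job container, so JavaScript scenarios cannot load the site.
behat:
  image: php:7.2-cli
  services:
    - selenium/standalone-chrome   # reachable from the job by its service alias
  script:
    - php -S 0.0.0.0:8888 -t web & # site served inside the job container
    - vendor/bin/behat             # Chrome cannot call back to port 8888 here
```

Jobs whose tests run entirely inside the one container, like unit tests, don't hit this limitation, which is why teams not using Behat get along fine with GitLab CI.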