All right. Well, it looks like it's about a quarter after two, so we might as well get going. Is this thing on? Can you guys hear me? It is. Perfect. All right. So we're going to talk about a little project we've been working on that we call Tugboat. The basic idea is that it's kind of a mix of continuous integration and continuous deployment. It's basically a tool that helps us be more efficient and build sites faster and more easily. And obviously, we'll get into details as we go. So just to start things off, my name is Blake Hall. I'm a senior developer at Lullabot. Lullabot, for those of you that might not know, is a design, development, strategy, and consulting company. We simultaneously try to kick ass and have fun, which is not always the easiest thing to do, but we're making an effort all the time anyway. This is my colleague, James Sansbury. He's a senior architect and development manager at Lullabot. And we've been working there for four, four and a half years, something like that. So Tugboat: where did Tugboat come from? What is it? Why does it have a goofy name? It's basically, like I said before, something that fills the gap between a traditional continuous integration tool like Travis and a task runner like Jenkins that you can use to automate things on your behalf. It fits somewhere oddly in between both of those. It doesn't really replace them, but it's not exactly the same either. It initially started life inside of Lullabot as something called the GitHub pull request builder. But we have this company policy that any of our little side projects have to have good names that would make cool t-shirts, and "GitHub pull request builder" doesn't really fit that recipe. It's a mouthful. It would be really hard to come up with a logo for. Since it is based pretty much around GitHub and the pull request workflow, we landed on Tugboat, because what's more fun than nerds on boats? So that's where Tugboat came from. 
Like I said, it's basically a build tool. It spins up automated Drupal environments as you go through the development cycle. The primary effect we've noticed is that unlike some of the other build tools we've seen, Tugboat really works to increase the transparency and visibility that non-technical folks have into ongoing development. So while we're working away on features, project managers, folks in marketing, people that wouldn't actually set up their own local Drupal environment for whatever reason, because it might be a pain in the neck, they have a way of staying on top of and seeing what's going on with a project when we use Tugboat. So, oops, sorry about that, Reveal and the down arrow. The biggest thing we've noticed is that this is sort of the primary interface for Travis CI if you're a non-technical person. It's essentially functionally equivalent to the thumbs-up emoji in GitHub. It's either a thumbs up or a thumbs down. Your build passes or it doesn't. That's the level of visibility you're getting with something like Travis. It's definitely great because you know whether you broke the build or not, but it doesn't actually provide anything tangible you can interact with aside from "this is good" or "this is bad." So there's no kind of confidence that can be inspired there. The other UI that you can work with is Jenkins. That's a pretty popular alternative. I would guess several of you, if not most of you in the room, are using Jenkins in one form or another. This isn't a UI you would want to hand to a non-technical person either. It's definitely one of those interfaces that was written by developers for developers, despite the Bob Ross happy clouds and little storm icon stuff that's going on. Really useful, really, really powerful, but again, not necessarily at the level of user-friendliness that we were looking for. So the rest of the talk: where are we going? What are we going to cover and talk about? 
We're going to cover the why, sort of how this came about. James was really the one that got it started and saw the initial need for it with the Tizen project, so we'll talk a little bit about that. We'll talk about how it's been used to help the MSNBC team build a really, really large site on a fast deadline. We'll talk about how we use it internally on lullabot.com and how it's helped us there a little bit. And then probably more importantly, how you guys can take some of these ideas to build a tool like this of your own, if you don't want to get in contact and actually work on the Tugboat project itself with us. And then we're going to close with some ideas about other pieces we can plug into this in the future to add functionality and augment what Tugboat already does, and other ways that we can make development more transparent. So before we do that, let's talk a little bit about history and where we've been as far as Drupal development and deployment goes. I assume, since this is the DevOps track, most of you guys have separate environments for your sites, right? You're not logging into production and hand-editing PHP, although we probably all did that at one point or another, right? So the first step is having siloed environments. And typically you've got code that's going up this chart from development to staging to production. There are various ways in Drupal you can accommodate that. But then you have data coming down the tree: user-generated stuff like the actual content on the site, menu items, files that get uploaded, things like that. Drupal's been really good historically at handling pushing configuration up the stack. Going the other direction is a lot harder, and pushing content up the stack is also really hard. The content deployment issue is something that we're totally going to not talk about and gloss over. We'll leave that for a core conversation. 
It's a really tough problem that we don't need to solve. The essential pain that Tugboat is meant to solve is how the QA process and peer review process work as you're pushing configuration up from development to staging and production. So that's where we'll go. And ultimately, what we found is it winds up cutting the length of time it takes for new features to get merged and pushed into actual production code. So Drupal's first big tool, at least since I've been in the community, for doing this sort of configuration management stuff was update hooks, right? We've all probably written update hooks before. They're kind of a pain in the ass. It's a slow process to work with. I don't think anybody ever gets one right on the first try unless it's really easy. So then you wind up dumping your database, re-importing it, rerunning it. Batch API is very handy and useful, but I find myself having to look at the documentation every time I try to use it. It's not the most elegant way we can push configuration around. And if somebody comes in on the production site and makes some changes that you didn't get in an update hook, you're out of sync all of a sudden, and there's no way to replicate that and keep a record of what's gone on. So the tool that most of us completely abuse to do this now is the Features module. Features is really great at getting things like Views or Panels config into code, into version control, something that you can get a diff from to look at and push through this process. I remember sitting in a room at DrupalCon in D.C. when the Development Seed folks first introduced this. And I remember at that time I wasn't working at a consultancy or an agency. I was just one guy working on a site. And for me, the update process was the biggest pain in the neck because there was no really clean way to manage this stuff. 
And I remember sitting in that first talk before the first beta came out thinking, I'm going to abuse the hell out of this, because this is really going to save me a lot of time and make my life easier. But we've all been doing that for probably, what, five, six years now. And there are a lot of pain points using Features as a deployment tool, because that's not actually what it was intended for. It may not be a tool we necessarily like, but it's the tool we've got right now working with Drupal 7 sites. So Drupal 8 comes with the promise of the Configuration Management Initiative. I think it's definitely a good thing that core now has an API that will support import, export, and sync. Some of the things that Features was essentially providing itself will now be part of core, so other modules will have a pretty reliable way of building on top of that. It's definitely a good thing, but fundamentally it doesn't address the complete problem of how you get code in one of your development branches up to production faster and in a more reliable way. In part because you're not only working with the code and configuration, you're also working with content, but also just because large projects get really complicated. There are lots of moving pieces, and if people don't look at the diffs for every individual part, they can sometimes have unexpected consequences pushing each other back and forth. So the big thing that Tugboat has helped us with is making our work as developers easier. It turns out that we can deploy features in a more repeatable, reliable way, which is really what automation is all about in DevOps, and it increases the visibility that non-technical folks have into our process and progress. So the big question that Tugboat is built to answer is: what would this code look like if we pushed the deploy button right now? And James will talk a little bit more about that problem space. 
So yeah, we've been talking about all the great improvements we've been making on our deployment process, right? We've been working on this for years now. We've been trying to make it easier to deploy Drupal sites, and we've created a process for that. Not everyone does it the same way, but we've been working on that process, and all of this great progress that Blake has been describing comes at a cost, right? And that cost is that now it's slow, and it's complicated, actually, to deploy these sites. When things slow down in this process, it takes longer for the people that need to see the work to actually see the work that we're trying to do, right? So here at Lullabot, we usually do one or two week sprints. We're doing, you know, agile development, one or two week sprints. And at the end of that one or two weeks, we'll do a demo of what we actually built, right? If our clients or the stakeholders or the project managers, the designers, external teams, if they can't see the work that we've been doing until that demo, until the day of that demo... How many of you guys have been in that situation where all of a sudden the designer is like, actually, you misunderstood what I meant? Or maybe the stakeholder was like, well, now that I see that working this way, we don't really want it to work that way, you know? That is what I asked for, but now that I see it in action, that's not what I want. So when you have it taking two weeks to get to that point, you know, we think that that's actually really efficient, right? Because we've come from waterfall or whatever it might be, where it's like, you get all your requirements at the beginning and then, here you go, and that's not cool. 
But when that happens here, what ends up happening is that this obviously is really, really expensive, and even more expensive when you have large teams that are working for two weeks, siloed off; then the stakeholder sees it and you have to kind of start over from scratch. So back in that Stone Age era, before we had all these nice processes, it was pretty easy to show the work to that stakeholder, right? Or the project manager. Because you were either working right on production, or you could just have them look over your shoulder or whatever and see what you were working on. Or if you had someone that was pretty technically savvy, it was simple enough at that point where they could step through and probably build it themselves and see what you were doing, right? Or see what the change was that was going to happen. But Drupal's so much more complex, the process is so much more complex now, and it's so much slower that we can't expect our stakeholders to know how to do this anymore. So traditionally, the way around this has been to create large releases, right? So now we work in these large releases, and we get those large releases into a QA environment, right? And then we have everyone look at that environment with everything all baked in together. Everything's touching everything else, and we've got to test it all together, right? And then what happens? We have one little bug in one feature that no one even really cares about, and it ends up sinking the whole ship. And now the release is blocked, and we've got all these critical things that need to get out, and it's all baked together, right? It's all stuck together, and so now everyone's scrambling. Whatever day you're doing deployments on, the day is ruined, because everyone's like, we've got to fix the release, this has to go out tomorrow because of whatever, right? Has this happened to you guys? Never. No, of course not. Rookie. Rookie mistake. 
The other interesting phenomenon is that once this release gets blocked, you still have other teams that might be working on the next release. So these releases actually start getting bigger, because, oh, we had to push this one back, so now let's roll it into the next release, and now it's even bigger, with even more bugs that might come about because these things are colliding with each other. And the way to get around all of this mess is we often start to make compromises on the process we've created in order to pay off that debt, right? The process is incurring a debt, and we're taking shortcuts around it. All of the stuff that we've been talking about, we're making trade-offs, right? Over the past however many years in Drupal development, as our projects grow in complexity and size, we've been starting to make trade-offs. So the first trade-off we're making, sorry that that looks a little small, hopefully you guys can read it: like we've been saying, we've traded stakeholder visibility for project testability. And another trade-off we've been making is process agility for project stability. Those are probably wise trade-offs, you know, in the grand scheme of things. We want the project to be stable. We want it to be testable. But by creating this consistent and testable deployment process, we've slowed things down to where it takes much longer for our stakeholders to see what we're building. And if we slow that down, that touches everything, right? Everything suffers from that process. We've created a bunch of amazing ways we can QA our Drupal sites, but that all makes the process slower, and it makes the process murkier. How many of you guys have experienced this? It worked fine on my local. Like, I just tested it on my local site and it worked completely fine. 
So again, all of the ways that we cut corners to get around this pain of how slow and complicated it is, that's why we run into this. Because guess what? Well, I skipped the DB re-import. Or I didn't sync down the files. Or I'm using cool modules like Stage File Proxy, so I don't actually have a local copy of the files. Or I didn't actually revert all of the features. That sort of stuff is what creates this problem. All of these shortcuts cause inconsistent behavior across environments. So yeah, that's it. I don't have any ideas on how to solve this, so I hope you guys have a great session. If you have any ideas, come up afterwards. No, you get it. There are a lot of problems, right? So what can we do about it? What if we could, you know, eat our cake and yet still have it? What if we could show stakeholders what we're working on right away? What if we could catch regressions before they ever happened? What if we could squash those bugs before they ever even made it to QA? And what if we could detect performance, visual, and functional regressions before that code ever even got merged into our master branch, or whatever you guys are using for your main code, the code that goes to production? So Blake mentioned before the Tizen project; this is the very problem we actually built Tugboat to solve, and it all started with the Tizen project. Tizen, if you're not familiar with it, is a mobile operating system, among other things. This is a project that is a joint effort between Intel and Samsung and the Linux Foundation. And there are three Drupal websites for it; it's a multi-site installation: tizen.org, developer.tizen.org, and source.tizen.org. And on this project we have this super technically savvy stakeholder, Mike Shaver. He's great. He knows all about Drupal. He can build Views. He can do a lot of the work that we actually do for him. 
But guess what? He doesn't have the time to do that. And he also doesn't have the time to test our work before it actually gets merged in. He doesn't have the time to go in there and download three databases and set up a local environment and check things out to make sure they're working as he expects before it gets merged in. So Tugboat started out as a set of very basic bash scripts that act as a sort of glue. We've got all these tools, right? We've got Jenkins working for us. We've got Drush working for us. We've got our configuration in Features. All of that stuff we're trying to do to make our process work. We've got all these tools; what we need is a little bit of glue, right? So up on the top of this picture is our production environment, and down at the bottom is our new little happy feature. And what these bash scripts do is nothing really fancy or anything. It just goes through the same process the code would go through in being deployed to production, but it does it in its own environment. It gives this feature, whatever's being worked on, a URL that anyone can go to, whether it's a developer that's doing peer review or Mike Shaver, who needs to look at our features. So we created these scripts to solve this problem, so that anytime we create a new pull request, it automatically detects, hey, there's a new pull request: I'm going to clone the production database, I'm going to spin up this Drupal site, and I'm going to create it at this URL. And every URL happens to have the GitHub pull request ID in it, so each site has a unique URL. And it's actually doing it across all three sites. I may be working on code that's only for the developer site, but Mike wants to make sure it didn't break the other two sites, so it actually spins up all three. Are any of you guys familiar with simplytest.me? Raise your hand if you're familiar with that. 
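The glue described above can be sketched roughly like this. Everything in this sketch, the hostnames, repository, paths, and Drush alias names, is an illustrative assumption, not the actual Tugboat scripts:

```shell
#!/bin/bash
# Rough sketch of per-pull-request build glue. All names here
# (example.com, the Drush aliases, the builds directory) are hypothetical.

# Derive a unique hostname from the GitHub pull request ID,
# e.g. PR #42 on the developer site -> pr42.developer.example.com
pr_hostname() {
  local pr_id="$1" site="$2"
  echo "pr${pr_id}.${site}.example.com"
}

# Build one throwaway environment for a pull request.
build_pr_environment() {
  local pr_id="$1" site="$2"
  local host docroot
  host="$(pr_hostname "$pr_id" "$site")"
  docroot="/var/www/builds/${host}"

  # 1. Fetch the PR branch; GitHub exposes PRs as refs/pull/<id>/head.
  git clone "git@github.com:example/${site}.git" "$docroot"
  git -C "$docroot" fetch origin "pull/${pr_id}/head:pr-${pr_id}"
  git -C "$docroot" checkout "pr-${pr_id}"

  # 2. Clone the production database into this environment.
  drush sql-sync "@${site}.prod" "@${site}.pr${pr_id}" -y

  # 3. Run the same deployment steps production would get:
  #    update hooks, feature reverts, cache clear.
  (cd "$docroot" && drush updatedb -y && drush fra -y && drush cc all)

  echo "http://${host}/"
}
```

In the Tizen setup this loop would run once per site in the multi-site, so a single pull request yields three environments, each at its own PR-numbered URL.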
So it's very similar to that, except that simplytest.me is spinning up a fresh Drupal site, and this is spinning up three Drupal sites with the production database and production files, and it's doing it automatically when a new pull request is pushed up. This saved us a tremendous amount of time on MSNBC. We had these bash scripts that were basically written for the Tizen project, but when we wrote them, we tried to keep in mind that, hey, this actually might be useful for other projects, and it turned out to be tremendously useful for MSNBC. They had a super aggressive timeline and a highly complicated site to build in that timeline. So like I said, the way we built Tugboat, it's basically building a URL for work that we're doing, and this turned out to be huge for the MSNBC project for our external team. We had to integrate with the Newsvine team, and they were plugging various features into the MSNBC site, like commenting and stuff like that. They were working on their API while we were working on creating the site, and things were changing on both ends, right? And if any of you guys have worked like that, it can be really frustrating, because you work a little bit, then you run into problems and send it over to them; then they're looking at and working with what you've written, and they run into problems, and it's back and forth. 
So what Tugboat has allowed us to do is have a single place where, okay, we can bang out some code and show it to the Newsvine team, and then they can come back and say, okay, can you tweak this? And they can also realize, okay, I see we've got a bug over here, we need to change this, because they can plug their tools into this environment. But it's not just for these external teams where it's been helpful. It's been helpful for internal projects as well. We have epics where a huge piece of work is being done by our internal team, and we want a place where anyone can look to see the status of that project. It's got to be fully baked before it gets merged in, right? So where can we go to see that work right now, at whatever state it's at? Tugboat has been great for that. Trying to think of an example: right now we're working on some polling, some live real-time polling feature, and again we're working with an external service on that, but we can also actually work as a team on this epic. There's a URL, there's a pull request for that feature, and anyone at any point can go and see how it's working. You guys remember Blake talking about hook_update_N() and the pains of writing update hooks? Tugboat has actually been great for that as well, because I can work on that update hook and I don't have to re-import the database every time. I can let Tugboat do it for me. I can make a little tweak, push it up, and then go work on something else for a little while, and when I'm done with that, I'll come back and check to see if it did what I expected it to do. I mean, there are lots of times when you're in the middle of something and you don't want to have to refresh your database. You can just quickly create a pull request for the code that you think is going to work. If it doesn't work, it's not the end of the world. You're not sending it anywhere, it's just a separate branch of code. So basically what James is 
kind of talking about is that classic XKCD comic where they're rolling around on chairs sword fighting, and when they're asked what's going on, the developers yell back, compiling! Tugboat kind of takes away your sword-fighting time in some ways, which may or may not be a good thing. We've also really found it useful just internally. Like James mentioned, it was initially written for Tizen, and we pretty much unapologetically adopted it for use on lullabot.com as well. It's kind of the typical cobbler's-children-have-no-shoes sort of scenario, I think, to be polite about it. Our redesign last year, the launch of that took a while. I mean, it was a pretty slow process. We just had a really hard time finding developers that weren't doing client work and had free time to actually work on a feature. So you would sort of get five hours one week, and then three weeks later you'd have another ten hours to try to work on a feature. And we're all kind of perfectionists, so we all wanted to see something through. But if you're not keeping your local up to date by working on a site every day, it's really kind of hard to do an adequate job peer reviewing. With these Tugboat environments, though, we actually had a URL that we could go to that was being updated every time code was pushed. So between the GitHub code review tools and using Tugboat's environment to actually click around and interact with the site, we could, in my case, I know I did this several times, just hand-edit Views exports without even having a local setup on my laptop at all, and just push code, and it builds an environment where I can make sure it worked. That was a huge time saver. The other thing is, lullabot.com's obviously not the same scale as Tizen or MSNBC in terms of complexity and traffic, but we were definitely interacting with a lot of non-technical folks, even on a small project. So people from marketing obviously have some input, somebody's cousin doesn't 
like a particular article, and you can incorporate that feedback if you have a URL to send somebody to show off work. So that was really, really valuable. The other thing is the Tizen project. We're still using Tugboat there. James has moved on to work on MSNBC and I've kind of taken over stewardship of it. Right now it's basically a development team of three: me, my colleague Brock Boland, who's also a Lullabot, and still Mike Shaver. And we're using it constantly, all the time, as a way to peer-review each other's code without getting off track. When I started working with Mike, it was about a year ago, shortly after the launch of lullabot.com, on a different Intel project, and Mike said, this has been so successful on Tizen, I want the same thing; set this up the first week of the project. So this was my first exposure to Tugboat. At the time, like James mentioned, it was basically just a bunch of glue and bash scripts, so setting it up was actually kind of a hassle. I had to install Jenkins. I had to get the right plugins. Jenkins stores all its config in XML, which changes depending on which plugin versions you have, so the Jenkins config wasn't necessarily portable. It was probably a two or three day setup process, which, over the course of a project, isn't a big deal; it saves more than that amount of time over the life of the project. But still, it's a three-day setup process. So we tried to figure out how we could cut that down, make this part of a normal project startup, and cut the ramp-up time. The first thing that occurred to me is, who doesn't like Drush? We all use Drush all the time to save a ton of interaction with the actual website. What if we could have something kind of like Drush for interacting with this Tugboat setup? So that's basically what I spent my time doing. I wrote a little Node.js-based command line wrapper around some of this bash 
script and glue stuff that we had as Tugboat as it existed then. I also found a pretty handy Jenkins API, so you can do things like trigger job runs from the command line, and export and diff the actual Jenkins XML config, so you can get the Jenkins configuration that makes up your Tugboat setup into version control the same way you would with Features. So even though I said Features is kind of the tool we're stuck with, not necessarily the tool we want, I kind of reinvented it for Jenkins in this case. The other big thing we added: prior to this, Tugboat had basically been something that only responded to GitHub pull requests. But Joe Shindelar, who works on Drupalize.Me, really wanted to use it, and they're not using GitHub internally and they're not using pull requests, so he was kind of stuck. So I tweaked the code a little bit, and we can now build arbitrary environments. If you want to pass in a commit hash, or a branch that's not quite a pull request yet, or a date and time from a particular branch, all of that kind of stuff will follow the merge path that's set up in the config file, so you can see what would happen if that arbitrary git reference, that tree-ish, were deployed to production right away. Another thing we added was a really, really, really rudimentary and basic plug-in system, so we can do things like kick off Behat test runs, or use PhantomJS, or, I think it's CasperJS in the case of MSNBC; you can do that kind of stuff from the same command line tool. You can also back up a site using this, so you can do that down-sync in one interaction, just like drush sql-sync does. And on Tizen at least, it's also used for the actual code deployment itself, because the same tool that builds these Tugboat sites can push to production as well. The most exciting thing, excuse me, that we've been working on lately is integration with resemble.js. You all might be familiar with this, but it's basically a tool that helps with visual regression 
testing, so you can regression test your CSS, which is pretty mind-blowing. Under the hood it uses PhantomJS, which is a headless WebKit browser; it's all JavaScript-based. We'll take a look at the output here. This is a screenshot from developer.tizen.org from like two days after we found out the session was accepted, and here's the screenshot from my local at the same time. So if I flip back and forth between the two, you'll notice there are a few things that have changed a little bit, but unless you've got a keen eye, it's not immediately obvious what's different. Piping those two through Resemble gives you something like this. What it's actually doing is taking in an array of URLs, so you can test multiple pages per run, and an array of resolutions, so you can test different breakpoints if you have a responsive design, and saving off JPEGs for each one of those URLs at each breakpoint. Then it takes the two, in this case from the actual live developer site and from my local that will become a pull request, and does a pixel-by-pixel comparison and produces this visual diff. This gets posted back to GitHub as a comment, or an image link in the comment, and you can quickly skim through and see, you know, oh, something's definitely wrong with this particular pull request. In this case, if you take a look at it, up near the top you can see there's the "your own application now" line. I hadn't down-synced the production database in a few days, so there was a legitimate content change; that was just me being lazy. There's a missing image a little bit off the screen that you can't see that was part of this original image, so again, cutting that corner with something like Stage File Proxy would have shown up here. And by and large, most of the diff here is that I didn't have Typekit properly set up on my local, so all the fonts are just slightly off. But the idea is that if you introduce a bug where you're working on a 
feature for, say, the photo gallery, and all of a sudden a couple blocks in the right sidebar are off a few pixels, you'll be able to pick this up really, really quickly, and in an automatic way, with something like this. So we've been talking for quite a while about the why and a little bit about the how, but let's get into the nuts and bolts of how you can do this yourself. The wrapper code that I've talked about for Tizen isn't publicly available yet. We're still working on it and figuring out where we want to take it and what we want to do with it. That part I definitely want to be a conversation, both for the Q&A and for the rest of the week; come up and find one of us and we'll figure out what we want to turn this into. But the actual glue and bubble gum and duct tape and shoelaces is out there for you to set this up on your own now, if that's something you want to do. It's all the precursor code for that first version of Tugboat we talked about; MSNBC and lullabot.com are actually still using the first version of Tugboat, and the only project using the second right now is Tizen. About a year ago, and if you go to our site you can check the blog, Jerad Bitner wrote an article that goes into this in pretty great depth. He essentially gave our session in print, in July of last year. All the resources are linked to here; he walks through the whole process. At that point it was still called the GitHub pull request builder, so if you hadn't heard of it, maybe it's because of poor marketing on our part. All the pieces are there. One of the key pieces that you can get linked to from that blog post is the Jenkins GitHub Drupal repository; this is on GitHub in the Lullabot organization. These are those shell scripts that James mentioned. There's one that takes care of cloning the site. There's one that takes care of setting up the directories the particular way that Jenkins expects them to be set up. One takes care of 
commenting back to github on the status of a build after the build's finished and then there's another one that handles cleanup so wiping the site out after somebody's kind of said yep this is good, it's been merged in it's kind of past mustard the other key piece that you need when you're setting this up, aside from installing Jenkins is the Jenkins github plugin and that's a dependency for the Jenkins pull request plugin so what this basically takes care of is listening to github unfortunately via polling which is a little bit silly for those pull request events and then grabbing the code and doing the merge on your local disk directory and then tugboat kind of takes off from there having had having had the code for the pull request checked out, tugboat handles the merge and the rest of the Drupal site build so that's kind of the status of what's out there now and what we've been working on the next big question and the thing I really want to talk about the rest of the week if folks are interested in this is where do we take this next how do we make this even easier how do we get features from development to production faster what kinds of things can we do to this to provide more value both for the people we're building sites for and for ourselves to make our lives easier so one of the first things is to take a pretty quick look at what we've built so far I kind of poked fun before at Travis's interface and Jenkins interface our interface at the moment at least in two thirds of the instances that are out there is a GitHub comment so that's kind of equally lame I guess as far as kind of UX polish goes the difference is it's accessible still and it provides more meaningful information than a Travis build passing badge would and it's not as cluttered as an entire Jenkins UI so if someone wants to know did the did the feature ticket for the photo gallery build successfully or not they can hit the GitHub issue for that feature and see the big green GitHub check mark 
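That green check mark is driven by GitHub's commit status API (`POST /repos/:owner/:repo/statuses/:sha`). Here's a rough sketch of the payload a build script might post back; the environment URL and description are invented for illustration, and this isn't necessarily how Tugboat's own script is written:

```python
import json

VALID_STATES = ("pending", "success", "failure", "error")

def build_status_payload(state, environment_url, description):
    """Build the JSON body for a GitHub commit-status update."""
    if state not in VALID_STATES:
        raise ValueError("unknown state: %s" % state)
    return json.dumps({
        "state": state,
        "target_url": environment_url,  # deep link to the built environment
        "description": description,
        "context": "tugboat",           # label shown next to the check mark
    })

# Hypothetical successful build:
payload = build_status_payload(
    "success",
    "https://pr-123.example.com",  # made-up environment URL
    "Environment built; click through to review it",
)
```

The `target_url` is the important part: it's what turns the binary pass/fail badge into something a non-technical person can actually click into.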
They don't have to go to Jenkins, know the name of the job, and find whether the last build succeeded or not, or check whatever parameters were passed to that job to make sure they're looking at the right one; it's just a bit easier when it's inline with GitHub.

This is the simplest version: it just has links to the three Tizen environments and a link to destroy the current environment that's been built. In a couple of other instances, like I mentioned before, we'll have those Resemble.js screenshots posted along in the GitHub comment, or links to other test results, whether it's Behat or PhantomJS or CasperJS; essentially anything else we've plugged into the build process for a particular Tugboat environment, we can have those artifacts available in the GitHub comment as well. GitHub's API is pretty nice when it comes to that sort of thing.

Rather than continuing down that road, the vision we have, and the thing we're building toward right now, is project-specific dashboards. We'll have one overarching screen that lists all of the different Tugboat environments for a given project, so if you're a project manager you could hop on and say, here are the four open pull request environments, with URLs for those features, that I can click into to see how things are going for each feature. We could have stats on each of those individual Tugboat environment builds: performance regressions (the Cachegrind file shows this particular pull request build is, you know, 17% slower), or here's the HAR archive for the front-end waterfall compared to production. We can do performance metrics in a way that's pretty low-friction for developers but also high-value; we don't have to keep track of those files, we can just auto-generate them and make them somewhere easily available and useful.
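The performance piece of that dashboard is, at its core, simple arithmetic over two measurements. As a sketch (the page names, timings, and 10% threshold here are all invented; only the 17%-slower idea comes from the talk):

```python
def regression_percent(production_ms, build_ms):
    """Positive means the pull request build is slower than production."""
    return 100.0 * (build_ms - production_ms) / production_ms

def flag_regressions(metrics, threshold=10.0):
    """metrics maps page name -> (production_ms, build_ms).

    Returns only the pages whose slowdown exceeds the threshold, which
    is what a dashboard would surface to a project manager.
    """
    return {page: round(regression_percent(prod, build), 1)
            for page, (prod, build) in metrics.items()
            if regression_percent(prod, build) > threshold}

# Invented numbers: the front page is 17% slower, the gallery is fine.
flags = flag_regressions({"front": (1000, 1170), "gallery": (800, 805)})
```

The measurements themselves could come from anything the build already produces, a Cachegrind total or a HAR file's load time; the point is that the comparison is cheap to automate per environment.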
The other thing we want to do is get some of the Jenkins-type UI into this, to let project managers kick off a new build if they know some extra code has been pushed.

We've got two more administrative slides and then we want to open it up for questions and start that dialogue. The first is, if you're interested in any of this kind of stuff, we're hiring: lullabot.com/jobs. Coincidentally, you'll see two very, very good-looking guys at the top of that page; you can go check it out. And the second is that tomorrow evening Lullabot is hosting a party at the Handlebar, 121 East Fifth Street, at 7 p.m. Hopefully we'll see you guys there and we can talk more about this DevOps process stuff if we can't catch you beforehand. So that's all we've got; otherwise we'll open the floor to questions.

"I must have missed it; what was the image comparison tool called?" Resemble.js. "Resemble.js, thank you. On the same topic, does Resemble.js provide any kind of scoring for the deviations between the images it compares, so you can automate stuff for reviews?" Yes. "That was the first one. The second one: what is the infrastructure behind all of this that you're using? It's cool to know that, yeah, there is a tool that can do this magic, but there have to be workhorses behind it that spin up these environments. What's powering it?" That's one of the things we didn't talk about, but I think it's another bullet point for where we see this going in the future. I would love it if this leveraged something like Vagrant, so we had a config file for how the staging environment (if not the production environment) gets built: local devs could use that for their own development environment, and we'd use the same Vagrant config to spin up the environments for Tugboat itself. Ben, who is sitting right in front of you, is working on that right now as well. But at the end of the day, all the instances we have running are, hardware-wise, just a VPS with Linode or something, so the resources for this are limited by how big your database is, how many environments you're going to have simultaneously, that sort of stuff. Disk space is pretty cheap now, so it's not a huge concern.

"So are these multiple environments just huge multisites for the project, or how does it work?" So, yeah. Is this on? No, it's a wildcard virtual host on Apache, so it's pretty basic. Like I was saying before, each GitHub pull request has a number, so you can think of the wildcard as being star-dot-whatever, and it maps that to a directory. It's not multisite; each one is in its own directory, encapsulated that way. "So you basically pull the sources, you pull the production database, you pull production files, and you deploy all of that in that subdirectory?" Yes, and for large sites where the files directory might be 30 gigs or something like that, we're actually using hard-linked files so that we're not just blowing out our disk space if there are 20 different environments.

"I see. How long does it take to deploy a site or a pull request?" For MSNBC, the database is currently about 3 gigs and growing (it's a news site, the content is just streaming in every day, all the time), so it's probably bigger than that since the last time I looked at it. It did get to be slow because of that: it took about 30 minutes for the whole thing to finish, and that's just the Drupal deployment; then we've got to run our Casper tests, which take another 12 minutes, so start to finish that's 45 minutes. What we ended up doing: first of all, Ben helped us switch that Linode to SSDs, which sped things up tremendously in that database import, and secondly, we're doing what I'm calling a hot-spare database.
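The hard-link trick mentioned a moment ago is what `cp -al` does on the command line. A minimal Python sketch of the same idea, using throwaway temp directories (the `pr-123` directory name is invented):

```python
import os
import tempfile

def hardlink_tree(src, dst):
    """Mirror src into dst using hard links instead of copies, so twenty
    environments sharing a 30 GB files directory cost almost no extra disk."""
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = dst if rel == "." else os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            os.link(os.path.join(root, name), os.path.join(target_dir, name))

# Tiny demonstration with throwaway directories:
src = tempfile.mkdtemp()
with open(os.path.join(src, "logo.png"), "w") as f:
    f.write("pretend image bytes")
dst = os.path.join(tempfile.mkdtemp(), "pr-123")  # one per environment
hardlink_tree(src, dst)
```

Both paths now point at the same data on disk (same inode), so the second copy is nearly free; this works well when environments mostly add new files rather than rewriting existing ones in place.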
The idea is we import the database before we actually need it, and then all we're doing is renaming it to be the one we need. That brought the time down to five minutes, actually four minutes, to spin up a new Drupal environment. So from pushing up the pull request to having an environment you could click into was four minutes; really it's just the time to revert features, run database updates, that sort of thing.

"Thank you." A corollary to that four minutes: the way the GitHub pull request builder works, the Jenkins plugin is actually polling GitHub looking for pull requests instead of using something like service hooks. So if we were using a different method to detect the pull request (like, say, ditching Jenkins, which I'm actually really in favor of), we could get that essentially instantly and start building without the polling lag time, too.

"Do you have any idea if it would work with GitLab? We unfortunately use that at work." There is work in progress to support different back ends. The shell scripts that are up there now are pretty tied to the GitHub pull request builder plugin, but the new version we've been working on is essentially back-end agnostic. The Drupalize.Me team is using a different tool as well, something that's not GitHub, and they're very close to being able to use this. That pluggable back end is essentially only used for how the merge process works, how it figures out what code you've pushed needs to merge, and how it gets into production, so that's certainly something we could conceivably support.

The other question was, is there a website or a place to find documentation, or maybe status updates on how things are going?
Hopefully; we've got one that we need to remove some lorem ipsum from. We're crossing our fingers we'll have it live this week, so keep an eye on the Twitter account. Otherwise, that blog post I mentioned before has a ton of the documentation so far, and I'll get a link to the slides up on the DrupalCon site too. You can also email tugboat@lullabot.com if you have any specific questions or ideas or anything.

"So this is really cool. I'm wondering, can you say a little bit more about where the client sees all these spun-up Drupal sites? You're generating a lot of them; they're not monitoring them, and they're not getting automatic emails, I'm presuming, or maybe they are. Are you sending them emails saying, I want you to look at this version now, it's ready for you to look at?" This is one of the big problems we're trying to address with the dashboard we've been working on the last couple of weeks; we totally need a way to provide more visibility for that sort of thing. Right now it's basically in the GitHub issue queue, so if there's a particular ticket that's really high-touch and high-profile at a given time, there's a single comment from the Tugboat bot user. But we can definitely make that interaction a lot richer for the non-technical person than it is right now, and that is item number one on our radar that we're actively working to improve. So if you have ideas of things you would like to see, definitely let's chat; I would love to hear your ideas.

"Hopefully this is quick: what is the length of time, or the amount of testing, that's happening on these environments? And with that, are you doing any type of sanitization of the live data? In the case of MSNBC it probably doesn't apply, but as you guys know, the site we work on has a very large user base and we'd have to sanitize that data. How would that affect this type of script?" That's a really good question. Baked into this is an extra settings.php template file that gets appended to the environment that gets built. So in Tizen's case, they use LDAP for all their authentication, and they're not using the live LDAP server on staging or any of these Tugboat sites that get spun up; they're using a separate staging server. That sits in a little separate settings file, just config values that get stuck onto each one of these environments. I'm also abusing this on my local, so I can run Tugboat environments locally to do peer review, because it's easier for me than having to trash directories and change things, and it also lets me test Tugboat development. As part of that, on my local I've changed the build script a little bit to do email sanitization, so I don't accidentally email people. "Is that affecting the performance time of the sync, or is it still pretty quick?" It's pretty minor, even if you're getting into, like, 100,000 users.

All right, I'm done. Thanks for your attention.
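As a footnote to that last answer: for a Drupal 7 site, the email sanitization can be a single UPDATE against the standard `users` table. This is a sketch of the idea, not necessarily the exact query in the build script; the replacement domain is invented:

```python
def sanitize_users_sql(safe_domain="localhost.invalid"):
    """SQL that rewrites every account's email to a unique, undeliverable
    per-uid address, so a test environment can never mail real users.
    Skips uid 0 (Drupal's anonymous user) and uid 1 (the admin account)."""
    return ("UPDATE users "
            "SET mail = CONCAT('user+', uid, '@%s') "
            "WHERE uid > 1;" % safe_domain)

sql = sanitize_users_sql()
```

Because it's one set-based UPDATE rather than a per-row loop, it stays fast even around 100,000 users, which matches the "pretty minor" cost mentioned in that answer.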