Hey, everyone. Is everyone here a front-end-related person? You're here for that, right? Cool. If you're looking for something else to do next April, April 18th and 19th, you can go to Frontend United; it's in another country, in Bristol. And there's a BoF for the Magic module on Thursday at quarter to noon, 11:45. All right. Morning, everyone. Okay, cool. For the people recording this, I'm starting now. All right, thanks everyone for coming. Glad you could make it; it's the middle of the week and the middle of the day, so I'm glad to see everyone here. If you've come for automated front-end testing, you're in the right place. Otherwise, I'm sure there's something very interesting to see somewhere, but it's not here. We are talking about automating the process of breaking things. Last year in Prague I gave a presentation called Front-end Ops, and it really was about automating the process of breaking things. This year it's a little different: we're automating the process of detecting when you break things. All right? A little bit about me; let's get this out of the way. I am a front-end developer at Four Kitchens, over in Texas in America. I am rupl, R-U-P-L; it sounds just like Drupal without the D. Four Kitchens is in Austin, where DrupalCon Austin was just a couple months ago. However, I live in Germany now, so I am the European office of Four Kitchens. We're hiring, so if anyone is looking for an awesome place to work: we're between 20 and 25 people, a small team doing cool stuff, and there's a link in the slides here. I love contributing to and presenting about open source. That's why I'm here, and I've got a list of some interesting stuff there if you want to check it out. I maintain the Modernizr Drupal module.
I very recently started working on a Web Components API for Drupal, so that's going to be hot. The FastClick module was something we did recently, and we've got another one in the works for prerender and prefetch. All good stuff. I like sharing information and talking about all this, so if you've got questions afterwards and we don't cover them during the session, talk to me in the hall or find me; I'm here the rest of the week.

So, front-end testing: why do I need front-end testing? We have an admittedly more mature set of tools on the server side for this kind of stuff. Everyone knows that if you are debugging PHP performance problems, you've got XHProf or Xdebug and all these things that give you stack traces and all sorts of good information to help you begin your detective work as you solve those problems. But in the browser, there are much more subtle problems that are very human-facing, right? A minor CSS change might throw things off on your page. Someone forgets a semicolon and breaks several libraries that you've concatenated together in your build process. Aggregates change when they don't need to and kill your cache. And then there are things like performance regressions that come from adding features to your website. Like I said, front-end development is starting to mature to the point where it has as big a community around it as some server-side languages. So tools, automation, and making sure you're held accountable for what you're doing are more important, because you can do a lot more, you can do things a lot more quickly, and you can break things more quickly, like I said before.
So we need the same kinds of tools you have on the server side: tools to test page load times, test rendering speeds in the browser, help you stick to a performance budget so you can keep a site under a certain number of kilobytes or make sure it only uses a certain number of requests and stays very fast for all of your users, verify that visual changes are, or sometimes are not, happening when you don't want something to change, and provide accountability for code changes in general. All of these things can be accomplished with the tools I'm going to show you today; we're going to walk through each one, and I've got some demos to show everyone too.

The most important thing to take away is probably that we are talking about changing your workflow, not just adding one tool or one little phase to the life cycle of a project. To deliver the best, most secure, most performant site, you have to integrate these tools into your daily routine, which is why we want to automate them so badly. Here's a quote from Ilya Grigorik. He's an engineer at Google who is an absolute genius, and he's starting to make really big contributions to HTTP and the very foundational layers of the web right now. The quote of his I always fall back on is: performance is not a checklist, it's a continuous process. You've got to check all this stuff every time you add a feature, because that's how you catch regressions quickly, and so this is something we need to think about on a daily basis. And also Max Firtman said: don't take measures without measuring them. It's a small play on words, but it's true. When you add a big feature to your site, you need to make sure you've tested it and haven't broken something else, and if you make a performance enhancement that you think is going to help, you need to make sure it's actually helping.
Sometimes things can hurt as well, because we over-engineer things in our heads. Everyone does it. So, to get this out of the way (is that a little easier to read?): yes, the slides are on GitHub. I believe they're linked on my session page; if they're not, I'll put them there immediately after this. There are also many live examples in the codebase. That's sort of a lie, but I'm using a pretty fresh clone; I would never try to run npm install on stage on conference Wi-Fi. There are examples in this repo, so you can clone my slides, go to the examples directory, and find all the code I'm showing you in this slide deck. You are free to take these code samples and use them for any purpose, commercial or non-commercial, that you wish. Happy birthday.

There are three major sections I'm going to go over today. First, functional testing, which tests actual features of your website: it does things you don't want to do by hand, like squishing your window or clicking through a menu to make sure you get to a certain place; lots of QA tasks. Then there will be performance testing, and then visual regressions.

First up is functional testing. My tool of choice here is CasperJS. If you're not familiar with CasperJS, it's a layer that sits on top of PhantomJS, which is in turn a headless WebKit instance. With PhantomJS you can do all sorts of stuff without actually having a monitor in front of you. You can render screenshots and things like that, but basically you can script browser actions with PhantomJS, and thus with Casper as well; Casper adds a testing suite on top of it. This is not a new idea. Like I said, server-side tools like this have existed for a long time; Selenium would be an excellent example of a similar tool that predates this one. However, Casper has the distinct advantage of being very familiar to people who can write jQuery.
That's what I liken it to. It's not exactly the same, but the syntax and the way you use it are similar enough that if you're already familiar with writing a little jQuery, you're going to pick Casper up in no time, so I find it superb for that reason. It's very accessible to someone who's already familiar with JavaScript, and it lets you do a lot of things. Practically the only blog post I've ever found on the web about it is on taking screenshots at multiple screen sizes, so you can definitely do that. You can also navigate a site and test a funnel, or test a set of behaviors within a web app by having the computer click through your site or use keyboard navigation; that's automating complex user actions. You can test things like content creation, transactions, or other very macro-level features of your website. You can do unit testing as well: if you've got a JavaScript object, such as the Drupal object in Drupal, you can actually look inside it with a test node and make sure the metadata you see in there is right. For instance, if you wanted to test JavaScript translation, you could make sure the object holding the translations you expect is filled in. And you can keep an eye on problematic pages: this tool can test HTTP responses, so every time you deploy, you can ping a URL and make sure it's not 500-ing or something like that.

So let's look at the first real example. We're going to test Picturefill. If you're not familiar with Picturefill, it's the library bridging the gap between not having a native picture element in the browser and having one. If you use the new picture tag, you need to be able to make sure it's working in older browsers via the Picturefill JS script. And it's important to have that working, because you want people to see images on your site. So let's take a look at this thing. Okay. You know what?
Actually, I'm going to look at the code first. Like I said, all of these come from the examples folder. When I started using Casper, a year or so ago, I started looking around for examples, and I would find little code snippets but never any thoroughly documented examples that explained why and what they were doing and where you could find more information. So I went ahead and made that. All of this has an explanation for every line of code, plus links to the Casper docs. Feel free to read through it at your leisure, but I'm going to step through it fairly quickly without explaining it in detail.

So this is the first test, and it tests Picturefill. This number five here just says I'm going to execute five tests during this test suite. Then, in true JavaScript fashion, you pass it a callback, which is 400 lines of JavaScript. Why not? And that's your test. It begins by opening a URL. I'm going to go ahead and open that up, and we're going to compare how this URL behaves against our expectations. This is the canonical Picturefill demo. You can see that it's loaded the extra-large JPEG here on the screen, and I'll bump the font size up so you can read it. This is an example of the source code we're going to test against. The source code itself renders this image here, which you can see change as I bump the font size up, so you know it's working. We're going to verify that it's working in code. First, let's just observe a couple of things here. There's a picture tag on the page. Inside the picture tag there are two source tags. And finally, there is an img with a srcset attribute. The way Picturefill works is it reads those source tags and then injects the new source into the img tag. Fairly simple, but that's what we're going for here.
And you can see, as I change font sizes, that it actually responds, because the width in ems is changing; responsive web design, yay. So we've got this page and we want to make sure it's working. So... no, I'm giving away the... here we are. Casper's going to open up the URL we just saw, and it goes and looks for those tags we were looking at. This is a CSS selector; any selector will do. We're being very general here, because there's only one on the page, so we're going to test for this picture tag. Then this is another CSS selector: source elements inside picture elements. We're going to assert that there are two of those in there, which is this number two. And then we're going to do something that everyone does all the time, which is resize the browser viewport, right? You've tested your responsive design by grabbing the window and going: oh, yeah. Oh, perfect, perfect. I have tested every device in the world. I'm done now. Yeah.

Actually, it's worth noting, I'm joking, but at the same time I should mention: because Casper is a specific build of Phantom, a specific build of WebKit, excuse me, you are only testing one engine, one version of one engine. However, being able to automate this and make sure it's not broken somewhere is better than not being able to do it at all. And there is another project called SlimerJS, which is the PhantomJS of Gecko, for Firefox. So we've got two engines down. And then also... Morten, are you in here still? No. Okay. Kenneth, what's his name, Kenneth Auchenberg? Yeah, there's this guy, Kenneth Auchenberg, who started the RemoteDebug initiative: basically, the browsers are starting to work towards a common API for dev tools and engine interaction and things like that.
So one day, in the bright utopian future, you will be able to write a Casper script and have it run on all of the engines that participate in that protocol, which we hope is all of them, right? You're testing one specific WebKit right now with all of these scripts, but in the future, hopefully, it'll be much more generic and you'll be able to run them everywhere. Anyway, I digress. We're now changing the viewport. I'm not going to full-screen my computer, but you just saw, when I changed the font size, that below a certain value, which happens to be 800 pixels wide, it shows this medium JPEG. When it jumps above that, it shows large. And when it jumps above 1,000 pixels wide, it shows the extra-large. That's the behavior we're trying to test here. So once we change the viewport, we go into another block. There's this thing called casper.then(); it lets a bunch of stuff react to whatever you've done in Casper so far. JavaScript is single-threaded, so if you just keep on trucking, you're not going to give the browser any time to do anything. So after we resize the window, we go and look for that picture img, right? Then we read its src attribute and make sure it says medium.jpg at this very small resolution. We compare the file name we find to the string I hard-coded into the document, and hopefully we find it and all is well. Then we resize the browser again. 960 is between 800 and 1,000; I picked a completely arbitrary number, but it is a familiar one. We run the same test and look for large.jpg. Finally, you put the browser width somewhere above 1,000 pixels and test for extralarge.jpg as well. So this test is pretty simple. We checked for picture. We checked for a number of source tags inside the picture tag to make sure the markup was right. Then we checked for those three file names.
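For reference, the whole flow I just described looks roughly like this as a CasperJS script. This is a sketch, not the exact file from my slides: the demo URL and the image file names are assumed from the canonical Picturefill demo, and the breakpoints are the ones we just discussed.

```javascript
// Sketch of the Picturefill test described above. Run with: casperjs test picturefill.js
// Assumed: the canonical Picturefill demo URL and its file names (medium.jpg, etc.).

// Helper, evaluated in the page: pull the file name out of the img's current src.
function currentImageFile() {
  return document.querySelector('picture img').getAttribute('src').split('/').pop();
}

casper.test.begin('Picturefill swaps images by viewport width', 5, function suite(test) {
  casper.start('http://scottjehl.github.io/picturefill/', function () {
    // The markup we expect: one <picture>, two <source> tags inside it.
    test.assertExists('picture', 'Picture element found');
    test.assertElementCount('picture source', 2, 'Two source elements inside picture');
  });

  // Below 800px wide, the polyfill should inject medium.jpg.
  casper.then(function () { this.viewport(480, 320); });
  casper.then(function () {
    test.assertEvalEquals(currentImageFile, 'medium.jpg', 'medium.jpg below 800px');
  });

  // Between 800 and 1000px: large.jpg. 960 is arbitrary but familiar.
  casper.then(function () { this.viewport(960, 600); });
  casper.then(function () {
    test.assertEvalEquals(currentImageFile, 'large.jpg', 'large.jpg between 800 and 1000px');
  });

  // Above 1000px: extralarge.jpg.
  casper.then(function () { this.viewport(1200, 800); });
  casper.then(function () {
    test.assertEvalEquals(currentImageFile, 'extralarge.jpg', 'extralarge.jpg above 1000px');
  });

  casper.run(function () { test.done(); });
});
```

Note that `casper` and `test` are provided by the CasperJS runtime, so this runs under the casperjs binary, not plain Node.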
We checked to see whether it was displaying the image we wanted at three different resolutions. So I'm going to go ahead and run this Picturefill test now, and we're going to see the result. And we're done, that quick. Super easy. Like I said: picture element found, it found two elements inside, and then it found all three of the files it wanted to see. I'm just going to put a typo in here to show you what it looks like when something breaks, too. All right, so that didn't go so well. But you can see here the subject, which is what my code actually found, extralarge.jpg, versus what I changed the expected string to: an extra-large with about seven A's in the name. A little more Dutch. So yeah, that's Picturefill. Note that this does not test the native picture element behavior; it tests the JavaScript polyfill. You would have to do something different to make sure you're testing a picture-enabled browser. None of them are technically out yet, but we're getting really, really close, like two or maybe even only one week away from the next Chrome release, and Firefox will be in sync with them as well. Cool. Do we have any questions about the Picturefill script in general? Okay, cool. Oh yeah, I've got a blog post and a screencast, so if you want to see this again later, visit our blog. There's a link in the slides, but it's called Fourword, F-O-U-R-W-O-R-D: fourword.fourkitchens.com. Yeah, cool.

Next, we're going to test an author workflow in Drupal. This is much more complex than just testing a front-end-facing component, right? We're actually interacting with the server on this one. You've got to be able to submit things to the server, fill out forms, log into the site to begin with, and then check that your data is all there. So that's what we're going to do. I've got this local Drupal install here, and we're going to go to Content. I'll show you that there's no funny business: no content available.
In this script, I set up a couple of variables at the beginning, because it's just JavaScript; you can write whatever variables you want, and I find it much saner to store anything you're going to reuse in a variable. Say you've got a string that adds content to a node, which is exactly what we're going to do, and then you test for that string later. Maybe you change the first one a month after you've created the test but don't change the test itself, and you're going to break things. So I like to store everything in variables. Let's walk through some very simple configuration here. Number one, this is my local Drupal install. Then I've got the classic local dev environment credentials: admin/admin. These right here are selectors, the selectors Drupal uses to present this form to me; if you change them for whatever reason, you would have to put new values here. The node content is the exact same setup. This is the form's name field, and then this one is pretty obvious because it's Drupal, so very straightforward: body[und][0][value]. That's where you store all your content in Drupal. And we're going to say: hello, DrupalCon Amsterdam. This content was added by CasperJS.

That's all the configuration. We're going to begin the test now. There are eight tests in this one. Like I said, we load up the host we're testing. Then you fill in a form by selecting it. I suppose I could have put the selector for the form up there with the rest, but I didn't. Oh, well. Then you pass it the JSON-looking configuration I created before, and this true tells it to submit the form immediately. Then we log a comment, because this is going to take a second on a remote environment you might be testing, and that gives the user some feedback.
If you're watching the console output live, you're like: oh, okay, it's going to take a moment. After logging in, we check that the status code is 200. If it were 403, for instance, that's access denied. You can actually test whether you've got permissioned areas or permissioned content within your Drupal site: you could test a series of URLs and make sure they are 403 for an unauthenticated user, for instance. Or if you have a migration and you need to create a redirect strategy and you wanted to test a sample thousand URLs, you could make sure that when you ping a URL, the code is 301 instead, meaning the server is redirecting me somewhere. After that, we look for the classic logged-in class on the body. It says: hey, it told me everything was all right, and I found the class that says I'm logged in. Then we click the menu. When I clicked up here, that is the exact link Casper is going to click, just because I wanted to. Now, as you can see, the overlay is activated; when we look at the URL, it actually has the overlay fragment and all that stuff. After clicking that, you can check and make sure you've gone to the place you intended to go. You can do this with regular navigation, or with something like the Drupal overlay, to test a web app that has multiple states. Whoops, sorry. After that, we're going to add a node to the page. We just go to the plain node/add page URL. Again, nothing broke. Surprise. Then we fill out another form here, which is the node add form. We found it, do another Casper fill, which just fills in the form with that configuration we had up here, right there. Then we save the new node, and it redirects us somewhere. Then we check the title to make sure the title we submitted is the title of the page it redirected us to. It says: hey, our custom title was found.
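Put together, the log-in-and-create-a-node flow I just walked through looks something like this. Treat it as a sketch rather than the exact script from my repo: the host and credentials are placeholders, and the form IDs and field names are the vanilla Drupal 7 defaults.

```javascript
// Sketch of the Drupal author workflow test. Run with: casperjs test author-workflow.js
// Assumed: a vanilla local Drupal 7 install, default form IDs, admin/admin credentials.
var host = 'http://localhost/drupal';
var login = { name: 'admin', pass: 'admin' };          // classic dev credentials
var nodeTitle = 'Hello DrupalCon Amsterdam';
var content = {
  title: nodeTitle,
  'body[und][0][value]': 'This content was added by CasperJS.'
};

casper.test.begin('Author can log in and create a node', 4, function suite(test) {
  // Log in by filling the user login form; the final `true` submits it immediately.
  casper.start(host + '/user', function () {
    this.fill('#user-login', login, true);
  });

  casper.then(function () {
    this.echo('Logging in; this may take a moment on a remote environment...');
    test.assertHttpStatus(200, 'Login request returned 200');
    test.assertExists('body.logged-in', 'Classic logged-in body class found');
  });

  // Create a new Basic page node.
  casper.thenOpen(host + '/node/add/page', function () {
    this.fill('#page-node-form', content, true);
  });

  // Drupal redirects to the new node; check the title and the rendered body text.
  casper.then(function () {
    test.assertTitleMatch(new RegExp(nodeTitle), 'Custom title found after save');
    test.assertTextExists('added by CasperJS', 'Body text rendered on the node page');
  });

  casper.run(function () { test.done(); });
});
```

Like the previous sketch, this needs the casperjs runtime, which provides `casper` and `test`.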
Then, because I know that jQuery exists on a Drupal install, you can use jQuery in these little evaluations. assertEvalEquals is exactly like typing into the JavaScript console on your laptop. If I go here and type jQuery.fn.jquery, it tells me I'm using the latest and greatest jQuery 1.4.4. You can test arbitrary JavaScript code in these assertEvalEquals calls. What I'm doing is, after we load the new page, we go and look for the content that was created. Because we know how Drupal operates, we can use something very specific, like a node-page content paragraph selector; I just know it's going to be there on a vanilla install using Bartik, or whatever this theme is. Finally, we call done. Let's see this thing in action. This will be pretty quick because it's a local machine, but there we go: it just did everything I talked about. The authentication was successful. The class was found. The overlay link: it found that the overlay had been opened properly and was functioning as expected. It went and added a new page, saved the node, and then found that all the text was there on the page. Now I'm going to go back here and refresh, and you can see our shiny new node is there. Casper has created this for us. There you have it.

You can do all sorts of things with this. Because I just used admin/admin, it's a very generic test, but you could, like I said, test and make sure that specific roles are working: you are expecting to see these four menu items added when an editor has logged into your website. You can go and test all of these things and have a whole set of scripts that runs to make sure you haven't broken anything. You might find that you just need to revert a feature on your environment or something like that to get it working, but it's nice to have a robot do this instead of having a person go in and click everything and figure it out for you. So that's Casper. Does anybody have any questions in general? Yes. Visual problems?
Okay. The question was: can it detect visual changes? Yes and no. With Casper in particular, you can test the CSS values of something. So if you did want to set up some excruciatingly detailed CSS unit test for your nav or something, you could ask: is my computed style this exact hex value? I wouldn't recommend it. We're going to go over a tool at the end that shows you actual visual regressions. But it can do that kind of thing, and you can test and make sure that when you click on a button, a drop-down menu is now visible instead of invisible, or something like that. You can test anything that is code-related, basically. Yes? Sure. The question was: is there tear-down? You could just do the opposite thing. Say you want to tear down the test content you set up: you could have a separate piece of the script that basically starts a new session, so you could log out, then re-log in as some sort of protected admin user, and then delete all of your content if the original user doesn't have permission to do that. Anything you can do yourself using the interface of the website, you can do with Casper. So yes, tear-down is possible, but you have to write the tear-down yourself; you can't just say, reverse what I did. Good question. Oh, that's a lie; I put that slide up for Frontend United, a month ago. Well, the code is in the slides, so you saw it working. So yeah, this is the link I was talking about. I have a whole blog series about CasperJS, along with some other people at Four Kitchens who are writing articles as well. So check it for updates, and if you're into this stuff, it is there for you.

So, performance testing. This is the thing that interests me the most; I fancy myself some sort of performance engineer when I get the chance. Performance testing comes in a lot of flavors. We're just going to cover the few that I think are the most common needs for most website builders today.
There are all sorts of issues that are much more detailed and granular. I've got some links at the end that go into, say, monitoring 800 instances of some app you're running, but that's not what we're going to cover today. The first thing is automating PageSpeed. PageSpeed Insights is a service from Google, very similar to YSlow, that boils all these factors down and gives your site an index, a speed score. If you were at my colleague Ian's talk yesterday, it was about getting content to a phone in 1,000 milliseconds or less. And when you've got a screaming fast site, they give you 100, or 98 or something like that, especially if you're using Google Analytics. That's a troll if I've ever heard one. But they boil it down to this one number and tell you what to fix, ordered by bang for buck, as we say in English. Testing sites with PageSpeed can be automated. Here, I'll click on this and show you one example of what PageSpeed normally does. Whoops. Oh, yeah. I should put a URL in there instead of just nothing. There we go. Brutal on the Wi-Fi. So this thing is basically loading the site, analyzing it remotely on a server, and then returning the results. Ian is a badass: he's got 99 out of 100 here. Don't feel bad if that's not your score when you run this for the first time on your site; it's just telling you how you can fix it. My own site, because the cobbler's children never have shoes, is at like 74 or something. So this is what it does. There's almost nothing to look at here, but it gives you these little recommendations and says what to fix. Now I'm going to do the same thing on the command line, and you can basically run these on your feature branches before merging and so on, to see whether you've caused a regression. So I'm just running the word grunt, and off it goes. If you're not familiar with Grunt, it's a task runner.
I've got links in my session description if you are not familiar with any of these underlying tools. So I tested gruntjs.com on this one. Same thing: it got a score of 92. Solid. And then it gives you a bunch of output here. You might have a performance budget that says: I only ever want 25 resources requested when I fetch my HTML, or something like that, and this can help you keep a lid on that. And then there's an interesting section. This is the part I meant about easy, low-hanging-fruit wins. It orders these recommendations, which is what these are; the second group is recommendations on how you can speed the site up, and the number in the right column is the relative gain to be had by implementing said recommendation. The one here that is slightly high is a six. You will see some crazy spreads of numbers here. Sometimes it'll say 30, and if you see something that says 30, fix that thing first. Here you can see a six, a 1.5, and a 0.08. Relatively speaking, you base all this off the highest number: 1.5 goes into six four times, so you're going to get four times as much performance benefit out of fixing the thing labeled six first. Like I said, if you end up seeing a 20 and a 30 in your list and the rest are twos, do the 20 and the 30. But this is automating PageSpeed. You can run this against feature branches, run it every time you deploy, run it on production to make sure nothing has changed too dramatically. You have the power to run this very quickly from the command line, and it just takes a tiny bit of configuration. It does rely on an outside Google server, so we're going to look at some other tools that are local, where you control where they're coming from and all of that. Oh yeah, you need an API key, but it's free. So, related to... no, I just showed you that. Sorry. Phantomas is the next tool.
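For the curious, the Grunt configuration behind a run like this is small. Here's a sketch of a Gruntfile for grunt-pagespeed; the URL is a placeholder, the API key is the free one you request from Google, and you should double-check the option names against the plugin's README.

```javascript
// Gruntfile.js sketch for running PageSpeed Insights from the command line.
// The URL and API key are placeholders; request a free key from Google.
module.exports = function (grunt) {
  grunt.initConfig({
    pagespeed: {
      options: {
        key: 'YOUR_GOOGLE_API_KEY',
        nokey: false
      },
      production: {
        options: {
          url: 'http://example.com/',
          strategy: 'desktop',  // or 'mobile'
          threshold: 80         // fail the task if the score drops below this
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-pagespeed');
  grunt.registerTask('default', ['pagespeed']);
};
```

With this in place, "just running the word grunt" is exactly what triggers the test.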
This is a really awesome one. It's an extremely granular version of what we just looked at. So if you don't want the macro-level simplified instructions and you want the raw data, this is what you want: Phantomas. As the name might indicate, it is based on PhantomJS, and it is a very extensive tool. I will run one little thing here just to show you. Whoops. So this thing's going to churn for a second. Oh, no. Someone doesn't remember how to use it. Sorry, everyone. There we go. There's a lot of data, like I said. It'll give you the total number of requests, split them up, filter them, and give you all this information. You've got your time to first byte. You've got the amount of data sent down the wire, how much of it was Ajax, the types of requests: HTML, CSS, JS, images, web fonts. It's all here. I didn't put any filters on, so I just output all of it. It even does things like reading your jQuery to see how many Sizzle selectors you used, because sometimes you might find that things initiate slowly on a page, and with this data you can dig in and see what's going on. Then it starts telling you things like: hey, all of these images don't have caching enabled. Lots of good raw data here. And this is the basis for a couple of different tools we're going to go over, because it exposes so much data that you can filter it down and do useful things with it. Here are some crazy big selectors that are labeled not recommended by some metric; I'm not really sure which one. So then... no, no, no. There we go. The next thing you can do is run assertions against this. I'm going to clear the screen and run an assertion that says: I only want 20 requests, and that's my limit. So it does the same thing. When I go back up to the top, you'll notice one difference: there are 28 requests on this page, so the assertion of 20 failed.
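That assertion is just a command-line flag. Assuming Phantomas is installed globally via npm, the run looks something like this (the URL is a placeholder):

```shell
# Install once: npm install -g phantomas
# Run Phantomas and assert a budget of at most 20 requests.
phantomas http://example.com/ --assert-requests=20

# The exit code is non-zero when an assertion fails, so this slots
# straight into CI: the build breaks when the page gets too heavy.
```

Any metric Phantomas reports can be asserted this way, not just the request count.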
So you can actually test things and make sure, like I said, that you're sticking to your numbers, and that nothing is getting crazy out of hand in any one given change, which is cool. Now you might be thinking: wow, that's neat, but how can I actually keep track of all this? Well, there is a way to do that, which is grunt-phantomas. This is not just a wrapper for Phantomas. What it does is compile detailed reports over time, let you track the data, and, very recently added, track the assertions and visualize them on a graph. This is my favorite image representing grunt-phantomas. I was working for a client earlier this year, and I ran this tool every so often because I just wanted to keep track of the performance of the thing they were complaining about: some images loading kind of slow. So I was like: okay, I'll set up some tests. Nothing happened for a few weeks. You can see that the CSS stayed relatively the same, and then one morning it wasn't. People were complaining about the site being super slow, and the development team was complaining about it too. So I did what I do every morning: I ran this test and started looking at the graphs, and I was like: dang, okay. Someone inlined 8 megabytes of CSS into the file. No. It was very evident what had happened, though, because I had this graph showing me what was going on. So I showed the graph to the team lead and said: hey, I think we need to get all this CSS out. We found the lion's share of it, and the next day it was all okay when I ran the test again. So if you have the ability to put up these graphs, run them every six hours on a website you're monitoring, and put them on a TV in the office or whatever, you can keep track of this and get very instantaneous feedback when something is going a little off the rails. So it's super cool. Let's see.
We should look at the demo of this thing because it's gotten a lot better. Where is it? Here we go. All right. So you can see there are a couple of failed assertions on this page. Now, this is what the full thing looks like. This is the data it's based on, and then it makes D3 visualizations out of it. It's all real time and you can alter it. So you can see here, this is a good one. Is this going to work? Yes. The red line is the assertion for this particular metric, which is biggest latency. When you go over it, it colors the dot red, and you can easily have something set up to catch these failed assertions and email you or do something like that. But having a graph over time can be an early warning system. You're like, oh, in month one of the project we weren't anywhere near the performance budget limits, but in month three we're starting to get there, and you know that you've got to slow that curve down. And then sometimes you'll just pop way above it and have a problem to fix. But it's better to have the data and see it and know about it, rather than wondering what happened when you wake up or come back from the weekend and have a problem to solve. Yeah. We just looked at the demo. So I've mentioned performance budgets a couple of times. Performance budgets are like a money budget: you just want to make sure that you're staying under. There is a grunt-perfbudget, which is actually being adapted to use Gulp as well. grunt-phantomas is working on Gulp support too. I don't really see it being that necessary, because Gulp is better at concurrency. Well, Gulp is better at concurrency than the current Grunt, and so when you need to run Sass, minify your JS, and do all this, Gulp is a little better. But for these types of tasks, I find Grunt to be perfectly awesome, and the configuration is simpler to set up anyway. grunt-perfbudget runs on webpagetest.org.
If you've ever used that, it's this super cool service that lets you type in a URL, and people can sponsor test instances as well. It basically gives you all these pretty graphs and waterfalls and things like that. So this thing is yet another command-line integration for webpagetest.org. This is probably going to take a second, but we'll look at it in a minute. webpagetest.org, I think I would have to say, is the best way to test your site right now. So if you're looking to try one of these and you don't have a lot of time to mess with it and make sure all the data is working right, try grunt-perfbudget. You have to request a key from them, but they let you run something like 200 tests a day. It's a free key, and you can also run your own instance and get unlimited tests. Let's see if it... Now I've done it. I think I was at the end anyway. So this is what it looks like on the other side. It also boils everything down to a speed index. This particular... oh great, I'm trying to highlight my image. They rendered the website in like 500 milliseconds, and the budget was 1500. The number of requests was 31, and the budget was 32, so this passed. This is a way to set just a couple of numbers, and it's a very macro, high-level way to make sure that you're performing well. Now, if this thing starts failing, you might want to dig in with the more detailed tool and figure out what's happening. Yeah. So it gives you these types of results from webpagetest.org. How do you set up the budgets? You just make an agreement with your client, and then they break the agreement like three weeks before launch. But in all seriousness, you basically just pick a number, and you pick it based on how ambitious you want to be. Like, 30 is very ambitious, just for context: Drupal has five CSS files and five JavaScript files, just core aggregated stuff, when you've not installed any modules.
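The grunt-perfbudget configuration for the demo above would look roughly like this. The option names (`url`, `key`, `budget`) and budget metric names follow the grunt-perfbudget README as I recall it, and the URL, key placeholder, and numbers are assumptions matching the demo (speed index budget of 1500, request budget of 32), so double-check the keys against the current docs.

```javascript
// Gruntfile.js — sketch of a grunt-perfbudget task that submits a URL
// to WebPageTest and fails the build when the budget is exceeded.
// URL, key, and budget values are hypothetical examples.
module.exports = function (grunt) {
  grunt.initConfig({
    perfbudget: {
      homepage: {
        options: {
          url: 'http://example.com/',  // the page to test
          key: 'YOUR_WPT_API_KEY',     // requested from webpagetest.org
          budget: {
            SpeedIndex: '1500',        // the 500ms render passed this easily
            requests: '32'             // 31 requests measured, so this passed
          }
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-perfbudget');
};
```

Running `grunt perfbudget` in CI gives you the macro, two-number check described here: pass while you're under budget, fail the moment a change pushes you over.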
There are ways of reducing that, but you're going to get 11 requests right there, and then you start adding in things like Views or whatever. Then you start adding assets, and if you set a number, like I said, you start approaching it. You want to keep an eye on it. You can't just say, "we have a budget, let's worry about it when we get to phase three." It's not feasible to work that way. But if you have it from the beginning, and then someone starts asking for this big slideshow on the homepage, you can be like, hey, it's going to blow our budget. Let me show you. I'll take the afternoon to code it up, then run the metrics and show them how we're never going to hit the goal now unless we ditch the slideshow, because we all know they're not that useful anyway. But yeah. There are a couple of links to blog posts in the slides there. I skipped over the links, but they're in there. CSS regression testing, visual regression testing: this is what the gentleman in the front asked about earlier with Casper. Making changes that you didn't notice in CSS? I've never done that. Since CSS is just this wild, wild west of stuff that happens in one big global scope, CSS can be problematic sometimes. But it is pretty easy to catch and prevent. So I'm going to demo Wraith. I've actually been looking at a different tool which is related to Casper, but the one I'm demoing today is very easy. It doesn't require any coding; it's just a YAML configuration file. So this one is the least fuss to set up. It uses either PhantomJS or SlimerJS, either the WebKit or the Gecko headless engine, to produce a visual diff of two different environments. So this is an example from the Wraith GitHub repo. You can see they added a login button up there near the menu. That's new. It's dark blue. But someone also seems to have added a pixel of padding to the left of this interface, so you can see that all the words are bumped out of place, and you're seeing lots of blue.
I probably didn't want to see that if the only thing I was adding was this little button at the top. So that's what this thing helps with. Do I have this demo? I'm going to go... My website is running. So I've got a local installation of... we're done with that. Cool. I've got two different versions of our website. One is fourkitchens.com; one is this localhost version that I'm running so I can do some live editing. And they're identical: you can see that nothing's changing as I switch between the tabs. So I will go ahead and make a change to the website. And I'm going to... oh man, there it is. Make this text a little bigger. All right, here we go. So I'm going to go into the Wraith example, and I'm going to say wraith capture demo. "demo" is not an arbitrary word; it's the name of the config file. And we're just going to look really quick here. It's using the PhantomJS flavor of this. There's this snap file that they supply. I picked the word "shots" for the output directory; you could put whatever you want there, like "amsterdam-demo". And then I've got the live URL and the localhost URL that I just showed you. And then we're going to test the site at three different widths. So what it's going to do is test both environments at three breakpoints and then diff each individual screenshot. And we're just testing our home page. You could put in another path here, like about... you know, you get it. So I'm going to run the demo, and it's going to work perfectly. What did I call it? There we go. amsterdam-demo. All right. Because it is a local thing, we've got this browser-sync tool running, and so it puts this little blue box here every time, but that's something to ignore. You can see this third column here. Oops. The first column is dev, the second column is live, and the third column is the diff.
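The config file being described is a small YAML file. This is a hypothetical reconstruction of it: the key names (`browser`, `directory`, `domains`, `screen_widths`, `paths`) follow the Wraith README, while the localhost port, the widths, and the threshold value are invented stand-ins, so check them against your Wraith version.

```yaml
# configs/demo.yaml — hypothetical sketch of the Wraith config in the demo.
browser:
  phantomjs: "phantomjs"   # the WebKit engine; a SlimerJS entry would use Gecko
directory: "shots"         # output folder for screenshots and diffs
domains:
  live:  "http://www.fourkitchens.com"
  local: "http://localhost:8888"   # hypothetical port for the local copy
screen_widths:             # each path is captured at every width, then diffed
  - 320
  - 768
  - 1280
paths:
  home: /                  # add more paths here, e.g. an about page
threshold: 5               # hypothetical: % of changed pixels tolerated
```

With two domains and three widths, `wraith capture demo` produces three screenshot pairs and three diff images: the dev, live, and diff columns in the gallery.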
So when we open one of these up, you can see that what I did was make the font size of this paragraph bigger, which in turn bumped everything down the page just a little. That's okay if that's what you wanted to have happen. But if someone's doing some hacking and you run these tests and see that a bunch of stuff on the page you weren't working on has changed, you're like, ah. We need to fix that before committing all this, and then we're good to go. The converse of that is actually more interesting to me. So, true story: one time I was building a responsive website and I stopped caring about IE8 for like four months. Then it came time to launch the site, and they were like, we looked at this in IE8 and it looks terrible. And I was like, you're right. I did that. So then I had to go back and fix IE8 without messing up everything else I'd done. How I did that is a different story; we did it with Sass and all that. Basically we tried to change our IE8 stylesheet to fix the bugs while not changing anything else. And I used this tool to verify that, in the WebKit engine at various viewport widths, I had made no changes while fixing IE8. So you can use this to verify that you aren't changing things. It's probably more useful for verifying that you aren't changing things than that you are. That's the use case I end up finding most convenient. But yeah, this is called Wraith. It's an open-source library from the BBC, and there are a couple of alternatives. PhantomCSS is the one that's integrated with CasperJS. I tried it a long time ago; I was both worse at Casper, and the library itself was less polished. So for the last year or so I've been using Wraith. But whatever floats your boat is a good one. Does anybody have any questions about the regression testing? No, you are lively today. All right, so it can do multiple tests. I don't know why I haven't gotten rid of this. But you can do all this with GitHub.
We've got a blog post on our site that shows how you can do this kind of thing, but I'm not the one who knows how to do all that, and I'm not even going to act like I do. The last thing I wanted to talk about is automated testing services. So if you don't want to do any of this stuff... if you do like it, that's cool. I actually like coding this and supporting my code with tests and all that kind of stuff. But if you don't, there is a service that's become very awesome recently called Ghost Inspector. What it does is let you catch bugs and do all this kind of stuff without writing the code yourself. You install a Chrome extension, and then you basically hit the record button and navigate your website, and it builds Casper tests for you based on your user actions. So it'll let you start recording, then you stop. You can build tests manually using their interface, or edit the tests after creating them with the macro tool. You can see here that it is letting you... I don't know what it's doing, but something user-friendly, because you're not having to code all the specific syntax of Casper. And finally, it does regression testing too, with PhantomCSS. And this is a service, so you can have it running and testing remote environments, or testing your production site, or anything like that. It's pretty cool. I haven't actually used it beyond just experimenting and trying it out, but if investing in the kind of stuff that I've talked about today is not something that's super interesting to you, then maybe give Ghost Inspector a try when the need arises. Yeah. Oh, and I've got a YouTube video here. Well, I'm sorry I ran a little long, so there's not much time for questions, but I've got some further reading here. Like I said, all the examples are in the repo, so if you've got any questions you can stick around, but I think we're getting close to time, so thank you for coming.
That short link is totally wrong, but if you could, sometime at your leisure this week, go back to my session page and fill out the evaluation form, I'd really appreciate it. Whether you love me or hate me, I would like your feedback.