This is a talk that I put together based on a lot of my experience working at Pantheon, which is a hosting and development platform company, as well as just building Drupal websites over the last decade. I think we've made a lot of improvements in our ability to actually test the work that we do. Today, I'm going to share a number of tests that I do using different services or DIY solutions, and hopefully in the next 45 minutes or so before lunch, I'll give you a sense of some of the testing options that are out there and some of the ways you can use version control to make this testing really painless. Then at the end, I'll show you a demo to put it all together, to see how this could happen in action on your own setups, and to really learn how to make the best code possible, because I think that's ultimately a lot of what we do as programmers and as technologists.

To walk you through what we're doing today, which hopefully won't come as too much of a surprise: we're going to introduce the basic ideas of workflow, and try to go a little deeper into Git branches. This is table stakes for most testing and workflow tooling. I hope it's not going to be a huge surprise to you, but some of the ways you can use Git branching will hopefully be a little interesting. I'm then going to talk through three specific types of testing that I do: cross-browser testing, visual regression testing, and performance testing. I'll talk about what each of those is, I'll talk about some of the pitfalls and some of the services that I use, and I can show off some of the tools as well. Then we'll run through a full demo of all of this, and hopefully get you to lunch thinking about this kind of stuff for your projects.

My name is Matt Cheney. I'm populist on Twitter, although this is a picture of me down at DrupalCon Latin America, which was quite lovely. I had my bio up here, but I think it's better to talk about the content, so let me get going.

The basics of testing, which hopefully will not be surprising to folks in this room, is the idea of implementing a workflow for your code development. In the bare-bones sense it's pretty simple. We have a development instance where you do your actual code writing, and where you experiment with stuff and work on things. When you're comfortable with what's happening there, you commit it to version control and move it to the testing environment, where you perform a lot of the tests that we're talking about today. If those are successful, you pass it over to the live environment, and that's where your fans and followers will see your work, and where you'll ultimately be judged on the quality of it.

There are good reasons to do it this way. In my previous work life, before I got very much into workflow, I would try to skip steps. I might skip the debugging and testing so I could quickly push something back up to live, or I might just YOLO my way through on live and edit directly there because it's faster. But as I grow older and maybe a little more experienced, I don't really think you should YOLO (you only live once) your way through your workflow, because we have really great tools to do this.
And I think when we embrace this kind of workflow technology, we get liberated, in some sense, from the stress that comes with editing something in production or something about to go live. The more testing and the more assurances we have that the code we're writing is good, the more comfortable we are pushing things live. There are many cases in my development where I'll work on something in dev, do all the testing and all the evaluation in the test environment, and then when I push it live, I don't have to go check that it works. I'm confident it will work, and work correctly. That takes a lot of stress off of me. I haven't crossed my fingers on a deployment in a while, and I think that's very healthy.

So going dev-test-live is, I think, something hopefully all of you do. It's not a particularly difficult thing to set up. There are a lot of hosted services that will provide it for you out of the box, and you can set it up on your own; I'll talk a little about how that works in version control. But from the perspective of the dev-test-live workflow, when we talk about larger or more complicated projects, we're actually talking about a much more complicated workflow than just do it in dev, move it to test, move it to live. The fundamentals are the same, but on larger projects with more developers, we end up having multiple development environments. Every developer will have his or her own environment; perhaps every feature or every Jira ticket will have its own environment. Those are all places where you actually do development. The testing environment exists as before, although some folks will keep a testing environment just for themselves and their agency, and use a separate QA environment to get client approval as well. That's a pretty decent workflow too, especially if you have a lot of external stakeholders who need to review things. And the live environment is as it was.

One thing to think about: with this workflow, your code goes from dev to test to live. But there's also a sync operation that's really important, where you move your content (your files and your database) from live back through test and through dev. That's the part we do sometimes, and hopefully all the time: you do a MySQL dump of the live database, flush your dev database, load it in, and copy over the files. For really quality testing, and even quality development, having as close as possible a version of your live environment (including the database, the files, and the versions of PHP, Nginx, and MariaDB you're using) is really important. One thing that's definitely been frustrating for me, and I know for others, is having something that only works on your local or your dev instance but doesn't work in live. The closer you can get to a workflow where you have that kind of parity, the better it's all gonna be.
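Roughly, that sync boils down to something like this; a minimal sketch, with made-up hostnames, paths, and database names (hosted platforms usually give you a one-click or CLI equivalent):

```bash
# Pull the live database down to dev. Hostnames, credentials, and
# paths here are placeholders for illustration.
ssh deploy@live.example.com "mysqldump --single-transaction drupal_live" > live.sql
mysql drupal_dev < live.sql

# Copy user-uploaded files from live into the dev docroot.
rsync -az --delete deploy@live.example.com:/var/www/live/sites/default/files/ \
  /var/www/dev/sites/default/files/

# Rebuild caches so dev picks up the fresh content (Drush 8 syntax).
drush --root=/var/www/dev cache-rebuild
```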
To make that workflow work, and this is something hopefully we're somewhat familiar with, to do the dev-test-live workflow we need to have our code in some sort of version control system, Git being, I think, the most popular, and certainly the one used on Drupal.org. Git, of course, is a distributed version control system, which means you have a central repository that ultimately holds the code, but you can individually do a lot of manipulation and work in your local environments and local checkouts.

One of the really awesome features of Git, which I think is central to a lot of the more advanced workflow operations, is the idea of having different feature branches for different concerns. When you have multiple dev instances like this, these are all different feature branches within Git. You can create a branch in Git relatively easily: you clone the repository, check out a branch, and then work on that branch. And what's awesome is that with different branches, you can work on different things at the same time. You can have a branch designed just to test a Drupal update, while another branch builds a new slideshow feature and another branch tests some new module you want to try. Each of those can be stored, saved, and worked on separately. When you actually want to merge them together, you run that operation, and assuming there are no conflicts, you can just put them all together. If there are conflicts, of course, you need to resolve those. This also works for multiple developers: every developer can have their own branch. It's a best-of-both-worlds approach, where everything lives in a saved version that can go back to a master branch, but can also be individually spun off for granular development. And branches can be deleted as well; they don't always have to go into the final product, so it's very good for testing and experimentation.

When we actually want to do deployments, when we want to move things to test and to live, the strategy is to put a tag on the state of the code and do a deployment of that. A tag is a snapshot in time of the code base. You say, hey, this is the version of the website I want to test. So you make a tag in Git, which gives you a reference for it, and then you can say: deploy this tag to test and see if it works. If it works, you can then deploy that same tag to live. There's a lot of confidence in that, because it's a very clean snapshot of the code base: a specific tag is always going to be the same code, every single time. I think that solves some of the problems we've had in the past when we tried to copy stuff around, and maybe a permissions issue meant something didn't get copied, or a copy wasn't properly recursive, or a symlink got a little messed up. Tag-based deployment is very clean because you have that snapshot.
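As a concrete sketch of that branch-and-tag flow (the branch and tag names here are made up):

```bash
# Cut a feature branch off the main development branch.
git checkout master
git pull origin master
git checkout -b feature/new-slideshow

# ...do the work, committing as you go...
git commit -am "Add slideshow block and styles"

# Pull in anything that landed on master while you worked.
git merge master

# Once the feature is approved, merge it back and tag a release.
git checkout master
git merge feature/new-slideshow
git tag -a v1.4.0 -m "Slideshow release"
git push origin master v1.4.0

# Deploy the v1.4.0 tag to test, then the same tag to live. If a
# deploy goes bad, redeploy the previous tag (say v1.3.0) to roll back.
```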
By having version control and these tags, you also have the ability to quickly roll back changes that you don't like. If you deploy something to test that doesn't work right, you can revert it and go back. That's all very powerful.

This creates a project workflow that looks a little like this. The yellow line through the center-ish area is the master or dev branch where you're ultimately doing all of your work, and the colored lines (the purple, the blue, the green) are individual feature branches. The point in time where they branch off the main line is when the work starts to happen on them. If you decide, oh, I want to build a feature now, you branch at that moment, take exactly the code base as it is, and start making modifications to reflect that specific change. What's awesome is you can work on that independently of the master branch, but if you decide there are new updates on master you want to take into account, you can bring those back in as well. So you can keep your branch up to date with the latest from the master dev branch, but still do your own features. You even get the ability, while working on a feature, to branch further off of that branch, so you end up with layered development. A lot of this will depend on the kind of project you're building, the people you're working with, and the requirements. But increasingly, as you move into more agile, rapid-feedback development cycles, having the ability to quickly spin off different features, work on them, show them, and then, if they're approved, merge them back becomes really powerful.

The release process I mentioned with the tagging is those two red tags on there, which say: at this point in time, we're going to take this particular feature set, this set of code, mark it, deploy it, and this is what we ultimately test. This is the kind of workflow I think is really helpful for everyone, and something that, if you're working in a Drupal agency or organization right now, you should really try to adopt. Even if you're already using Git and maybe a dev-test-live setup, but all the developers are working on the same dev branch, something like this can help separate out concerns a lot more easily and make deploying a lot more straightforward. So use it for all your projects. Here's a sort of creepy graphic I found on the internet to highlight that. But I think it's a good way to manage code; using Git with a workflow like this makes some awesome stuff happen.

Okay. So wait, you might ask: what about configuration? Git is really good for managing code: PHP, HTML, CSS, JavaScript, these kinds of things. But as you're obviously aware, within Drupal there's a lot of clicking and pointing that happens that is not actually code; those are database changes.
Making something like a view, for example, could take a whole afternoon: a really awesome view that has the right filters and appears in the right places. By default, that doesn't exist in the code base. In dev, I can work on my view all I want, but it doesn't show up in the code, so it can't be part of this kind of workflow.

One of the things that's really awesome in Drupal 8, which I hope you're embracing, is the configuration management system. It has basically been designed from the ground up in Drupal 8 to make sure that all the different configuration you do on your site can also appear in the code base, in a way that is versionable, releasable, and deployable. The general concept, just to run through it quickly because I'm sort of in love with it, is that you have a settings form, be it a complicated view form or the site information page, that can be represented in a YAML file, which is machine-readable information. So you take the values you would put into the database and instead put them into a YAML file. For Drupal, of course, there are all sorts of YAML files; this is just a snapshot of the different configurations you could have in your site. The idea is that all of the configuration in the site can be implemented as code in this way. There are hopefully some talks here at this conference, as well as a number online, that go through this really thoroughly; I'm putting the link down here, and I definitely encourage you to explore it.

For the purposes of this talk, what matters is that when we're doing Drupal development and writing new code on our feature branches, we also want to write new configuration. That configuration can be easily exported using the graphical user interface in the configuration section of the Drupal admin system; you can export configuration as well as synchronize it and bring it back in. You push it out to the file system, write it, then bring it back in. And if you're into the command line, Drush, as well as Drupal Console, has specific commands to do config import and config export operations. This is the kind of thing you should be regularly doing in your projects: as a developer working on a feature branch, you should regularly take the database from live and pull it into your dev, make the code changes you want, make the configuration changes you want, export out the configuration, and then you're good to go.

All right, so that's the table stakes for what I'm talking about here. You need this kind of system to get a lot of the testing work I'm going to show off the ground, so I encourage you to grok that framework and ultimately get to this kind of place. Now let's actually talk about some of the testing we're going to do. The theory here is that as web developers, we should do a lot of testing for our projects: every commit we do, every deploy we do, should be reviewed as well as possible. We want to do good work.
One of the things that's frustrating, though, especially for knowledge workers and folks who are more experienced, is that a lot of testing is very boring and repetitive. If we do our jobs right, if you're good developers, then most of the testing, if not almost all of it, just comes back and says: okay, it works, it does what you think it should do, it's acceptable. Most of the tests should pass most of the time, because you're typically going to do a decent job. It's just that small percentage of the time when something fails that you really need to pay attention. The problem is that humans aren't particularly good at this, especially if you're writing the code yourself; you might skip over the tests, or not run them as often. But having the ability to do this repeatedly, and have the robots do it, is a really awesome situation. The robots and the AIs are going to be driving our cars and picking our books and movies and answering our email inside of a decade; I don't see any reason why they can't help us with our web development right now. A lot of this talk is about using robots and automation to make sure your sites are tested as efficiently as possible.

So let me jump in to what I think might be the easier of the test items: setting up a cross-browser testing process. For those who have been doing web development for a while, this used to be pretty awful, with very old versions of Internet Explorer in particular. In general, as a web developer you're probably going to use a few browsers on your laptop or desktop. But out in the world, there are a lot of different web browsers that you need to be aware of and make sure your stuff works on, and a lot of different platforms you need to be aware of too. Firefox works differently on Windows than it does on Mac; it works differently on Windows 7 than on Windows 10. And especially when you start talking about the world of mobile devices, Android and iPhone and all the different versions they're on, there are a lot of different places where you need to make sure your site works. As we build more responsive sites with more breakpoints, this matters even more. And if you're on a Mac, you have to spin up VMs to test Windows, and vice versa. That's a lot of tedium, and clearly not something you're going to want to do every time, because it takes so long. People get pretty frustrated at their computers over this; I wouldn't encourage that, of course.

What I would encourage is looking at hosted cross-browser testing services that already have these VMs. Hopefully some of you are using these services now; there are many of them out there. The basic premise is that you submit URLs, or Selenium test patterns, or other things to these services; they already have VMs running all the versions of the operating systems with all the versions of the browsers, and they'll do this testing for you. One of the ones I'll show off today is a service called Spotbot. It's similar to the other services; I think it's good, but it's not anything that different.
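To give you a flavor of what firing one of these services off programmatically looks like, here's roughly a request to BrowserStack's Screenshots API; the endpoint and payload shape are from their public docs as I remember them, so treat this as a sketch and check the current docs before relying on it:

```bash
# Kick off screenshots of one URL across several browser/OS combos.
# USER:KEY are your BrowserStack credentials; the URL is a placeholder.
curl -u "USER:KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "url": "http://test-mysite.example.com/",
        "browsers": [
          {"os": "Windows", "os_version": "10", "browser": "ie", "browser_version": "11.0"},
          {"os": "OS X", "os_version": "El Capitan", "browser": "chrome", "browser_version": "47.0"}
        ]
      }' \
  https://www.browserstack.com/screenshots
# The JSON response includes a job ID you can poll for the finished screenshots.
```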
The thing that's most important to me about this service, and all the services I'll show, is that the services with APIs are going to be the best services. To do a proper industrial workflow, you want to fire off these tests automatically. You don't want to have to log into a site, type a URL, hit scan, and then wait for results. You want your workflow, when it deploys, to talk to an API and start that process immediately. When I talk about industrial-grade workflow, it's this idea of moving from step to step as seamlessly as possible. If you've ever watched a car assembly line or something like it, it's very quick and everything is set up in advance; that saves you time and makes it more efficient. BrowserStack, which I just added as a link, is another one that has an API. I don't want to play favorites among particular services, but this is nice because it's pretty easy: as you'll see in the demo, you can trigger an API call to scan a URL and get back a bunch of different browsers, which is really nice.

The second test I think is really important, and hopefully near and dear to your hearts, is the performance of the site. Making websites fast can actually be one of the harder things to do. There's a lot of server work involved, for example, but the server best practices are well known: you set up Varnish, you set up some object caching, you make sure you have the appropriate number of nodes, you add a load balancer, you get fast disks for your databases, you tune the configuration for a Drupal-sized application, you make sure you have APC, and you do all the caching and all that stuff. But the biggest variable in performance isn't necessarily your server stack; it's the code you ultimately write, or the configuration you do. If you make a very complicated view, for example, it can slow down your site. That can be pretty hard to test, because while there are a lot of people in this world who are good at performance, it's not everybody. It's typically a small number of people inside a company, yet everybody has to write code, and everybody's code has to be performant.

So one of the things that's helpful as part of an industrial-grade workflow is being able to do a performance test on every commit of code or every deployment that you do. Performance matters a great deal, and being able to quickly review every deploy and see how fast that code base was gives you a baseline to compare against. If the site is slow two weeks later, you don't have to go back and dig to figure out what happened; you just look at each deploy over the last two weeks and ask: when did it spike? Okay, that's probably the code we need to review.

There are a lot of really awesome ways to do this. My colleague Josh Koenig has a really cool little spider he's been using to get performance data; there's a link down here that'll be in the slides.
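The idea is simple enough to sketch with plain wget; this is a rough approximation of the flags, not Josh's actual script (that's at the link):

```bash
# Crawl the test site a couple of levels deep to generate realistic
# traffic, discarding pages as they download. Hostname is a placeholder.
wget --recursive --level=2 --no-parent --delete-after --wait=1 \
  http://test-mysite.example.com/
```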
It just uses wget like that to spider a site and pull down a bunch of pages, which generates a lot of traffic on the site that you can then watch with a tool like New Relic or another application performance monitoring tool to see what's actually happening. This is helpful because when you're looking to do testing, you want some sort of testing tool, and this is a pretty easy one you can just run on the command line, no big deal.

You can also get a bunch of hosted services. These are tools like Blitz.io, Load Impact, and LoadStorm (plus JMeter, which you can set up to work this way yourself) that already have servers all around the world. You give them a test plan that you set up and they'll execute it. I'll show you one in the demo where you can basically record a user pattern on the site: you record logging in, clicking on some pages, maybe submitting a comment, doing a search, updating your profile, and then logging out. They can then simulate that for five virtual users, 50 virtual users, 5,000 virtual users on your site, and give you reports back. That's really helpful, because that kind of data provides a baseline and gives you better analytics about how you're doing.

Load Impact is the one I'll show today. They of course have an API; it's actually relatively advanced, and you can do a lot of different stuff with it. The basic idea is that you set up a test plan, tell it to execute, and it kicks back the results. The performance test recorder you can use with Load Impact is just a Google Chrome extension: you turn it on, hit record, and go from there, and you can do a lot of really nice stuff clicking around inside the Drupal site. The one gotcha I'd share from recording this: if you're trying to record a login behavior for a Drupal site, it will record your password and form values when you submit, but Drupal has a built-in security feature, a form token, that you have to abstract out into a variable in Load Impact. They have a blog post about how to log into a Drupal site this way. It's a Drupal security feature you have to work around in the script, but it's no problem. Once you have these scripts, you can run several of them together and really simulate the kind of traffic you want. The best performance-test traffic is as close to real traffic as possible, because that's what your users will actually experience: a site with users doing real stuff. So I'd totally encourage you to create performance test plans that are as close as possible to the real site, because that will really help make it all shine.

The last kind of testing I'll share, which may be the one most unfamiliar to folks in the room, is the concept of a visual regression test. It's not as popular as other tests; it's a newer kind of thing. But I think it can be one of the most helpful kinds of tests for a certain kind of development and testing process. What visual regression testing is, is a before-and-after comparison between two different versions of a site.
It looks at your site right now, and then at what your site would look like after you do a deployment. It takes a snapshot of the test environment, updates the test environment with your new stuff, and takes another snapshot. What it does is help you identify, on a per-pixel level, what's changed. So take these two screenshots, the before on the left and the after on the right, and you want to know what's changed. They look sort of similar (depending on how good the projector is), but you have to try to figure out: is the spacing different? Did the font size change? What actually got affected? When you do CSS work, as you're probably familiar, small CSS changes can affect other things you don't know about, so you want to be really careful. Running a visual regression test will highlight very specifically what changed. In this case, these titles changed, and maybe that's what we expected: we wanted those titles, because they're links, to have a different color. But it also changed that little "read more" link down there. Maybe that was intended, maybe it wasn't. But it's something we get visual information about, so we can quickly find out what actually changed. That's helpful for figuring out what changed on the page, because I expect certain changes; I want to see those, and I don't want to see other changes.

The scenario where I think this is the best and most effective kind of testing is when you're pushing security updates to your Drupal site. If you're applying a security update, you should expect no visual changes on the site: it should look the same before and after, because all the security update is doing is fixing security problems, and unless those are being actively exploited on your front page, nothing should change visually. So if you see a visual comparison before and after a security update and there's zero percent difference across 50 pages, you have a lot of confidence that the update will work and not break things. Now, that doesn't cover everything; things can still break behind the scenes. But at a high level, if you're not seeing visual changes and you're not seeing error messages, you have a good chance of being able to just go ahead and deploy it. I think that's a really powerful use case for this testing.

To set this up, you can do it yourself, no problem. The best tool for this is one called Wraith, which was developed to make exactly this happen. It's a command-line tool; you do a little configuration, set up your URLs, and it goes out, takes snapshots of those pages, and does comparisons. There are, of course, also a bunch of hosted services you can use: Applitools, BrowserStack, Sauce Labs, and BackTrac, which I'll show off, all do this testing as well. I mention the hosted services because I feel like we have a lot of work to do as web developers. Setting this kind of thing up is definitely doable, and if you're interested in that stuff, totally go ahead. I love server stuff. I love tinkering around with it.
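If you do want to tinker, Wraith is a Ruby gem driven by a small YAML config of domains and paths. Here's a minimal sketch of the commands as I recall them from Wraith's docs; check the docs for the exact config format:

```bash
# Install the Wraith gem and scaffold an example config.
gem install wraith
wraith setup   # creates a configs/ directory with sample YAML

# Edit the config to point its two domains ("before" and "after") at
# your test and live URLs and list the paths to screenshot, then:
wraith capture configs/capture.yaml
# Wraith saves side-by-side screenshots, a per-pixel diff image, and
# a gallery reporting the percentage changed for each page.
```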
But when you're trying to build an industrial workflow, you don't want to have to write your code and maintain all your testing suites and also maintain your testing servers. When you can find third parties that can do cross-browser testing, performance testing, and visual regression testing, using their servers, their uptime, and their expertise is great, because you can build on what they're doing. So set it up yourself if you want; you should definitely play around with it, and it's good to learn how it works. But let somebody else run it, somebody else update it, somebody else build the APIs, so you can just use it. I'll show off BackTrac, which is a tool that basically does this: it does a lot of those visual comparisons, and they run all the workers for the different kinds of jobs. You get this view where, if you move stuff from the right side of the site to the left, it'll show you what moved; if nothing moved, it'll show you zero.

All right, so let's do a little demo, about 15 or so minutes, and show this off. We have Steve Jobs, of course, to bless our demo; that's an old picture of Steve Jobs, with the hippie hair. Just to talk through the parts of the demo before we get into it: we're obviously going to show a website. I'll show you this on Pantheon, just because I know it best, and it already has a dev-test-live workflow. And the real trick to making an industrial-grade workflow is the idea of creating a platform hook, a workflow hook, for what you're doing. If you have a script on your own servers that does a deployment from dev to test, or if you're on something like Pantheon or Acquia or Platform.sh, wherever you're doing deployments, these systems can hook into each deployment and let you run code. This is where the magic happens: when I say, okay, we're deploying this particular tag of our version control from dev to test, we can then run some code afterwards, and that's what we use to trigger our tests, and that makes this a lot of fun. Pantheon has a system called Quicksilver: you write a PHP script and you can trigger this kind of stuff. You end up defining your process something like this: on the deployment, maybe we tell our Slack channel we're about to deploy, so people on Slack know tests are coming; and after the deployment, we do certain things, like import configuration and run our tests. Those are the three tools we'll look at today: cross-browser testing, visual regression testing, and performance testing.

All right, that's the high level of what we're up to. Now we'll get into the test and I can show you what this looks like. I've got this all set up, so let's get the Drupal party started; I'll show some Drupal images to find a little energy. Okay, so we've got a site right here. This is a demo made for DrupalCon. It has a development environment, it has a test environment, and it has a live environment; no surprises from a workflow perspective. Each of these environments has its own URL.
Each has its own database and its own code base, and the code base is tied to a specific Git tag. You'll see this is the dev environment, and there's also a test and a live environment. There are also multidev environments for the different feature branches and other kinds of things, and you can implement testing with those as well. If you're working on a specific branch, you can say: I want to test PHP 7, for example, and make sure it works on my site. Create a branch, run PHP 7, do a visual regression test, see if stuff looks different. That could be really helpful. But for this demo, I'll just do the basic dev-test-live.

Inside of this we'll go to the dev site, and this is our bicycle site. I love riding bicycles; I live in San Francisco, pretty big into that. One of the things we'll do is make some code changes, things we'll want to actually test. We could write some CSS or add some modules, but here we'll just move some of these blocks around, as one might: we'll move the list of bikes and the about-the-shop block over to the right side. This is a Drupal 8 site, of course, and in Drupal 8 the block system is much more powerful than in Drupal 7, and a little easier to use, so I'm happy to show it off. We went ahead and moved the blocks, and we can see the bikes have moved from one side to the other. That shouldn't be super crazy, but it is a visual change we're going to try to test. There's a performance angle we'll test too: moving things between regions could change how our caching is done, or whatever; let's see what happens.

The other thing to note is that this change currently only exists in the database; we haven't actually written any code. But we do have the configuration management system, down here, that we can use to export this. Hopefully you've all played around with it. One option is to do a full export: it'll download all the YAML files, and then you can manually copy them into your version control system. That's one way to do it; it's by no means the easiest. If you're using Drush and are familiar with it, you can just run config-export and it'll export it all out, which is my preferred way, since I'm usually working on the command line. But for this demo, there's a module written by a legendary developer, chx, called Config Devel, which (with a little bit of a patch) has the ability to write the configuration you're working on to disk from inside the Drupal UI. This isn't in Drupal core, because core doesn't want to assume it can write to the file system or the code base. But in the case of our dev site, we're in SFTP mode, which means the dev site is writable. (If you're working on a local computer, your dev site is probably writable too, because you're editing code.) So we can just go ahead and do an export, and that kicks out some YAML files and tells us our configs were written to disk. Then we come back to our dev site, and we actually have those files exported out.
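If you're on the command line instead, the same round trip is a couple of commands; this is Drush 8-era syntax, and the config directory path is an assumption that depends on your settings:

```bash
# Export active configuration to the config sync directory as YAML
# (Drush 8 syntax; alias: drush cex).
drush config-export -y

# The changed YAML, like our block placement, now shows up in Git.
git status
git add config/   # path depends on your config sync directory setting
git commit -m "Move bikes and about-the-shop blocks to the right sidebar"
```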
This is the kind of thing we really care about from a configuration management standpoint: for this moving-the-sidebars-around change, you can see in the YAML file that the block used to be in the first region, now it's in the second region, and we've changed the weight as well. This is the machine instruction that Drupal can process to actually make the change. If you're used to Features, you can do this with Features in Drupal 7, by the way; it works the same way, it's just a lot more code and a little more ad hoc, let's say. Configuration management with the YAML files is very clean and very easy; you can just run one command. Then we can go ahead and commit that to version control, with a message about changing the sidebar. We are using version control here, even though it's in the UI; it's just the robots on the server committing for us. If you're working on the command line, it's the same kind of thing: you add those files and commit.

Now, if I can take your attention from over here down to over here on the right: we're starting to do some integration with that workflow, to tell people what's happening. One of the things I think is really important for a testing workflow is that you're telling people you're doing testing, and letting people respond to and review those results. Robots can do a lot of things for us, but there's still a lot of human review, and even approval, required. Because we did that commit, we're telling our Slack channel the commit is happening; Slack has an API, of course, so you can do it this way. You could do the same thing with IRC, with email, with HipChat, with whatever tools you want. Being able to communicate with a team about testing is good, because one thing I've found is that it's typically a good idea to have people other than the ones who wrote the code do the testing. That doesn't mean you need a dedicated testing person; you can share the duties among your team. It's just harder to review your own stuff. Having other people do it is great, and having it pop up in Slack is a good way for them to get going.

So now it's in version control; we can go ahead and create a branch and actually do the deployment. One of the things that makes this easy is that the tooling here is already set up to do a lot of this workflow, but you can do all of these operations yourself. The general idea is: if you want to do a test deployment, you first take a tag of your version control and deploy that tag to a specific server. You also want to take a dump of the database from your live site and bring it over to your test environment, and copy the files from live to test. The point is you want your test environment to be as close as possible to live, to simulate what it would look like if you pushed your stuff live. Everything needs to be refreshed and up to the minute. You also obviously should run update.php and clear the cache.
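Those steps are exactly what you want the deployment hook to run for you. This isn't Pantheon's actual Quicksilver format (that's a PHP script wired up per environment); it's just a generic shell sketch of what an automated post-deploy script might do, with a placeholder Slack webhook URL:

```bash
#!/bin/bash
# Generic post-deploy script: a sketch of the steps, not any
# platform's real hook format. Drush 8-era commands.
set -e

drush updatedb -y        # run update.php's database updates
drush config-import -y   # pull in the exported YAML configuration
drush cache-rebuild      # clear caches

# Tell the team the deploy happened and tests are starting.
curl -X POST -H 'Content-type: application/json' \
  --data '{"text": "Deployed to test; kicking off cross-browser, visual regression, and performance tests."}' \
  https://hooks.slack.com/services/T000/B000/XXXXXXXX

# From here, call your testing services' APIs (cross-browser, visual
# regression, load test) with the test environment's URL.
```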
All of that is really helpful to do on deployments, and you also want to make sure the versions you're running on your test site (the version of PHP, the version of the database, the version of the web server, and all the configuration) are the same as live. It would suck to have a really awesome ImageMagick manipulation that looks really cool on your test server, push it to live where the ImageMagick library isn't installed, and have it totally break. So having good comparison and parity is important.

Over here on the right, we'll see we got a little deployment notification telling us it was doing the deployment, which is great: it alerts people, hey, something's been deployed to test, and the expectation is that we're about to run a bunch of tests. The first thing we do before all the testing is import the configuration in Drupal 8. There's a command called config-import inside of Drush that you can run. You could also log into the site and do a config sync, but running Drush automatically is really good. My recommendation, if you're doing test and live deployments, is to always run a Drush config import for Drupal 8 after the deployment. If you're using Drupal 7 and Features, I would always run a Drush features-revert-all after deployments as well. You want to get into the habit of doing what's known as a dry-run deployment, where when you deploy to test, you don't have to touch anything: it automatically sets itself up exactly the right way. It does the cache clearing automatically, the database dumping automatically, the config importing automatically, because we're going to trigger a bunch of automatic tests, and they are not going to wait for us to go in and fiddle with things. We want it to be really, really clean. That also means that when we deploy to live, we click one button and we know we're done; we don't have to go do a bunch of stuff afterwards.

Now that we're in, we'll see we've got three tests that have run, as promised: a cross-browser test here, a visual regression test here, and a performance test here. I'll show you what all this stuff looks like. First up, BackTrac. It's a different sort of system; not necessarily the most polished UI, but it has a lot of power, and that's really important. You can specify the desktop, tablet, or phone versions you want to test; in this case we're just doing desktop. If you click through and do a pixel comparison, it shows us that we've changed pixels on the page: we moved the sidebar around, so stuff looks different. This is helpful because we do expect a change to happen, right? If we had messed up and not exported the configuration, we wouldn't see any visual difference. So seeing a visual difference, we can say: okay, here's what switched. If we were doing a security update, of course, we wouldn't want to see a visual difference; if we did, we might get a little confused. This is the kind of thing that's really helpful for evaluating that. Inside of our cross-browser test, we again fired off the API for the cross-browser service.
It went ahead and ran the different screenshots, and we can see we have a Nexus 5 and an iPhone 6, Internet Explorer, Chrome, Firefox. For each of these we can go in, and with one click bring up a version of the site in Internet Explorer. That looks okay, which is awesome. This is a quick way to test a bunch of different URLs without having to run a VM or do any of that kind of stuff, and that's pretty cool.

The last bit here is the performance test, run using Load Impact. It ran 50 virtual users for three minutes on the site, against a bunch of URLs and pages, running a specific test plan. You can see the ramp-up of the users and of the load time, and things are looking decent here as well. This is the kind of thing where you may not necessarily check it every single time, but having this data stored lets you go back and review it. For performance testing, you can also set up thresholds: maybe you want the site to respond in under two seconds, so you set that as a threshold, and every time you deploy, if it's under two seconds, no problem; if it's over two seconds, it sends you an email, which cues you in that you need to do something different.

In general, I think these three kinds of tests are a good start, and running more of them is really easy. One thing I've found in doing testing is that the first test is always the hardest one to write, because you have to get the framework set up; you have to get the loop going. But once I've got this set up so I can integrate with Slack and do a performance test, a cross-browser test, and a visual regression test, adding additional tests (say I want to run the Coder module to review my code, or run a security review to review my security) is all really easy.

So in closing, I would definitely encourage everyone who's thinking about workflows, or who has a workflow, to make sure you get a workflow that's in version control, of course, using feature branches for individual features or developers, where, when you do a deployment to test, you run a number of different tests: performance, cross-browser, visual regression, security, code style, unit tests; kick off Jenkins jobs, run Selenium tests, whatever you think is appropriate. Then, when you're good with all your testing, you can go ahead and deploy to the live environment, as we're doing here, and you don't have to worry about it. You can just go on with your life and on to other stuff, because you don't need to worry that the live environment will look different or be different: you've already done the testing and it already works really well.

So with that, thank you for paying attention here at DrupalCon in my testing session. I'm open to a few questions if we have time, but everyone, thank you for being here and have a lovely day.

So I have a question about managing the visual regression tests and test results. Yep.
So, particularly for a large site, you could have hundreds of pages, and you're storing the Wraith images, the output; how do you manage when things change, when you expect changes? Do you have any tips for doing so?

Yeah, so the question is: when you're doing visual regression tests, you're going to have a lot of different images; how do you manage those, and how do you deal with the expectations around them? I think embedded in that is a really good point: when you're doing this kind of testing, and I showed you how to test the front page, you're going to want to test a lot of subpages as well, and a lot of good visual regression services will look at your sitemap, scan for a bunch of pages, and take a sample of 50 or so. You are, of course, going to generate a lot of stuff. One of the things I use to keep track of it is that the API will actually give you back a percent difference. One thing I didn't show, but you can do, is kick off the visual regression test, and then two or three or five minutes later when it's done, it can write back and say: the test is finished, and here's the percent difference on these pages. If it's zero, you're in a much better place; if there's a percentage, you can look at it. Keeping that in Slack in real time lets you review as you go. Obviously you're going to build up a big archive, and you'll have to dig in when you need to dig in, but keeping that flow of information coming as notifications in Slack is how I've helped manage it. Otherwise, just get a lot of disk space and have fun with that.

Any other questions? All right, well, in that case, thank you all for paying attention. Hope you have a great rest of the day.