All right, it's 4:45, I guess we can just start. Good afternoon, everyone. My name is Brandon McDonald, and I work at Cameron & Wilding. Today I'd like to talk to you about test automation. At Cameron & Wilding we've been using Behat for our Drupal functional test automation for quite some time now, and more recently we've been trying to come up with ideas for improving our test automation process: making it faster and easier, and spreading the testing across an entire team. So we've been working on an automation framework built around Behat, and the aim of my presentation today is to explain what the framework is and how it works, and, with any luck, get some of you interested in trying it out. The framework is available on GitHub; this is the URL. Before I go any further, can I get a show of hands of people who are currently using Behat for test automation? OK, a few of you. Behat knowledge is a prerequisite to being able to use this framework. Having said that, if you're not familiar with Behat and you want to get started, on the GitHub page we've got a really extensive readme which will guide you through getting started with Behat, writing your first tests, and running them, and that readme is specifically tailored towards this framework. Before I talk about what the framework is, I want to talk about why we went about building it in the first place. There are lots of reasons; some of them are listed here. The framework is designed to sit inside your Drupal code base, so we wanted it to be reusable and generic enough that we could drop it into any Drupal project and it should just work. We wanted it to be straightforward to deploy, so we've got it configured to install via Composer; I'll show that in a bit. Automation out of the box.
What that's about is that the framework includes a series of tests, so as soon as you install it you can immediately run them, hopefully on any site, and get some benefit from them. Making testing faster is really the same point: tests are included from day one. Incorporation into the CI process: no CI process is really complete unless it has some kind of test automation built into it, so this framework is designed so you can incorporate it into a CI pipeline. We wanted to encourage team-wide testing, so rather than testing sitting within the test team, who are then solely responsible for it, the framework is designed so that everyone on the team, developers and testers alike, has access to it, and they can all contribute to the tests and run them. We wanted a way to enforce a standard for our tests, so the framework has a defined folder hierarchy, naming conventions for the files, and a certain structure within those files, so that everyone writes their tests in the same way. And future rework of a site is probably my favourite point. This is the case where you worked on a project several months ago, and then the client comes to you and says, I need you to make a change to that site. By this point you've long forgotten about the project, or maybe you weren't even working on it at the time, so having the automation that was written back then in place when it comes time to make a change can be a real lifesaver. OK, so what's included in the framework? Let's quickly run through this list and then take the points one by one. It's designed to sit in your code base, so when you make a commit to your repository you're committing both the code for your site and the tests. It's installable via Composer. Within the framework we have a number of templates, so if you're familiar with Behat this will make sense.
We've got sample feature files in there, sample contexts, and page objects; we've gone for a page object class approach, which I'll talk about in a minute. We've included a helper context, which comes with lots and lots of functions that you don't get with the Drupal extension or Mink. The framework includes a number of scripts: a bunch of shell scripts that do things like installation, aiding the execution of the tests, starting and stopping servers, that sort of thing. And then the last point, reporting: we've tweaked how we go about reporting a little bit. So let me go through these points one by one. This is our Composer file. It's pretty lean on purpose; this is a pure Behat installation, really. The first thing we require is Behat at 3.0.6. We pull in the Drupal extension, which gives us lots of functionality specifically for testing Drupal sites. We pull in a Behat HTML formatter, which generates slightly prettier-looking reports than the standard Behat output, and we pull in PHPUnit, which we use for our various assertions. Down the bottom you'll see a bunch of scripts listed. They're there to start and stop the various web drivers we use: scripts for starting and stopping Selenium, scripts for starting and stopping PhantomJS, and a bootstrap script at the bottom which we use to aid our installation. I want to contrast the typical way you use Behat with the way you use it within this framework; there are some subtle differences. This would be a fairly standard approach. All the examples we're going to look at refer to an article content type, so imagine an article content type with a title, a body, some tags, topics, an image upload, and a save-and-publish button. This is the kind of way you might go about doing it.
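The slide itself isn't reproduced here, but a Composer file along the lines described might look something like this. The exact package names, versions, and script paths below are my assumptions based on commonly used packages, not the framework's actual file:

```json
{
    "require-dev": {
        "behat/behat": "~3.0.6",
        "drupal/drupal-extension": "~3.0",
        "emuse/behat-html-formatter": "dev-master",
        "phpunit/phpunit": "~4.0"
    },
    "scripts": {
        "post-install-cmd": [
            "sh scripts/bootstrap.sh"
        ]
    }
}
```

The `post-install-cmd` hook is one plausible way to wire the bootstrap script into the install, matching the "bootstrap script which we use to aid our installation" point above.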
So you have a feature file with your test, and a context file with the methods that do the things listed in the test. In the feature at the top, the scenario is called "Create an article": it goes to a certain path, fills in the title object with "Article Title", presses edit-submit, and then asserts that the article "Article Title" has been created in that region. Down the bottom is the function for filling in a field. OK, so that's the basic way you might go about it. The problem with this: the yellow arrows, on the same scenario from the previous slide, indicate all the objects within the scenario, and they're all hard-coded. And this is fine, it totally works, I've written lots of automation like this in the past, but the problem is that if these objects change, you have to come in here and update them. And you don't have just one scenario, you may have many, and you've got to update all of them; it's even worse if you've got many feature files with many tests each. Updating objects like this can be a real pain, so we wanted to get rid of that altogether. This is how we do it in the framework. This is the same scenario, again it's going to create an article, and it's very similar, but there are no objects hard-coded in here: every sentence is completely devoid of objects. So how do we go about doing this? We have created essentially a three-file approach. The standard Behat way is a feature file and a context; we've added a page class into that. So for an article content type, we'll have an article feature with all the scenarios, an article page class listing all our objects, and an article context that will actually perform the actions on the article.
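To make the contrast concrete, here's a hedged sketch of the two scenario styles. The field IDs, step wordings, and region names are my illustration, not the framework's actual feature files:

```gherkin
# Hard-coded style: Drupal field IDs live directly in the scenario,
# so a form change means editing every scenario that mentions them.
Scenario: Create an article (hard-coded objects)
  Given I am on "node/add/article"
  When I fill in "edit-title-0-value" with "Article Title"
  And I press "edit-submit"
  Then I should see "Article Title" in the "highlighted" region

# Framework style: plain sentences, devoid of objects. The article
# context resolves "the title field" via the article page class, so
# a form change means updating one selector in one place.
Scenario: Create an article (framework style)
  Given I am viewing the add article page
  When I fill in the title field
  And I save the article
  Then I should see the article has been created
```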
So the scenario dictates what we want to do in the test, and the article context is able to do it by pulling in the objects from the article page. We take this approach for any content type: for a blog post, we'd have a blog post feature, a blog post context, and a blog post page class, and so on and so forth for any content type. Looking at the page objects, this is how we structure them. This is a real example; it sits in the framework, one of the templates that's included. What we list in here is every object on the page that we plan to interact with in our tests. The first thing we include is the path to the content type, and then we list out all our objects and group them by type: an array of fields, an array of buttons, an array of frames, an array of regions, and so on. Then at the bottom we have a number of getters that return those objects to us, and they get returned to the context. This is a real sample from the article context included in the framework, and it performs all the actions on the page for us. So we've gotten rid of the single FeatureContext and replaced it with individual contexts per content type. And the way we've set it up, there's a one-to-one relationship: for every field that we have in our page object class, there's a single function that interacts with it. The benefit of these kinds of small, atomic functions that do one single thing is that you can then string them together in any order you want, which gives you a lot more freedom in terms of how you write your tests and what you want them to do. By way of an example, we've got the top function there: it's a private function within the article context, and it's solely responsible for filling in the title, nothing else.
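The page object template described above might be sketched like this. The class name, path, selectors, and getter names are illustrative assumptions, not the framework's actual template:

```php
<?php
// Sketch of a page object class in the style described: the path, then
// objects grouped by type, then getters that hand them to the context.
class ArticlePage
{
    // Path to the content type's add form.
    private $path = 'node/add/article';

    // Objects grouped by type, keyed by a friendly name.
    private $fields = [
        'title' => '#edit-title-0-value',
        'body'  => '#edit-body-0-value',
    ];

    private $buttons = [
        'save' => '#edit-submit',
    ];

    private $regions = [
        'messages' => '.messages--status',
    ];

    // Getters return the objects to the context, which is the only
    // place the tests ever touch a selector.
    public function getPath()       { return $this->path; }
    public function getField($name) { return $this->fields[$name]; }
    public function getButton($name){ return $this->buttons[$name]; }
    public function getRegion($name){ return $this->regions[$name]; }
}
```

Because every selector lives here and nowhere else, a form change on the site means updating one array entry rather than hunting through feature files.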
Down the bottom, we've got a public function which we can call directly from our feature file, and the lower part of that method just calls the fill-title-field function from above. It's a bit more work in terms of the number of methods you're creating, but once you've got a single method to interact with every single object on the page, it gives you a lot more freedom. The templates we've included are grouped into three: within the framework we've got template feature files, template page object classes, and template context classes. We've got them set up for the article content type, we've got one for login, and we've got one for roles and permissions, so some basic security testing. In the feature files, we're doing fairly standard content-type things: create, edit, delete, and view of the content types in various ways. We're exercising validation rules, so we're checking things like: if you omit a mandatory field, we make sure the right message is displayed in the right place; if you successfully save the content type, we verify the right message is displayed in the right place. That type of thing. We're also validating the page structure, making sure that every object that's meant to be there, of a certain type, is in fact present on that page. The page object classes, all the ones we include within the framework: first we set the path to the content type we want to work with, there's an array for every object type we have on the page, and there are getter functions to return those to us. And there are templates for the context classes that perform all the actions listed in the feature files. So once you've installed the framework, this is what you're going to see. If you've used Behat, this will look fairly similar. I really want to direct you to the yellow arrows.
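The private/public pairing described here might look like the following sketch. The step annotation, class names, and Mink calls are assumptions about how such a context is typically written against the Drupal extension, and it presumes a page class of the kind described on the previous slides:

```php
<?php
use Drupal\DrupalExtension\Context\RawDrupalContext;

// Sketch of a per-content-type context: one atomic private helper per
// page object, plus public step definitions that compose them.
class ArticleContext extends RawDrupalContext
{
    private $page;

    public function __construct()
    {
        // An ArticlePage page-object class of the kind just described.
        $this->page = new ArticlePage();
    }

    // Atomic private helper: fills the title field and nothing else.
    private function fillTitleField($title)
    {
        $field = $this->getSession()->getPage()
            ->find('css', $this->page->getField('title'));
        $field->setValue($title);
    }

    /**
     * @When I fill in the title field with :title
     */
    public function iFillInTheTitleFieldWith($title)
    {
        // The public step callable from a feature file simply delegates
        // to the atomic helper above.
        $this->fillTitleField($title);
    }
}
```

Because each helper does exactly one thing, scenarios can string them together in any order, which is the freedom the talk refers to.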
So these are the templates that are included: we've got the feature files sitting inside the features folder, src/Context has our context files, and src/Util has the page object classes. The idea going forward is that if you've got another content type you want to automate, the basic page or whatever it happens to be, you take the article feature, the article context, and the article page, make copies of them, and rename them to something meaningful for your content type. With any luck, simply updating the page class, where the objects sit, should mean you can run create, view, edit, delete, and validation tests across your content type. OK, so our helper context. When you install the framework, you're getting the Drupal extension, which brings in the Mink extension. That gives you lots and lots of features for testing your sites, but there are plenty of times when the functionality they give you isn't enough. That's the reason behind our helper context. This is something we started long before we worked on the framework; it's a pretty big context, and it includes lots of functions for doing things that the Drupal extension and Mink don't give you, such as verifying the assets on a page, selecting random values from drop-downs, creating unique values to generate unique content; lots and lots of things tucked in there. OK, so the script runner. We wanted to simplify the way tests are executed. What we're really doing here is harnessing the power Behat already has for running tests, but simplified down into a single script. The syntax for executing the script is the tag of the tests you want to run and the profile you want to run them with. So if you want to run all your article tests in Firefox, it's run-behat.sh, space, article, space, firefox. If you want to run them in Chrome, you substitute firefox with chrome.
If you want to run them on PhantomJS, substitute firefox with phantomjs. And if you want to run a specific scenario within your feature file, you can pass the title of the scenario on the command line, and that will execute that single test. So this is a simpler way to execute the tests, rather than having to type out the full behat command line with its tags and profile flags and whatever else; we've condensed it down into one script. We also have a bunch of other supporting scripts. We've got a bootstrap script that's run during installation. It does a couple of things: it downloads a Selenium server for us, renames it, and puts it in the right place; it creates the folder structure we need for Behat to run; and it copies all the template files that come with the framework from the vendor folder up into this new folder structure. The tests can be run using Selenium or PhantomJS, so we've got scripts in there for both starting and stopping those web drivers. Those are triggered automatically by the runner from the previous slide: when you say run on Firefox, the script knows to launch the Selenium server when the test initiates, and to turn it off when the test is done. Just a useful feature. OK, so reporting. We haven't done much here, but we've changed how and where we save the files. The framework is configured so that it will generate two reports for you: the standard Behat report, and a Twig-based one which is a prettier version, if that's what you want. And we've got a results folder in there where we house our results and reports. So, first of all, these are the two reports you can get; pick whichever one you like. That's the standard Behat report at the top, and that's the HTML-formatter version with the pie chart down the bottom.
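The real contents of run-behat.sh aren't shown in the talk, so here's a minimal sketch of what such a wrapper might look like. The option names and the dry-run echo are my assumptions; it builds the Behat command from a tag, a profile, and an optional scenario title rather than invoking Behat directly:

```shell
#!/bin/sh
# Hypothetical sketch of a run-behat.sh wrapper: first argument is the
# Behat tag, second is the browser profile, optional third is a scenario
# title. Echoes the command it would run instead of executing Behat.
build_behat_cmd() {
  tag="$1"
  profile="$2"
  scenario="$3"
  cmd="vendor/bin/behat --tags=$tag --profile=$profile"
  if [ -n "$scenario" ]; then
    # --name restricts the run to scenarios matching the given title.
    cmd="$cmd --name='$scenario'"
  fi
  echo "$cmd"
}

# "run all the article tests in Firefox" from the talk:
build_behat_cmd article firefox
```

A real version would exec the built command and also start and stop the matching web driver, as described above.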
It doesn't look so exciting there because everything passed, but if you've got some failures in there, there's green and red, and the pie chart visual representation of what happened is quite nice. So, reporting: this is the bit that we've tweaked. We've got a results folder sitting inside the framework that's set up and created by the installation process. The things I want to point out: under the results/behat folder, we have a history folder. The way the tests run, the first thing they do when you kick them off is archive the test results from the previous run, so you have that history there if you need to go back and see what happened last time. Beneath that are the screenshots folders. We're doing the standard thing of taking a screenshot when a test fails, but we've tweaked it a little so that when a test fails, we save the screenshot into a subdirectory named after the feature file it failed on. These are created as the tests are running. And we're doing a fairly standard thing with the file names: we date-time-stamp the image and then tag on the end the sentence from the scenario it failed on. The idea behind all this is to make it easier to trace a failed test back to a particular image. It's easy in this instance where there's one failure, but if you've got a bunch of failures, you're sometimes going to have to trace an image back to a certain failed test, and this is meant to aid that. The other thing we do at the same time as taking screenshots is take an HTML dump of the page, renamed in the same way, which can hopefully be useful for tracing problems. And then down the bottom, the standard Behat report and the Twig version of it. So, the configuration. Behat runs off a YAML file, which is included in the framework. It's set up to run with the login tests and the article tests.
But as shipped it won't work everywhere, because there are changeable values in the YAML that are relevant to a particular machine; as it is, you can't run it unmodified on different machines. So what we've done is extract those variable values from the Behat YAML. We've taken out the base URL and the Drupal root and put them into a local YAML file. The base URL is the URL of the site you're developing, which every developer can name anything they want, so we want that extracted out. The same goes for the Drupal root, where you've got Drupal installed on your machine, so that's taken out as well. When you set up the framework, it's a one-time step to fill in these two values, and at runtime this file is imported into the main YAML file. This allows the tests to be run on any machine. OK, so I'm going to try to do a demo. I'm going to demonstrate how you install the framework and how you get your first tests up and running. Cool, so this is the code base for the Drupal Show and Tell website that Cameron & Wilding put together. The first thing you have to do is decide where you want to install the framework. It can be installed anywhere, but for the purposes of this demo, I'm just going to pop it in this test folder right here. Right, so the next thing is to add a Composer file, and into that we need to add our requirements. Sorry, it's very confusing; this screen is not that screen. OK, so while I'm here, this is our GitHub page, which is massive; I apologise, I don't know how to make that smaller. I just want to point out that we've got a guide at the top of the readme. The readme is really long, so we've broken it up into different sections and we've got a table of contents at the beginning.
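The split between the committed behat.yml and the machine-specific local file described a moment ago could be sketched roughly as follows. The exact key names and file layout are my assumptions; Behat's `imports` key merges one YAML file into another at runtime, which matches the mechanism described:

```yaml
# behat.yml (committed to the repository): imports the machine-specific
# file at runtime, so the shared file never changes between machines.
imports:
  - local.yml
default:
  suites:
    default:
      paths: [ features ]

# --- local.yml (one per machine, git-ignored): only the two variable
# --- values that a developer fills in once during setup.
default:
  extensions:
    Behat\MinkExtension:
      base_url: http://mysite.localhost
    Drupal\DrupalExtension:
      drupal:
        drupal_root: /path/to/drupal
```

Both fragments are shown in one listing for brevity; in practice they'd be two separate files, with only local.yml differing per machine.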
So if you are the first person on a project to install this framework, you follow the initial setup; if the framework's already in use and you want to get your own computer set up, you follow the onboarding-to-a-project section. There's also a section about how you go about updating the framework, some information about how you execute the tests and where the test results are stored, and, down the bottom, how you write tests. So on here we've got the Composer file, so, back here, hopefully the internet will hold up. Cool, so that'll take a minute. Whilst that's installing, I want to describe how we use this in-house and the process around it. Imagine you're developer number one and you've made some changes to your code base. You then write some Behat tests to verify the code change you made. You're satisfied with that, so you create a PR, it goes through your PR process, and ultimately gets committed to your code base. What we commit to the code base as part of the framework is controlled by an ignore file within the framework: we only push up the feature files, the context files, and the page object classes, along with your configuration. We don't bother saving reports or screenshots or anything like that; we save to the repository only what the next person will need. So along comes developer number two: they pull the repository, they get the latest code changes plus the Behat tests, and they're free to add to them or run them, and the process repeats. So that's the first step, the Composer install. Now there's a second step, running our bootstrap script. The bootstrap script is going to download the Selenium server, put it in the right place, and create the right folder structure for us. Cool, that's done. Right, so with any luck, that's it installed.
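The bootstrap step just described (folder structure plus template copy) might look something like this sketch. All paths and the vendor package name are illustrative assumptions, and the Selenium download is left out to keep the sketch self-contained:

```shell
#!/bin/sh
# Hypothetical sketch of the bootstrap script: create the folder
# structure Behat needs and copy the bundled templates out of the
# vendor folder. Paths are illustrative, not the real layout.
BEHAT_DIR="${BEHAT_DIR:-behat}"

mkdir -p "$BEHAT_DIR/features" \
         "$BEHAT_DIR/src/Context" \
         "$BEHAT_DIR/src/Util" \
         "$BEHAT_DIR/results/behat/history" \
         "$BEHAT_DIR/results/behat/screenshots"

# Copy the template feature/context/page files shipped with the
# framework, if a Composer install has put them in place.
TEMPLATES="vendor/example/behat-framework/templates"
if [ -d "$TEMPLATES" ]; then
  cp -R "$TEMPLATES"/. "$BEHAT_DIR/"
fi

# The real script also downloads and renames a Selenium server jar;
# omitted here so the sketch works offline.
echo "bootstrap: folder structure created under $BEHAT_DIR"
```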
So this is what you'll see when you first go in. If we have a look in here, we can see the feature files that come with the framework. We've got our YAML and our local YAML. We've got a results folder created for us, ready to store the screenshots when things invariably go wrong. We've got Selenium stored in there, and we've got our contexts and page classes stored in here. Cool, so the next step is to set up your local YAML; this is local to your own machine. I'm going to run this against the Show and Tell site. OK, so that's the one-time setup. Now, with any luck, we can come out here, go into the Behat directory, and run some tests. We're just going to use the runner that's included with the framework, and say: run all the login tests on Firefox. I think in this particular feature there are maybe seven or eight tests. It's going to rattle through them headless, so it should be quite quick. So there it comes; that's it. That's what you're going to get out of the box. We can do the same thing with the article feature and run all the tests that sit in there. We've got lots: we're doing things like validation of various rules in create mode and edit mode; we're creating content, editing, deleting, viewing. So these are just some basic tests to get you started, and then some role-based viewing tests on the end. Cool, so that's what you're going to get. The idea is that going forward, you can take the article samples that are included, copy them, and use them across any content type. Hopefully, with the way they're structured and written, the amount of rework you have to do to get them running on another content type should be fairly minimal, certainly for create and edit. You will of course have to automate your own business flows through your site, but in terms of verifying content types in isolation, this will do that for you. So, we didn't get here quickly.
Incorporating feedback was a very important part of getting this framework to where it is now. Our initial framework attempt was a behemoth; it had so much in it, it was ridiculous. We had not only Behat, but JMeter in there for performance testing, we had site speed tests, we had multiple Behat extensions, we were doing REST API testing and visual diff testing. It was getting too big and too out of hand, so we slimmed it right down to pure Behat, nothing else. Behat knowledge is a prerequisite. When we first got the framework to where it is now, we gave it to people and said, use it, and it failed immediately, because they had never used Behat before. As a result, we went through lots of training, and from that training we came up with the readme that's on the GitHub page. It's hopefully detailed enough to get you started with Behat, and there are lots of links within it to the main Behat site, behat.org. Our initial Composer setup had a vendor folder within a vendor folder, which, as someone showed me, is not ideal; so we got some dev input and ended up with a much more solid, simpler Composer installation. And our reporting: we got feedback from one of the devs, which we've acted on. That's where we came up with the subfolders for images and for HTML dumps, and the renaming of the files we save, just to make it faster to relate a failure to a certain point in the report or to a certain image. So this is my last slide; a couple of things to say. The framework is fully operational: we're using it on multiple client projects in-house at the moment, and there is definite benefit there. But having said that, it's still a work in progress; there are other things we want to include. I think the two at the top of the list are, first, the recording of tests.
The idea being that we'd record every test: if it passes, we discard the recording; if it fails, we save the recording, in much the same way we do with screenshots. That's one idea. Another thing we'd like to include is some kind of countdown timer. Some of the suites we run take ten minutes, twenty minutes, maybe longer, and if you're just looking at the console, there's no indication of how much time or how many tests are left. So we might include some kind of tests-left-to-run figure, or how many tests have already run, to give you an indication of how much longer there is to go. And I guess my final point is that I'd like other people to use it, so I'm encouraging you to try it out. Try it on your project and see if it works for you. If it doesn't, raise issues in GitHub and get in touch with us. If you're already using Behat, it's a fairly simple process to port those tests over into it. I have done that; it took some time, but it was simple enough to do. And that's it, so please try it out. Thank you very much for listening. [Audience: will you keep this project maintained, or is it a snapshot that's going to disappear?] No, no, no. This is something we've recently touched; we've just started using it on two or three projects in a row, so it's something we're going to keep working on. For example, I think a release candidate of Behat 3.1 is coming out this weekend, so we'll be updating it for that, and so on and so forth. [Audience: just to follow up, you mentioned REST very briefly; have you got any solution for REST testing, or is that completely out of scope?] Yeah, there's a REST extension for Behat; I don't remember the exact name, but you can find it.
I think you can pop that into the Composer file, and once you've got it locally you can add it, so that may be something we include. That's the cool thing about Behat. [Audience: do business users actually write these scenarios?] That's a great point. The idea of extracting the objects out of the scenario is to make it more readable for people, but in practice it's the devs and testers who are writing our tests; ideally, we'd get clients to. Do I have experience where PMs or clients are writing the tests? I've had some, but it's not something I've encountered very much in the real world. Ideally, someone would raise a story in Jira and you'd be copying and pasting it and then automating it, but I've never really seen that happen. At a previous job, a PM did write some tests, but he'd written them at such a high level, with so little understanding of what was there, that it took ages to implement them. It's a lovely idea, and I would really encourage people at that level to provide input, but I remain unconvinced that it works. [Audience: how long did it take to build?] It's difficult to say; it's been a time-filler in between other things to get it up and running. From the first version of the framework last year to this particular version was, I guess, a few weeks, and then a couple more weeks of people trying it out and giving us feedback about what didn't work. So maybe you'd put it together in a couple of months or so. [Audience: does it sit in the same repository as the site?] Yes, this is a Drupal code base and everything's going to sit inside there, so the tests and the site code sit within the one repository. [Audience: can it do visual regression testing?] This framework won't do that at the moment, but there's a Behat extension for perceptual diffs you can find on GitHub, and that will do it for you.
So the first question was: what's the benefit? I think there are a couple. There's the one we talked about earlier: the scenarios, the way they're written now, are business-readable and devoid of objects. Some other advantages: if you do a plain Behat installation and then run behat --init to initialise it, you get nothing; you get a folder structure with nothing in it apart from an empty FeatureContext file. You don't get any scenarios, you don't even get a sample feature; you get nothing. So the idea of plain Behat versus this framework is that instead of starting back here with Behat, you're starting further on with the framework. That's really the main thing about it: it's easy to deploy, and you've got a head start. [Audience: is this the final version?] No. This version, 1.2.9, is stable and it works, and we're using it in-house, but there are other things we want to add, so it will hopefully get bigger and better. [Audience: what else does it simplify?] Again, there's the runner. Behat on the command line is pretty easy to execute, with whatever parameters you want to give it, but we've tried to trim that down a little, to make what you have to type shorter and easier. And the idea of the page object classes, which is not necessarily a typical way to use Behat, is that it aids your test scripting, because there's so much more you can do with it. With your objects listed out in a dedicated class, when something changes on your screen, where to fix it is very obvious. And there's the freedom to write your scenarios against any particular object. I've been doing Behat the fairly standard way for years, but I think this is cleaner.
I think it's more dev-focused, in terms of its structure. Does that answer your question? Cool. Anyone else? [Audience: are you using this with any kind of continuous integration?] We haven't yet; on one project we're actually about to do that next week, so we're configuring it with Jenkins, but it should be fine. I've used Behat with Jenkins before. The typical thing I do is email links out when a build fails; that's the standard thing I've been doing, and I'm sure there are other things you could do. [Audience: how do you see it developing in future?] Well, the hardest part is that the framework has to be reusable enough to work on any site, and as soon as you start adding site-specific stuff, it stops working across many sites. That's actually how it came about: we had lots of tests for the Show and Tell website, and we just extracted all the stuff that was Show-and-Tell-specific, so we were left with reusable tests. So the idea of adding more to it in terms of templates or tests is quite tricky, because every Drupal site's different. But in terms of the extensions we might include, yes, we could add more extensions into there. [Audience: could you at least show the scenarios to your business users?] I hear what you're saying: even if you can't get your business users to write the stories with all the necessary information, you can get them to read them, whether they're on GitHub or in some scary kind of plain text document or something. I've never really been in that position where I've had to, you know. [Inaudible audience comment.] Yeah, it's a little bit like that. Any more questions? Yes. [Audience: do you see contrib modules shipping with tests like this? Would that be a good idea?] I think it's a great idea.
It depends on your use case, I guess. This is a standalone project, really; we designed it to be standalone, independent of which modules or which Drupal version you're running. [Audience: is it filling a gap in the Drupal ecosystem? Do you see this sort of thing standardising?] I don't know yet. My background is not Drupal; I've been doing automation for years on other things, and this is just one way of getting Drupal automated, but I'm obviously not the authority there. [Audience: on visual comparison.] Again, you'd need an extension or something like that to do the comparison, but then you can, yes, and it's probably easier. [Audience: on testing across devices.] That's probably something we're going to include; I find I'm doing that more and more, and you can hook it up to, well, we've done it with Sauce Labs in the past, so you can get various devices that way. That's something we may add. [Audience: what about JavaScript that detects the browser to change the site?] Yeah, it's not nice, but it is possible in the context that we have. OK, thank you very much. I think it's time for lunch. Thank you very much.