Kia ora, Johnson. Kia ora, Michael. Kia ora. I'm Kent, also known as Kent Parker to those of you who knew me before 2015, when I changed my surname.

So, who here has written tests for Drupal using PHPUnit? Good, yep. And who has intended to but never got round to it? That's good too, and maybe why you're here.

Mostly, the pressure to write tests is on individual modules, testing the functionality of the module itself in isolation. That's what most Drupal tests do. But today we're going to look at writing tests for a site build involving multiple modules, contributed and custom. We've got a site specification with complex access permission rules, and to the client these rules are important and must not be violated, because privacy is involved, and of course money.

A quick overview of the site. It consists of nodes with view and edit access, and of web forms, also with access rules. We're using "view all", "update all", "view only my", "update only my" and "administer submissions", so quite a few of those. Users can apply to other users for permission to access content, so we have requests for access: pending requests, approved requests and denied requests. The site has a number of custom modules, contrib modules and site config, and we want to do user logins, so this is browser testing.

We're in a movie theatre, so I thought we'd have a few movies. The first one is a demonstration of a user requesting access to edit this node. They click on "request access", we confirm it, of course, and then we can see that the request is pending.
Now we switch to the data contributor, the data set owner. We view the node, go to the request tab, and there's the request. In this case we're going to accept it (we could decline it, but here we accept), and it disappears from the screen. We return to the requesting user, see that the request has been approved, and now we can edit that node. Mission accomplished.

In a similar vein, we can request access to permissions on a web form. We choose our web form, select any number of the five options, save it, and the request is pending. Now we go to the data set owner, go to the request tab, the request is there and we accept it. We can see the operations, and we go back to the user who applied: we can view submissions, and we now own this particular web form in our tab. So again, mission accomplished.

So that was the brief. As you can imagine, manually testing all that access functionality would be rather tedious. And if, as in the case of this site build, having access corresponds with paying money, you can see how quickly it becomes important for these access rules to work correctly. As is pretty much standard with any client, they will ask for alterations to this code at any time, requiring you to repeatedly test the same complex set of rules. This is where automated testing becomes not only cost-effective but also life-saving, or should I say client-saving, because a screw-up in the access rules deployed to production could be costly. In the case of this site build it was pretty much test-driven development: at every step of the way I created a test to ensure that my functionality worked, but also because we had no tester.

To create these tests, the following are required. To some extent this is a revision of what Michael spoke about, but first we create our PHPUnit config.
We write a test base class with some traits, and test classes. We add the required config and include the required modules. Then we write browser tests and run them using ChromeDriver. We also need to create configuration schema, and we can view browser output as we debug the tests. To ensure our functionality is tested during development, we can set up our browser tests in CI/CD. And finally, we need to maintain our tests.

Drupal.org has very good documentation on setting up PHPUnit browser tests. For our phpunit.xml we write two versions in two different locations: one for the local install and one for the test build, and we git-ignore the local one. The key settings for your phpunit.xml are the base URL, the database connection and the browser output directory.

So let's write the basis for these tests. We usually use Functional or FunctionalJavascript tests. We start by creating a test base class that provides set-up for any of the tests run on the site. In it we create our test users, in this case the data contributor and the data owner, and we set the theme for the tests. Then we create traits for commonly used functions, such as creating nodes or web forms. We clearly organise and articulate each test we need in order to properly test our functionality, and then we write a test class function for each of those.

To help us prepare and organise, here is an overview of the things we need to test for the site: access to the dashboard, which was those views you saw in the demonstration; that allowed users can view the node, can edit the node, can view an unpublished node; and, in this case, the requirement was to be able to view the latest revision even if unpublished. Then we need to test the opposites: that people without permission cannot edit the nodes, cannot view unpublished nodes, cannot view latest revisions.
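As a rough illustration, those three key settings are supplied to Drupal's browser tests as environment variables in phpunit.xml. This is a minimal sketch, not the site's actual file; the URL, credentials and paths are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="web/core/tests/bootstrap.php" colors="true">
  <php>
    <!-- Base URL the browser tests request pages from. -->
    <env name="SIMPLETEST_BASE_URL" value="http://localhost:8080"/>
    <!-- Database each test installs a throwaway Drupal into (placeholder credentials). -->
    <env name="SIMPLETEST_DB" value="mysql://user:pass@127.0.0.1/testdb"/>
    <!-- Directory where HTML snapshots of every visited page are written. -->
    <env name="BROWSERTEST_OUTPUT_DIRECTORY" value="/tmp/browser_output"/>
  </php>
</phpunit>
```

The git-ignored local copy would differ only in these values, pointing at your local site and database.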
And so on and so forth: we look at each of the requirements and start a test for each.

Now a quick overview in full cinematic detail, just scrolling through. This is our base class. Then we've got a test class that extends that base class, and some traits over here. This is our main DZ data sets module. We're using four custom modules here: one for the data sets (the nodes), one for the web forms, one for the application process, and another for web form config. And some traits: one that creates a node request, one that creates a web form request, traits for testing web form access and config, and a final trait.

The main test functions we're using are simply drupalLogin(), navigating to a page with drupalGet(), and testing access via the status code: assertSession()->statusCodeEquals() with 403, 404 or 200. We can also test by the presence or absence of specific text strings on the page. Note that the assertSession() checks come down to a boolean: if the outcome is true, the test passes; if false, it fails. So we need to write our tests so that they always pass, so that the outcome is always true. And we log out again. Obviously, if you have custom functions, they reside in your base class or a trait.

Looking quickly at a test function: this is one we've used, a test for a successful request for access to a web form. First, the data contributor user logs in and creates a web form. After logging this user out, we log in as the data owner user and check that we have access to the dashboard, and that there is text on the page saying "request". We could of course do any number of different checks to satisfy ourselves that we're in the right place. Next, we run a function to make a request on the web form we've just created, and we log out.
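The login/get/assert pattern can be sketched roughly as below. This is a framework-dependent sketch against Drupal's BrowserTestBase API, not the site's actual code; the module, class, user and content-type names are assumptions for illustration, and the base class and test class are shown in one listing purely for brevity:

```php
<?php

namespace Drupal\Tests\dz_datasets\Functional;

use Drupal\Tests\BrowserTestBase;

/**
 * Sketch of a shared base class for the site's browser tests.
 */
abstract class DzTestBase extends BrowserTestBase {

  protected static $modules = ['node'];

  protected $defaultTheme = 'stark';

  /** @var \Drupal\user\UserInterface */
  protected $dataContributor;

  /** @var \Drupal\user\UserInterface */
  protected $dataOwner;

  protected function setUp(): void {
    parent::setUp();
    // The two users every test logs in and out as.
    $this->dataContributor = $this->drupalCreateUser([], 'data_contributor');
    $this->dataOwner = $this->drupalCreateUser([], 'data_owner');
  }

}

/**
 * One test function per requirement: log in, fetch a page, assert the status.
 */
class DzNodeAccessTest extends DzTestBase {

  public function testUserWithoutPermissionCannotEdit(): void {
    $this->drupalCreateContentType(['type' => 'data_set']);
    $node = $this->drupalCreateNode(['type' => 'data_set']);

    // Without an approved request, the edit page must be denied.
    $this->drupalLogin($this->dataOwner);
    $this->drupalGet('node/' . $node->id() . '/edit');
    $this->assertSession()->statusCodeEquals(403);
    $this->drupalLogout();
  }

}
```

Each further requirement (view unpublished, view latest revision, and their opposites) becomes another small test function following the same shape.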
This function defaults to giving us administrative access to the web form. Then we log in as the data contributor, navigate to the request page for that web form, check that we're in the right place, and click on "accept request". Since we know there is only one request for the web form, we can be sure this is the right move to make, and we can always check by looking at the browser output. Then we log out, log in again as the data owner, and check that we've now got administrative access.

While we're creating this test, we need to ensure we've got the required config and include the required modules. Whatever modules and config we include in the test is what is available to us when the test runs; if we don't specifically include something, it will error out. The automated tests begin with a basic install of Drupal only, and everything else we have to add. So when you go to a web page in a test, any config needed for that page must be included in the config/install directory of a module that gets loaded with it. We need to exclude any config supplied by the basic Drupal install, such as the anonymous and authenticated roles; if you include those, again, it errors out.

So the config we supply would include the following. Quickly back to our cinematic representation: this is just scrolling through the config for our data sets. The module loads the web form access module, and you can see the config that DZ data sets includes with it in its config/install directory. If we look at the info file, these are all the modules we're loading, and it was a learning process in terms of which modules to load; we'll come on to that. In a similar vein, the required config for web forms: with web form access we load the DZ web form access module, which in turn loads DZ data sets, and we have some functional tests there. Just quickly scrolling: they load DZ apply access, so DZ apply access gets loaded.
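To illustrate the mechanism: a module's .info.yml dependency list is what pulls the other modules (and everything in their own config/install directories) into the test build. This is a hypothetical fragment, with assumed module names, not the project's real file:

```yaml
# Hypothetical dz_datasets.info.yml.
name: DZ Data Sets
type: module
core_version_requirement: ^10
dependencies:
  # Core modules the dashboard views and fields need.
  - drupal:node
  - drupal:views
  # Contrib and custom modules chained in; each brings its own config/install.
  - webform:webform
  - dz_webform_access:dz_webform_access
```

Any YAML placed in the module's config/install directory (fields, views, roles, and so on) is imported when that module is installed during the test, which is why missing items there surface as errors at test time.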
So if you look at the config/install directory, this is the config we include, and if we look at the info file, these are the modules we're including.

Right: writing tests — we come to the money part — and running them using ChromeDriver. ChromeDriver is pretty easy to use, really: just install it and run it with that command. There's an excellent write-up at that location. We run our tests with this command line, which you've just seen in Michael's presentation: it's just calling PHPUnit, where -c points to our custom phpunit.xml, and then we point to, in this case, the parent custom module, knowing that any tests nested within it also get run.

As you build your tests, there will be many times when there's a missing config, which will require you to add another config. Many configs depend on other configs, and you have to ensure those configs are present for your test to run successfully. If you persist with this process, your tests will eventually run, once you have all the configs you need and they appear in the appropriate order. Often you have to ensure that a config appears at the right point in the sequence of module loading so that, for instance, the fields required by a view are loaded either at the same time as or before the view itself.

So let's have a quick look at running a test with missing config. We're reverse-engineering this: we take the field configs out of the config/install directory and cast them aside into a temporary directory. Then we run our tests. Oh no, it's the red letters of death: we're erroring out here, and they've failed. You can see they've all failed, and all for the same reason: we're missing the node fields for date published and date modified, the two fields we just removed. Now we look at running a test with a missing module. In this case, again reverse-engineering, we're going to remove a module's config from our config/install directory and run the tests.
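For context, the two commands involved look something like this. The port is ChromeDriver's default and the paths are placeholders, not the project's actual layout:

```shell
# Start ChromeDriver so FunctionalJavascript tests have a browser to drive.
chromedriver --port=9515 &

# Run every test nested under the parent custom module,
# using the custom phpunit.xml via -c.
./vendor/bin/phpunit -c phpunit.xml web/modules/custom/dz_datasets
```

Pointing PHPUnit at a directory rather than a single class file is what makes it discover and run all the nested test classes in one go.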
Obviously this is sped up; the tests take much longer than this to run. And we've got our big red letters of death, and we need to run it to the end in order to see the error messages. Again we can see that all our tests have errored out for the same reason: we're missing a configuration object, which includes the DZ web form config, but also others, because they're presumably contained within it.

The next thing is configuration schema. Drupal uses a Kwalify-inspired schema metadata language for configuration YAML files. While building and running these tests, just as with the config in the modules, you're constantly told of missing schema, and creating configuration schema is not as straightforward as adding the missing config or modules. I'll show you an example: here's a configuration schema, a custom one, dz_apply_access.type with a wildcard, and we have six fields in it. How many people are familiar with these configuration schemas? OK, that's good.

So now we're going to reverse-engineer a schema. We've taken out that particular schema, and we can see we're not passing. Ah, fair enough: no schema for dz_apply_access.type for simple data sets. So we have to work out what on earth to do in response to the error message, and we need to create a configuration schema. Next we look at the test with an incomplete schema. In this case we've got only the first part of the schema we showed before. We run the tests: same outcome, despite trying something different, and now we've got a whole bunch of output for the missing schema fields — description, help, simple data set, display submitted and preview mode. So we can identify each of the missing fields, and then, learning about configuration schema, we can assemble our schema file like that.

OK, now running the test locally, in full cinematographic detail. Oh, nice.
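A schema of that shape might look like the fragment below. This is a hypothetical reconstruction, and the module name, key names and types are assumptions based on the fields mentioned, not the actual file:

```yaml
# Hypothetical config/schema/dz_apply_access.schema.yml.
# The wildcard makes one schema entry cover every dz_apply_access.type.* object.
dz_apply_access.type.*:
  type: config_entity
  label: 'Apply access type'
  mapping:
    id:
      type: string
      label: 'ID'
    label:
      type: label
      label: 'Label'
    description:
      type: text
      label: 'Description'
    help:
      type: text
      label: 'Help text'
    display_submitted:
      type: boolean
      label: 'Display submitted'
    preview_mode:
      type: integer
      label: 'Preview mode'
```

Each "missing schema" error names a key, and you keep adding entries to the mapping until the run is clean.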
We've got friendly little dots now instead of those red letters of death, which is looking positive. When we run the tests for this particular site, the whole thing takes about 20 minutes and includes 21 tests with 426 assertions. Obviously, while the tests are running you're not going to sit staring at the screen like this (this is sped up about 2,000%); you'll be doing something else and coming back to it later. So, happy days. We can see here the location of all our browser output, and again we can view some of it. Ours isn't as nicely themed as Michael's was, but it's got previous/next links we can flick through, and we can see, in this case, that we've made an application to access a web form, and we're moving between the different users: logging in, checking access, logging out, and so on and so forth.

Debugging the tests: when running a test, there's no log output unless you create it. Here are a couple of options; one uses the devel module, and in both cases you include the code in your test base class and put the debug output calls in your test script.

Moving on to using CI/CD. To really make these tests worthwhile, they should run whenever we deploy. That means if we inadvertently make a change that breaks any of the functionality, it will be picked up. In this real-world case we're using CircleCI, but obviously scripts will vary depending on what system you're using and, as we found, on versions as well. Essentially we need to set up a web server (here it's Ubuntu) and a MySQL/MariaDB database, and configure the server to work. We also found we had to fiddle with user group memberships to make it work, and as a bonus we run some PHP CodeSniffer tests. We need to set up permissions on our output directories to avoid fatal errors, and finally enable Selenium as our browser simulator. And then, well, we can run them on build.
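The pieces just listed — a PHP environment, a MariaDB service, output-directory permissions, the bonus sniffer run, and the test run itself — might be wired together in CircleCI along these lines. This is a sketch under assumed image tags and paths, not the project's actual pipeline:

```yaml
# Hypothetical .circleci/config.yml sketch.
version: 2.1
jobs:
  browser-tests:
    docker:
      - image: cimg/php:8.2-browsers   # PHP plus browser tooling
      - image: cimg/mariadb:10.11      # database service for SIMPLETEST_DB
    steps:
      - checkout
      - run: composer install
      # Output directories must be writable or the tests fatal out.
      - run: mkdir -p /tmp/browser_output && chmod 777 /tmp/browser_output
      # Bonus coding-standards check.
      - run: vendor/bin/phpcs --standard=Drupal web/modules/custom
      # The browser tests themselves.
      - run: vendor/bin/phpunit -c phpunit.xml web/modules/custom
workflows:
  test:
    jobs:
      - browser-tests
```

The real pipeline would also need the web server configured and Selenium or ChromeDriver started before the PHPUnit step.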
After many test failures and much debugging, we got the tests to work reliably on our development build. The tests in this case, which I haven't even got to yet, run for 17 minutes; the whole build takes about half an hour. Note that it's a good idea to run this each time code is pushed to the development branch, before merging, rather than at any time after that. This ensures we don't spend time on code that fails our fundamental tests. Further, there's no need to run these tests at any later stage, because once they've passed, well, they've passed. Here we can see we're in the unit tests; I think this is sped up at least 1,500%, and the output is similar to what you get locally. And there we go: we can now merge.

OK, finally, maintaining your tests. Obviously, if you make changes to the site, sometimes you have to change the assertions you test with, and yes, when you run your tests you'll find out all about that. The other side is keeping your config changes up to date: that's not something you'll be alerted to, so you could be testing stale config. To overcome that, I wrote a simple module that provides a function, dz_test_get_module_config(), which implements a hook and updates all our relevant config. In each of your custom modules you implement this hook by identifying each of the config items in that module's config/install directory, so that when you run the function, those files are copied from your site's config directory into the config/install directory of that module. And there you have it: you have a site that tests itself. Your questions?
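The refresh mechanism just described could be sketched as below. This is a reconstruction, not the actual module: the hook name, function names, config names and the sync-directory path are all assumptions for illustration:

```php
<?php

/**
 * Implements the (hypothetical) hook_dz_test_config() for my_module.
 *
 * Each custom module lists the config items that live in its
 * config/install directory so they can be refreshed from the
 * site's exported config.
 */
function my_module_dz_test_config(): array {
  return [
    'field.field.node.data_set.field_date_published',
    'field.field.node.data_set.field_date_modified',
    'views.view.dz_requests_dashboard',
  ];
}

/**
 * Copies current exported config into each implementing module's
 * config/install directory, so tests never run against stale config.
 */
function dz_test_get_module_config(): void {
  // Assumed location of the site's exported config.
  $sync_dir = \Drupal::root() . '/../config/sync';
  \Drupal::moduleHandler()->invokeAllWith(
    'dz_test_config',
    function (callable $hook, string $module) use ($sync_dir): void {
      $target = \Drupal::root() . '/'
        . \Drupal::service('extension.list.module')->getPath($module)
        . '/config/install';
      foreach ($hook() as $name) {
        @copy("$sync_dir/$name.yml", "$target/$name.yml");
      }
    }
  );
}
```

Run once before committing, it keeps the test build's config in step with the live site's exported config.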
Kent: Well, what I was testing is a fairly limited kind of access rules, and a lot of that is just copy and paste. Obviously I could take a page out of Michael's book and write lots of traits; it could be refined a lot. There are different methods you can use, but I'm just using textbook Drupal PHPUnit testing here. I've used Behat in the past, but I wanted to see what Drupal could do with this. I actually think Drupal itself uses Behat, does it not? I haven't really investigated, but as you saw in Michael's presentation, if you were here, Behat's included, isn't it? Yeah, or something like that.

Right, anyway, the purpose of the presentation was perhaps to inspire people to use testing in their site builds. In this particular case we didn't have a tester, so it was worthwhile me spending the time to create the tests so that the production manager, the owner and I knew, or were confident, that the site did what we wanted it to do.

Audience: So you had, I can't remember quite the numbers, but somewhere in the vicinity of how many tests?

Kent: 21 tests, and I think it was 400-odd assertions.

Audience: And that took how long to run?
Kent: 17 minutes.

Audience: 17 minutes, OK. Not several hours, then.

Kent: I think it was 17 minutes on the server and 20 minutes on my local. I'd have to admit that when I tried running it on a laptop with 16 gigabytes of memory it wouldn't take it, so 32 gigabytes was the minimum.

Audience: My question was, and I think you've answered it a little bit: you should perhaps be opinionated about when you choose to run them. When tests start taking a period of time, they get to a point where they slow down your change-control frequency, because you have to wait for the tests to run before you can deploy your changes and get those things out, and that can start to impact the business. I remember a particular site, I won't name the customer, but they had a test suite so large that it took something like six hours to pass. It was all built around Behat and the like, and they really invested in the idea that functional testing was a really important part: they shouldn't release without all those tests passing. Then they realised they couldn't move as fast as they wanted because of how long it took, so they became more opinionated about when they chose to run the tests, with more governance and policy around change control and what they decided to deliver, which led to other slowdowns in the business as a result. Not to mention the long-term impact of trying to get change through the barrier of six hours of testing: if something broke, God forbid, you would not know how you broke it, or which piece of code did it, or whether it was the test or the code at fault, and all those sorts of things. So do you have a sense of how much testing is too much testing? Is performance a factor in that, or is it some other aspect?
Kent: In our case here, the tests were only run when the push had a particular prefix, so if you were pushing through something that had nothing whatsoever to do with the functionality of the site, you could skip the tests. You could obviously divide the site into areas of functionality and target one bunch of tests at each, and as long as your developers were well disciplined, ensuring you only run the tests that apply to a given functionality change, that would be a way around it. Of course, if somebody uses the wrong prefix and makes a change to functionality that then doesn't get tested... but that's one way of minimising it.

Audience: We also saw the example where you're handing this to a pipeline, and the thing missing here is that when you run in a pipeline, GitLab or something, you don't have a working website. You can probably run your site on a server you create in the pipeline, but, for example, which database are you using to test a specific test script inside the pipeline? I don't know if that makes sense.

Kent: Well, a lot of the contents of the database are created through all the config and the modules that you're adding, and then of course the web forms and nodes that you add as part of your test. That was the point about starting from nothing: you've got to build the whole lot, you've got to add all the config, so your database is being filled with all that config and those modules, and you're adding nodes, and obviously over 21 tests you end up adding quite a few nodes and quite a few web forms.

Audience: So essentially you're testing on a vanilla version of your site, not the...

Kent: Yeah, it's a site build that you're testing on.

Audience: It's like a simple test website: vanilla Drupal with that specific module, to test out the functionality. Essentially you do that in the pipeline, and then you have to build all the configuration to achieve that particular
test. For example, a content type with many fields: all of that needs to be config-imported and created in order to run that specific test.

Kent: That's right. You saw how those config/install directories are full of config; well, it all gets loaded with the module and the tests.

Audience: Yeah, and this is in addition: normally you run your test script on a working site where the configuration is already there, and you just run your test script. But in the case of a pipeline you need to get everything ready to have the same test running, and this is an extra, fairly big thing to do before you're ready to run the tests in the pipeline.

Kent: Well, that's all done when the test runs, and it does the same on the local as well. If you go to the basic instructions for creating browser tests, you extend a test base class, BrowserTestBase, which includes all of that, and it builds everything you need: as much of the site build as is needed for those tests. In this case that's probably 80% of the site. As you build your test, using, say, one of those dashboard pages, like the requests page for instance: if you just do a drupalGet() test on that page, you'll come out with a whole lot of errors — we're missing this field, we're missing this module. So you say, OK, I'll add that field, I'll add that module, and you run it again, and it says it's missing this other field, and slowly you add the config and the modules. You saw the great piles of config and modules, and eventually the test runs, the requests page loads with no errors, nice friendly dots, and you've got some browser output. So you'd start with just one test. Obviously I was trying to reverse-engineer lots of tests here, but you just start with one test and build that page, just a view page, which might require adding 12 modules and 10 config items just to get the test to drupalGet() that page. So you write the test and then slowly build it up based on the error messages you get, until there are no error messages and you can see the page in the browser output.