Great. So my name is Dylan Kelly. I'm a front-end developer at the Department of Premier and Cabinet with the Victorian State Government. Today I want to talk a little bit about the Single Digital Presence project that DPC is undertaking, and the journey we've had trying to implement a test-driven development practice, and some of the challenges in doing that with a decoupled architecture.

So the SDP project is made up of three parts, because it's a decoupled site. We've got our content repository, Tide, which is a Drupal 8 distribution. We've got a decoupled front end, Ripple, which is a JavaScript application built on Nuxt.js using Vue components. And finally we've got Bay, our infrastructure layer, which uses amazee.io's Lagoon.

Our main site is vic.gov.au. If you saw Emma's talk yesterday, it's a massive site: over 3,000 pages, with lots of content. But we also have an increasing number of sites that are semi-independent, so different front ends connecting to the same back end, and also fully independent, so different back ends and front ends. So making sure we don't break anything when we deploy new releases is increasingly important.

Presently, when we deploy a new release across both Ripple and Tide, manually testing all the features we have, setting up content, and verifying everything works as it should takes approximately two or three days, and lots of late nights for our delivery manager here. We currently have about 13 sites getting regular updates, and our roadmap is to scale that up significantly. So we really have to do something here: we either need to hire lots more testers, or we need to work out how to automate this.

So out of curiosity, of the devs here, who does test-driven development, who writes tests before the code? Yeah. And who has never written tests before? There's a few people. And who's never had anything break in production before? Kurt, yeah. Kurt's the only one. Yeah, you would say that.

So why do we do test automation? We do it so we don't make the same mistake twice: once we've written a test for something, hopefully we don't repeat that mistake. It allows us to catch issues earlier in the development cycle, so we don't have to wait for a QA resource to find them. Tests also serve as documentation, and good documentation at that, because a test proves the app actually does what it's supposed to do. However, we can't catch what we don't have a test for, so that's where we should focus our manual QA work.

There are basically three types of tests; you've probably seen the test pyramid before. At the bottom we've got unit tests, which test a single function or unit of code in isolation. They're quick to run, so we write lots of them. In the middle we've got integration tests, which test that those units work together. And finally we have end-to-end tests, which check that your application works as a user would experience it, in the browser, but because of that they take a lot of time to run. The thing is, the further you go up the pyramid, the closer you get to how your application is actually experienced by an end user. So those tests might be slower, but they give you a lot more confidence that your application is working how you intended.
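To make the bottom of the pyramid concrete, here's a minimal sketch of a unit test using Jest; the truncate helper is a hypothetical example rather than anything from Ripple, the point being that a single unit is exercised in isolation.

```js
// A hypothetical pure function, tested in isolation with Jest.
const truncate = (text, max) =>
  text.length <= max ? text : `${text.slice(0, max - 1)}…`;

test('leaves short strings untouched', () => {
  expect(truncate('Tide', 10)).toBe('Tide');
});

test('shortens long strings and appends an ellipsis', () => {
  expect(truncate('Single Digital Presence', 10)).toBe('Single Di…');
});
```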
In practice, though, it usually looks a lot more like this: this is the testing Dorito. This is the test you plan to write, this is the test you start writing, and these are the tests you delete because they're stupid and take more time than they're worth. And finally you end up with a few tests down the end there. So writing tests can be seen as a burden, and we needed a way to make it easier, something we could build into our workflow so that it just happened by default.

The other thing we needed to do, because it's a decoupled site, was decide where to actually write our tests. Do we test the front end and mock out the back-end response? Do we run the front-end tests on the back end? Do we do both? Often with a decoupled architecture there's a separate team implementing the front end and the back end, and it's tempting to say you only need to test the part you're responsible for. But ultimately your users don't care where a bug came from. Every bug is a front-end bug, and often enough it goes to the front-end team to triage and work out. So in the end, the end-to-end tests we wrote had to be from the perspective of the citizen accessing the site, and we decided to locate the tests closest to where that happens: in the front end.

The great thing is there are lots of test frameworks in JavaScript land at the moment. Most have historically relied on Selenium and the WebDriver protocol to control the browser over a REST API. Recently, though, there have been attempts to break away from this with tools like TestCafe and Cypress, which control the browser directly using the native browser APIs. Here are some download numbers from npm over the past couple of years. You can see the tools that control the browser directly, Cypress and Puppeteer, have had a big surge in popularity, while Nightwatch, which uses Selenium, is a distant third.

Cypress in particular was attractive, as it was clearly popular and had good documentation and a good community. It has a great developer experience, with a test runner that lets you debug in the browser. However, because it controls the browser directly, it's only available for Chrome at the moment, although Firefox and Edge support are in development. It's open source, but it has a paid dashboard service, which is currently free for open-source projects. And it's got a good community of plugins.

One thing we were keen on was standardizing the format of the tests we were writing. Having tests in Drupal, we're familiar with Behat and the Cucumber Gherkin syntax, and we wanted to bring that into the front end as well. There's a plugin for Cypress called cypress-cucumber-preprocessor which allows you to write Cypress tests in Gherkin syntax.

So this is an example of a test written in Gherkin. We have a feature, some background, and then a scenario. We're looking at an active piece of legislation here. When we go to visit the page, we check that the page title is there, and we can write that in plain English. We then take that statement and wrap it in a step definition, and we use Cypress's cy.visit in this case to actually control the browser and go to that page. Then we have another step which checks that the page title equals what we expect. We can pass in variables, so the step is reusable.
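As a rough sketch of that pattern with cypress-cucumber-preprocessor, where the page path and title are placeholders rather than real vic.gov.au content, the feature file might look like this:

```gherkin
Feature: Legislation page

  Scenario: Viewing an active piece of legislation
    Given I visit the page "/example-act"
    Then the page title should be "Example Act"
```

And the matching step definitions, where cy.visit drives the browser and the title assertion is reusable across scenarios (the h1 selector is an assumption about the markup):

```js
// Step definitions matched to the Gherkin steps above.
import { Given, Then } from 'cypress-cucumber-preprocessor/steps';

Given('I visit the page {string}', (path) => {
  cy.visit(path); // drive the real browser to the page
});

Then('the page title should be {string}', (title) => {
  cy.get('h1').should('contain', title); // assumes the title renders in an h1
});
```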
This is what it looks like in the actual test runner itself. On the right here we have our site in action; on the left we have the steps running through. It actually goes to the page, it can control things like button clicks, it can fill out forms, it can handle any kind of interactivity, and hopefully we have all-green passing tests at the end.

Another thing we were keen to do was bake in accessibility testing, making sure we didn't have any easily detectable accessibility violations. The tool we used there is axe-core. There's a plugin for axe-core in Cypress which lets us go to the page and run that automated tool over it (a short sketch follows below). We can even specify what accessibility level we want to test for, and we can run it on specific areas of the page if we don't want to check the full page. So this goes to the page, and we can see it's run a check there. If we did something like change the background color to an inaccessible color, we would get a list of the detected errors. And because we're in the browser we've got full access to the DOM, we can inspect it, and we get a list of the DOM elements that failed the accessibility check.

I'm just going to show you over here in the test runner what this actually looks like. One of the cool things with the test runner is that it takes a snapshot of the DOM and allows you to do time-travel debugging. You can click through here and see the state of the DOM at each step, so when we click this button we can see exactly what's happening, because it saves that snapshot of the DOM for you.

So using Cypress to test the front end was working well, but we were relying on content and pages already being in Drupal. With a traditional Drupal-rendered site you've got one place to locate your tests and one server to test: you can have one test that creates content and asserts that it's present on the front end. With a decoupled site, though, we have to fetch content from the back end, so we need to create our test data in the back end, which means it isn't located with where the tests are actually written. And if that content isn't there, obviously our tests fail.

The other approach is to mock out that response: don't connect to the back end at all, just have a mock server in the middle. If your front end uses something like Axios to make network requests, there's an adapter called axios-mock-adapter which allows you to insert a mocked-out response for any HTTP call. There's also a nice Cypress plugin called cypress-autorecord which will watch all of your XHR requests and save them to a file for you, so you don't have to manage those yourself.

There are pros and cons to mocking the response. Mocking is obviously faster, because you don't have to make the network request, and it's more reliable for the same reason. Probably the big one is that you don't need a working back end to test, which is good for implementing new features before an actual back-end response exists. However, there's no certainty that the mock you're using matches the response you'll really get, so there's a chance your mocks will be out of date, and you need a way to keep them current. You still need to know that the back end is serving the correct response, and that's where something like JSON Schema testing or contract testing can help. And of course you've got to keep those mocks up to date.
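Going back to the accessibility step for a moment, here's a minimal sketch using the cypress-axe plugin; the path and the WCAG tag are illustrative assumptions:

```js
// Run axe-core over a page from inside a Cypress test via cypress-axe.
it('has no detectable accessibility violations', () => {
  cy.visit('/example-act'); // hypothetical page
  cy.injectAxe();
  // Check the whole page, limited here to WCAG 2.0 AA rules...
  cy.checkA11y(null, { runOnly: { type: 'tag', values: ['wcag2aa'] } });
  // ...or check a specific region of the page only.
  cy.checkA11y('main');
});
```

And here's a sketch of the mocking approach just described, using Cypress's route stubbing as it existed at the time (newer Cypress versions replace this with cy.intercept); the endpoint and fixture name are made up:

```js
// Serve a fixture instead of hitting the real Drupal back end.
it('renders a page from a mocked JSON API response', () => {
  cy.server();
  cy.route('GET', '**/api/v1/**', 'fixture:example-act.json');
  cy.visit('/example-act');
  cy.get('h1').should('contain', 'Example Act');
});
```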
So the problem was keeping those mocks up to date and located with the tests, and we decided: why don't we just upload content into the back end from the front end? We use Cypress to log into the back end and upload fixtures using the YAML Content module, so we can save those YAML files with our front-end tests and manage them there. One problem with that, though, is that Cypress has an issue where it only allows visiting one domain per test, which means you can't also log into another domain, which is where the back end lives, within the same test. We got around that by running Puppeteer as well to log into the back end, which is a bit of a hack, and it would be nice not to have to do it. So it looks like this: another browser runs alongside, logs into the back end, uploads a YAML file and creates the content there. But it does mean we can locate our tests and our test content together.
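A rough sketch of that Puppeteer work-around, written as a Cypress task; the URL, credentials, and form selectors are hypothetical, and the actual drive of the YAML Content module's import form is site-specific, so it's omitted here:

```js
// cypress/plugins/index.js
const puppeteer = require('puppeteer');

module.exports = (on) => {
  on('task', {
    // Log into the Drupal back end in a second browser, since Cypress
    // itself could only visit one domain per test at the time.
    async importYamlContent({ backendUrl, user, pass }) {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto(`${backendUrl}/user/login`);
      await page.type('#edit-name', user); // default Drupal login fields
      await page.type('#edit-pass', pass);
      await Promise.all([page.waitForNavigation(), page.click('#edit-submit')]);
      // ...drive the YAML Content import form here...
      await browser.close();
      return null; // a Cypress task must return a value; null is fine
    },
  });
};
```

A test could then call cy.task('importYamlContent', { ... }) in a before hook, so the YAML fixtures live alongside the front-end tests that rely on them.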
So with that in place, we've been able to take a task that was taking us three days of manual testing and reduce it to a roughly 15-minute test run. As we adopt this, our QA people will be freed from that rote regression testing, and we can get them to concentrate more on exploratory testing. We get higher value out of our QA process, rather than spending it on repetitive work, which is really what computers, what a robot, should be doing for us.

Getting the right amount of coverage has been tricky. Too many tests and you've got more to maintain, which is a burden; too few and you risk missing something, which defeats the point. We're probably not yet at the amount of coverage we'd like, but we're working towards it. And as we bake this into our process, hopefully we'll be able to start with our tests as the first thing we write.

And that's pretty much it. There are some links there to some of the tools we're using and to our Ripple and Tide distributions. Cool. Any questions?

[Audience] Does that cover WCAG 2.1 testing as well?

Yeah, so it uses axe-core. Axe-core, if you're familiar with it, allows you to fine-tune those rules, and if there are specific rules you don't want to include, for whatever reason, you can choose which set you want.

[Audience] So you've got your automated tests all set up. What kinds of things do you still manually test before a release actually goes out to production?

Yeah, that's a good question. We still do that kind of exploratory testing, so there's always someone clicking through. The point is the confidence you get from having both, and I think there's really an argument to be made for both. The motivation was that we wanted to be able to release more quickly, and you can't do that if you're waiting on manual testing; it was simply taking too long. We worked out that the amount of time it would take across all the sites we wanted to deploy to would have been about three months of just testing, which wasn't a realistic option. So we've reduced the complexity of what a manual tester does, so it's really just a sanity check for them. And I feel like that's just added confidence, knowing that it is right. I think we're also still in the process of validating the automated tests by having a QA person do those checks.

[Audience] So the focus was about testing the integration between the back end and the front end. But did you test the front-end application by itself as well? And if so, was that tooling chosen on purpose to be able to test the Vue components, for example, as well as the integration?

Yeah, sure. As mentioned, it's part of the stack of testing. We have unit tests across our components, and we use Storybook for viewing our components, as a test suite I guess, and we have snapshot testing there (a minimal setup is sketched below). So we know if anything changes between snapshots, and we also have unit testing for component functionality. But then we have that layer on top, which is really the end-to-end part, and that gives you a lot more confidence that everything works in concert, particularly with the back-end response. If the back-end response changes, and we've had breaks in JSON API responses that have broken the site, that's where the end-to-end side has a lot of value. Thank you.

[Audience] Just wondering, does Cypress support performance metrics? Can you do tests on time to first byte and image size, for example?

That's a good question. You have access to the browser, so you could potentially write a plugin for Cypress; Cypress tasks can be customized, so you could watch the window object and write your own assertions there. Nothing exists for that currently, to my knowledge, but I think it would make a good Cypress plugin. Thank you.
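For the Storybook snapshot testing mentioned above, the standard setup with @storybook/addon-storyshots is essentially a one-liner, assuming Jest and Storybook are already configured:

```js
// storyshots.test.js: snapshot every story in the Storybook.
import initStoryshots from '@storybook/addon-storyshots';

initStoryshots();
```

And as an illustration of the custom performance check suggested in that last answer, a Cypress test can read the browser's own navigation timings; the 800 ms budget is an arbitrary example:

```js
// A crude time-to-first-byte assertion using the Performance API.
it('responds within a rough time-to-first-byte budget', () => {
  cy.visit('/');
  cy.window().then((win) => {
    const [nav] = win.performance.getEntriesByType('navigation');
    const ttfb = nav.responseStart - nav.requestStart;
    expect(ttfb).to.be.lessThan(800); // milliseconds, example budget only
  });
});
```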