So this is advanced JavaScript unit testing, or advanced advanced JavaScript unit testing if you read the printout. We had to change the name, just kidding. I'm Matt Grill. I'm a JavaScript engineer at Acquia. I work on REST and JavaScript; I work on Waterwheel. You can find me on Twitter, or you can send me an email at matt.grill@acquia.com if you want to know more about anything that you hear today. I love testing. I have been writing tests in JavaScript for maybe a year now. Before that I was like, screw that, don't write tests, it's not worth it. And now I love it, and I can't go back and I can't imagine doing it any other way. I also love emoji, and you'll see a ton of emoji throughout this presentation. So those are two of my favorites. So we're going to talk about unit tests today: a brief introduction, what they are; they're not functional tests or integration tests. We're going to talk about AVA, and if you're not familiar with the JavaScript world, everything has these really funny names. We just pick random words, and that's the name of your project. We're going to talk about generating code coverage with NYC slash Istanbul. Again, weird names, I'm sorry. We're going to talk about mocking functions. If you have a REST API that you don't want to keep online all the time, or you have this huge stub file that does all kinds of crazy things, we'll just mock a little part of it. We're going to talk about Nightmare (not those kinds of nightmares, I can't help you), a browser automation library that automates with Electron and is way better than Phantom. And we're going to talk about hooking all this stuff up with Travis. I'm sure you all run your tests all the time on every commit and every pull request, so I'm going to give you some tips on how to do that. So before we get started, everybody always asks me: why write tests, why do this?
And, you know, tests allow you to make big changes to your code base quickly, or relatively quickly, right? You can make a change, run your tests, see what broke, and keep moving after that, or adjust your test or adjust your code if you changed something you didn't expect. Tests allow you to understand the design of the code you're working on. If you show up to a big complicated project and you don't know what's going on, and the previous developer didn't write any documentation but did write tests, you can look at those tests and get an idea of what's going on. People say writing tests is like writing twice as much code. Sure, it's additional code, it's additional work you have to do to make everything you do work, right? But test code is usually trivial, it can be trivial if done well, and it doesn't add that much overhead. And I hope that by the end of this you'll see that this is true. Really it's something you're going to get in the end, after you've been writing tests for a while: you realize that you shouldn't go back the other way. And tests, in reality, provide documentation; they provide you a how-to guide for your code. So just a quick introduction to JavaScript unit tests. Functional tests are, you know, code that describes how a user interacts; these are tests that happen in the browser, stuff we do in Drupal core right now where, you know, we simulate a browser environment and click around and check the result and make sure that the stuff we're doing works. Unit tests are focused on a very narrow, small piece of code: a single function, something that returns a value, something that manipulates some piece of data and gives it back. This allows you to very precisely target what you're testing. So AVA. This is a test runner and an assertion library. And if you're familiar with Mocha or Chai or any of those other JavaScript-based test pieces, AVA is like that, but from the future. It is literally the best.
So, you know, JavaScript is single threaded in most instances, right? There's one thing, one event loop, one thing that happens. But one of the neat things you can do with JavaScript here is execute IO in parallel, right? If you have a bunch of tests, we can spawn child processes and run all your tests in those, and that's what AVA does. If you have a ton of tests, like many thousands of tests spread across many files, AVA takes all those things and splits them up and allows you to run all of your tests in parallel. Yeah, AVA takes advantage of this, right? This allows you to run IO-heavy tasks if you need to, like doing a bunch of disk or database or network access in your tests. You can do all that and not slow everything down. It's not this big synchronous blob of tests that run across your enormous code base. So that's pretty amazing, right? This is the, like, explosion emoji, like the firework just went off and this is, you know, what your hand looks like afterwards. Yeah, this is great. I think this is, like, the biggest advantage of using AVA. And if you've used another assertion library, another test library, you know: here, tests run in parallel. Each one of these files is a separate process. There's no shared state. You don't get leaky tests. You don't get, like, weird stuff where you set some variables, set some piece of state, and you run another test and it affects that and throws it off and you get, you know, a false pass. That's not something that's going to happen here. If you want to live even further in the future, AVA supports async/await functions. I'm not really going to get into that, and you need to use Babel and all those things to make those work; most people won't. So let's take a look at some pretty sweet examples. So, here, we just want to test to see if this unicorn is, in fact, a unicorn, right? And if you're not familiar with this syntax, don't worry, it's fine. It's ES6.
It'll be coming eventually. But, essentially, AVA tests have two parts and only one of them is really required, and that's the actual test body. Right here, what we're doing is essentially just returning a promise that resolves with the value unicorn. And then we check: is u equal to the unicorn, right? This is a very sort of contrived example, but AVA is very simple. You don't have to do a whole bunch of setup to start writing your tests. You just have to require it and keep moving. Yep, testing the coins. Do they exist? I don't know. When you run this, there you go. We just have this GIF on repeat. It's pretty amazing. Essentially, each test has a pass or a fail state, and when you run these, you can continue to execute your tests if some of them fail. It's not gonna blow up your entire application. Let's check to see if two unicorns exist and whether two unicorns equal one unicorn. I guess you can probably guess the answer is probably no. So, there you go. Nice stack message, and you can see the assertion error: two unicorns not equal to one. AVA prints a very concise error message for what's happening, what failed, and where it failed. This gives you an easy target to look at to see where something failed and what you can do to fix it. I love these GIFs. I love not having to do live coding demos. This is fantastic. AVA gives you a really interesting set of tools. This is before and after (there's also beforeEach and afterEach). Every time you run a test, you might wanna set up state. You might want to supply some base data. beforeEach is a function in AVA that's called every single time before each test is run. If you need to require something, or there's any kind of state that needs to be managed, that happens before, and afterEach happens after everything else is run, after your test is done, whether or not it failed. AVA will clean these pieces up for you, and this is gonna be incredibly useful so you don't get sort of leaky shared state.
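As a rough sketch of what those slides show, here is what such a test file might look like. This assumes `ava` is installed; `fetchUnicorn` and the file path are made up for illustration, and a no-op stand-in is included so the sketch still runs where `ava` isn't available:

```javascript
// Sketch of an AVA test file (a path like test/unicorn.test.js is assumed).
// If ava isn't installed, fall back to a no-op stub so this runs standalone;
// with ava present, the catch branch is never used.
let test;
try {
  test = require('ava');
} catch (_) {
  const noop = () => {};
  test = Object.assign(noop, { beforeEach: noop, afterEach: noop });
}

// made-up function under test: resolves with a unicorn
const fetchUnicorn = () => Promise.resolve('🦄');

// hooks: beforeEach/afterEach run around every test, pass or fail
test.beforeEach(t => { /* e.g. seed fixtures, open connections */ });
test.afterEach(t => { /* e.g. tear the shared state back down */ });

// promise style: return the promise and AVA waits for it to resolve
test('unicorn is a unicorn', t =>
  fetchUnicorn().then(u => t.is(u, '🦄')));
```

Running `ava` (or `npm test` with ava wired up) would pick this file up and report the pass/fail state per test.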
Again, in the demo, we talked about promises. If you're in sort of a callback state, if you can't use promises for some piece, a callback-style API, you can use callback mode, which allows you to just wait for a callback. If you have an API that goes out and does something and then returns a piece, you can test those as well. Test this heart emoji again. This is basically the same example from the first set, except instead of returning that promise, we're actually gonna wait for that promise to resolve and test it there, right? Same exact style, same set of results, same environment. Callback mode doesn't really change anything for the end user. If you have to write tests, you can write mixed callback and non-callback tests, like our unicorns. The one thing to be aware of here is that a callback-mode test always has to end with t.end(). If you don't call it, your test will hang, and it will hang infinitely. So yep, you have to use t.end(). If you don't, bad things will happen and your computer's fans will spin up and it'll turn into a giant melted piece of metal. So, AVA has plenty of methods. It has plenty of things to check. You can check for falsy or false, which are not the same. You can check to see if things match or don't match regexes. This is really useful if you're gonna test a wide variety, a wide gamut of things in JavaScript. You can force tests to pass with t.pass even if you know they failed. Please don't do that. You can check to see if a test is gonna throw an error. One of the main cases you may have is creating an object or a new class, and you wanna see if it throws an error if you don't pass the correct thing; t.throws will trap that and give you what you want. So, that's great. Testing is really easy with AVA. You can do all kinds of stuff. Now, say you have a big API, a big external dependency on another service, another library, another sort of node module or another piece of JavaScript, right?
And you wanna subvert it. Basically, require-subvert allows you to replace a very narrow piece of functionality in any part of your stack with something else, sort of a mock, a stub that can do what you want it to do. You can change the behavior of something. require-subvert is incredibly simple to use. It just takes one parameter when you start, and that's a special variable in Node that just tells you where you are: __dirname. This helps require-subvert know where all of your modules are; it knows your path. So, let's come up with a really easy example here. This is request.js. And we're going to have one tiny method that gets google.com/foo. And this doesn't exist, right? I don't think it does. So, let's test to see if this works, right? Here, we're gonna require lib/request and we're gonna call request.get and check to see that the status is 200 and the value is the little lightning bolt emoji, right? As you can expect, it didn't work, and we printed that nice error stack trace, right? google.com/foo doesn't exist. So, it's bad, right? You need to make sure that, you know, if your API is down, or if you want to pretend something else exists, you can subvert that piece. So, here we are. This is an example of using require-subvert, and if you remember, in our original example, we require axios; this is just an HTTP client. Great. What we're gonna do is subvert axios, right? And is anybody familiar with how, like, module dependencies happen in Node.js at all? Cool, some people. So, basically, what we're gonna do here, and require-subvert allows you to do this, is to subvert any small slice of a module. What we're doing here is require request, which requires axios, but we're gonna subvert axios. We're not going to subvert just request, because then you're not actually testing the functionality you have.
require-subvert can mock and subvert any piece of your dependency tree all the way down to the smallest piece, even something you didn't even write that's, you know, ten modules deep. But we'll see what happens here. This should work, right? The status is 200 and the data is this amazing lightning bolt emoji. Here's the thing: that's not gonna work yet. Because of the require cache, you need to tell require-subvert to replace axios before you require the thing that's going to require it. So in this instance, lib/request has to come after axios, right? If you have a whole bunch of files and you need to subvert them, this has to happen in this order. This is kind of a little bit of a pain point for doing this. So, essentially, in this subversion, because we know how axios works and we know what the call signature looks like, we can essentially just return a promise which does what it's supposed to do. And you can see, there you go. And again, this is not a callback-style API, this is just a promise-based one. And you can see, there we go, we're still running these tests, they all work. And now we know that google.com/foo returns a lightning bolt emoji. So let's take a look at the next piece of this puzzle, right? And that is getting a coverage report. If you don't have coverage, like insight into how much of your code is tested, how much you need to do, how much is left to do, what is available for you to write tests for, it's going to be really hard. You might write duplicate tests, you might do too much work in one area and not enough in another. That's where NYC comes in. NYC is all about instrumenting your code. It's about providing feedback on what you've done so far. It's about providing feedback on where you need to focus your efforts on writing tests. Istanbul, which is what NYC is based around, is here to help you understand what you've already done. It's an ES6-capable line counter, essentially.
It looks at what you've done on each line of code and returns a count for how far you need to go. How many times does this run? Has this been run? As your tests exercise your code base, this is going to tell you how much of it has been executed. So it's very easy. Just put nyc in front of ava in the test script in your package.json, and there you go. You're set. This is extremely easy to set up. I can't stress this enough. This is really useful to see what you're doing already. So I'll just give a quick example here. You can see here, I wish this would just pause, but this is just a report. You can see that we didn't actually cover all lines in this test originally. There's not full coverage. There are some uncovered lines. You know, this is not very good coverage, right? So NYC and Istanbul also provide an HTML interface. This is generated after each pass, for each piece of coverage that exists. This is that request.js. This is a fully covered example, and as you can see, it knows that we required axios. It knows that we tested this exports and this get. We can see that this happens each time. This is 100% coverage. Good job, self. But let's say we add something else. We add a post method here, and we don't write any tests for it. All this is basically going to do is show you what parts of this are not covered. This post method is not covered. We don't check to see whether or not we went and got foo.dev. That doesn't exist. Extend this even further. Post data. We don't check to see if post data is an object, and we don't even check this return statement here. Every time you make a change, the statements, branches, functions, and lines covered change. This is per project. Going down here, only 75% statements, 50% functions, and 75% lines. Here we're getting even worse, and there you go.
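Concretely, that package.json change is just prefixing the test command (the script names here are an assumed layout; the `coverage` script is an optional extra for generating the HTML view):

```json
{
  "scripts": {
    "test": "nyc ava",
    "coverage": "nyc report --reporter=html"
  }
}
```

After that, `npm test` runs AVA with instrumentation and prints the coverage summary, and `npm run coverage` renders the browsable HTML report.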
Each time you can see: we required this file in our tests four times, we hit that if statement three times, and all the way down on that return just once. This gives you really good insight into what your code is doing and, you know, how you can better test your code. 100% is important. So, next up, we'll talk about Nightmare. I know, it's really scary. Nightmare is a high-level browser automation library. It's not a WebDriver library. This is a JavaScript interface for Electron, which wraps Chromium. Electron is what Atom is based on, if you're familiar with that, but this helps you write tests that exist in the browser. So, let's just take a quick look here. Nightmare has a really super simple API. Every method is just a plain, simple command, right? There's not a complex syntax. There are not complex callbacks for each piece. You can, you know, do things very simply with Nightmare, and the goal here is to just expose a few simple methods that allow you to interact with a browser. If you've ever used Phantom, you know you kind of get into this deeply nested callback environment, and that's sort of unfortunate and unfun. So, basically, just some simple examples: goto, refresh, click, type. These are all just methods that Nightmare supports. There are more, but I only put a couple up here. Basically, go to a page, refresh the page, click on an item, and, you know, type into a field, right? These sort of basic English commands allow you to easily interact with the page and, you know, get what you want without having to write really complex examples. So, if you're familiar with Phantom, this might seem a little bit strange coming up, but hopefully not. Nightmare is pretty easy to use. Like we said, goto: we're going to go to the page on drupal.org for this session. And evaluate allows you to run JS on the page that you've just requested, right?
And so, we're going to go out here and pull the title of the session, right? We want to see if the session title is what I think it is on the page. And that blinks real fast because the Wi-Fi here is amazing, but there you go: we load the page and we pull the session title down and it's JavaScript unit testing, not advanced advanced JavaScript unit testing. Is that pretty easy? Does that make sense to everybody? So, this is an actual... oh, yep. No, it's not Selenium. This is a JavaScript API that communicates with Electron, which is the wrapper for Chromium. So that's a good point: this is just for Chrome testing, or Chromium-based tests. It's not going to test IE, it's not going to test Firefox. Hopefully we can get there at some point. So, that was pretty easy. But, you know, one of the problems you might run into is if you have a library that exists on the server and in the browser, right? How do you make sure that those two pieces behave the same? And you want to test that automatically. So, Nightmare can allow you to do that, right? We can start a local server. If you're familiar with Waterwheel (and if you aren't, that's great too), basically, Waterwheel is a library that works on the server and in the browser. In the browser environment, we want to test to make sure that it behaves the same way when we create a new instance of it, right? So, here we're going to use AVA and Nightmare to test to make sure that that happens. And so, we're going to create a new instance of Waterwheel in this served page, and then we're going to check to see if the tables that get returned are the same. And then we're going to call t.end because this is a callback-style test. So, start the demo server down there in the bottom tab and go ahead and run this test. And you can see this passed. That page is pulled. We start a demo server automatically to host this sort of test page content and then run this test.
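Nightmare itself needs Electron to run, so here's the real call pattern in a comment (URL is a placeholder) plus a toy stand-in showing what the fluent style boils down to: each method queues a command and returns `this`, and `.then()` makes the chain awaitable. The `ToyNightmare` class and its command strings are made up for illustration:

```javascript
// With the real library it reads roughly like this (needs `nightmare`,
// which pulls in Electron):
//
//   const Nightmare = require('nightmare');
//   new Nightmare()
//     .goto('https://example.com')
//     .evaluate(() => document.title)
//     .end()
//     .then(title => console.log(title));
//
// The fluent style is just a command queue. A toy version of the pattern:
class ToyNightmare {
  constructor() { this.queue = []; }
  goto(url)       { this.queue.push(`goto ${url}`); return this; }
  refresh()       { this.queue.push('refresh'); return this; }
  click(sel)      { this.queue.push(`click ${sel}`); return this; }
  type(sel, text) { this.queue.push(`type ${text} into ${sel}`); return this; }
  // being "thenable" makes the chain awaitable: run the queued steps and
  // resolve with the result (here we just resolve with the step list)
  then(resolve, reject) {
    return Promise.resolve(this.queue.slice()).then(resolve, reject);
  }
}

const steps = new ToyNightmare()
  .goto('https://example.com')
  .click('#login')
  .type('#name', 'matt');
```

Real Nightmare does essentially this: each method queues an action, and the queue only executes inside Electron once the chain is awaited, which is why there's no deeply nested callback pyramid like with Phantom.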
And this test exists outside of those other ones. There you go. It's very easy to build really complex examples. And once we're done here with the slides, I can show you some more examples. So, Daydream is sort of a browser tool that allows you to browse pages and record Nightmare actions and events. Kind of like, I think there's one for Selenium or something that allows you to record actions as you go; Daydream has that same functionality. So, you can click around your complex website and complex interactions, capture those pieces, and then paste them into Nightmare and just go forth. This is pretty cool. I actually used that to make this demo. So, here we go. This is all the tests altogether. This shows everything we did up to this point. You can see we got to the point of 100% test coverage. We spawned the server in the background. We went to the demo page, and we have 100% test coverage across the board. So, all of this is great if you can run this on your local machine, but what if you want to get to the point where you run this automatically? You want to run this on every PR, every branch, right? Travis, CircleCI, any of those tools, they can all basically do exactly the same thing. So, Travis runs JS tests automatically. It'll run npm test for you automatically. You don't have to set up anything special. You can just run this by default, right? But the one thing that Travis doesn't do very well is handle headless browsing. It doesn't handle Nightmare very well. If you have ever tried to run any PhantomJS tests, you might have run into some issues. So, this is an example of the Travis config from another project. One of the main things that you need to make sure you do here to get Nightmare running is Xvfb, X virtual framebuffer. It's a virtual X Window System environment that Electron can spawn in. This is pretty important, because otherwise your tests will just completely crash and not work at all. So, we can set the window size and do that.
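For reference, a .travis.yml along those lines might look like the following sketch; the Node version and screen geometry are placeholders, and the Xvfb lines are the important part:

```yaml
language: node_js
node_js:
  - "6"
addons:
  apt:
    packages:
      - xvfb        # the X virtual framebuffer Electron spawns into
install:
  - npm install
before_script:
  - export DISPLAY=':99.0'
  - Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &
script:
  - npm test        # Travis runs npm test by default anyway
```

The `Xvfb :99` line starts the virtual display in the background, and `DISPLAY=':99.0'` points Electron at it; without those two lines the Nightmare tests crash on a headless container.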
Yeah, X virtual framebuffer. This Nightmare support is pretty new and only came around a little while ago, but this needs to be installed in the Travis container. If it is, this will work great. Any questions so far? I kind of ran through these slides quite a bit. Yep, it stands for X virtual framebuffer. It's a virtual X Window System graphical environment that's installed in your container so you can start a Nightmare browser instance virtually on your test container. Yep. Do you have the... there's a mic right there. I didn't realize it was on. Thank you. So, I guess the question basically is: what's my experience with X virtual framebuffer? I haven't had any problems with Xvfb so far. We use this on another project, and pull requests and all the commits all spawn the tests on Travis, and we haven't had any trouble so far. It might be more stable now. I don't know when... we only just started using it, so... Okay, so the question was, why do we need another test runner slash assertion library, right? The main thing I would say is: all of those are great, and I think if you're using them and you're writing tests with them, that's fantastic. That's better than not writing tests at all. I chose AVA because of its sort of process isolation, its spawning of child processes to run tests in their own isolated environments. I like the syntax of AVA personally. I think it feels more natural than other libraries. There's nothing wrong with any of those other tools. I think it's great if you're writing tests. This just feels good. So, let's say a big open source project needs to have some JavaScript tests. How stable is this tool set? How stable is this stack of pieces? I worked on another project for a big company that used a big open source project, and we had 4,000 tests running on every commit using this exact stack right here, and it worked great. It takes a lot longer than, like, three seconds to run, but yeah.
Is it Linux only? What was that? Linux only. Does it run on Windows? Windows, I don't know. I don't have a Windows computer. I would love to know. But yes, Linux, macOS. Yeah, it will run on those easily. So the question was: if you have Windows and you have Node.js, will it run? Yes, technically it should. I don't know if X virtual framebuffer will work with some other kind of adapter if you wanted to run an automated Windows test bot infrastructure. But I don't have Windows. I'm sorry, I can't answer that question more specifically. Anybody else? So the question was whether or not I had to do instrumentation to get code coverage. And the answer is: this happens automatically. NYC does the instrumentation for you under the covers to return these values. The HTML reporter is just one of the reporters that's available. You can get the lcov format if you want. But this instrumentation happens automatically. So, yep. Is there any way to specify in your unit tests that you're testing a particular function or method, so you don't accidentally mark something as covered just because you're passing through it? So, yes, but it's not always clear where a specific method is being covered. If it's being called by something else, and that method then gets called by another method, it could potentially be tested just not directly. You could get sort of an indirect test. Hopefully... yeah, okay. Any other questions? These are good. Nope? Okay. So, I put this presentation up on my own CDN. Feel free to download it if you want. It has everything you'll need. All the stuff's in there. So, come to Sprints. If you want, please feel free to evaluate the session. I'm going to switch off the presentation and just show some more examples if that's all right, if everybody wants to see. So, where's that? Where's my mouse? Hey, thanks. Thank you. Thank you. Make this, like, huge. There we go. Yeah, yeah, yeah.
So, if you're not familiar with Waterwheel, one of the projects I work on at Acquia, it's a large JavaScript library for doing interactions with the REST API in Drupal core. And we use this stack to test that project. And this runs a whole series of tests. We run build environment tests, and we run a lot of sort of: does this work the same way in the browser that it does on the server, right? Because that's a big piece. This is sort of a JavaScript library that's spread across both environments. Yeah, I think this is important. So, are there any other questions? Nope, yep. That's fine. The output of Nightmare tests is very similar to Selenium. And basically, I see you can use it as a test runner and just use it as pass or fail and basically keep going. But did you try to actually use it for something else, for example, test reports, in terms of sending them by email or aggregating them, providing statistics, like actually extending the reporting of the tool rather than just relying on it as a one-off test runner? So the question was, have I extended the reporting from Nightmare? No, I haven't done any sort of advanced reporting from a Nightmare environment. Basically, we use Nightmare as a sort of browser control environment to run our unit tests in the browser, again just to make sure that we didn't change anything during the compile-time process. So that's it. Yep. So I think the question was, why use this test reporter? Yeah, there are a number, so you don't need to necessarily use the test runner or the test reporter format that comes with NYC. You can plug in whatever format you might want, or additional actions that can happen at the end of your test execution. So if you want to do something real fancy, like send emails or send pager messages or something like that, you can. Any other questions? Yep. For mocking.
Oh, again, let me bring the slides back up real fast. AVA specifically doesn't have anything for mocking, but there was a tool here called require-subvert. This tool allows you to hook into the require cache at any layer of your JavaScript project and replace that piece with your own code. And as long as you can write JavaScript that matches the signature of the thing that you are mocking, require-subvert will very happily replace it for you. Yeah, this is sort of a bunch of complementary tools. They work basically mostly the same way. Node's require cache is, you know, ridiculous; this helps you wrangle that piece. Any other questions? Sorry, in the same vein, are you recommending require-subvert over Sinon, or both? You can use both of them. And what's the difference, or the gain? Sinon, as I just mentioned, is a library that allows you to mock things like timers and timeouts. You can use both of those here. There's no requirement to use Sinon or require-subvert. This is just a simple example of a simple subversion of the require cache. You can use both of these pieces without issue. Did I answer that? My recommendation is just use require-subvert. I think it's a simpler syntax and a simpler library. Any other questions? Yep. Absolute JS beginners? Is that like an absolute JavaScript beginner or an absolute testing beginner? Yeah. See me after this. I can point you to some resources. But yeah, definitely. These are the tools that I have been using for like the last year, eight months. You can kind of go simpler down the stack and use fewer pieces, but I can point you to the right places after this. So, awesome. Anybody else? Okay, great. Thank you.