So, as Anne said, I'm Trevor Landau. I work at the New York Times, on the mobile team, and I'm here to talk to you today about test-driven Backbone development. I also want to thank Boku and the sponsors for giving me this opportunity to speak here. It's my first time speaking, so it's really exciting.

Let me give you a little background on what we're going to talk about. Imagine we have a brand new app. Your boss sits down with you and says: this is what we're going to build. So you start building the project. You write code, and then you check the browser to see if it's working. Sure enough, it works the first time you check. You continue building the project, check the browser again: not working. So you find where the code is failing and fix it. Still not working. You go back and fix it again.

Fast forward a few months, and the project's ready. You go into production, and everything's going well. A couple of months go by, a lot of users are on the site, and suddenly they want new features. Your boss comes to you and says: 90% of our users are asking for this brand new feature, I need you to implement it, let's get it in now. Say a lot of that work happens in a Backbone model. You make a change to your model, you load up the app, and you realize it's not working anymore. You look over your code and say, this looks all right; but it's still not working. You look at the stack trace in your dev console, step through it, and conclude there are no problems there. And yet there is a real problem. So you're left asking: what is going on? I have no idea what is breaking here.

We can get around this, or mostly fix it anyway, through testing. So, what exactly is TDD? TDD is test-driven development: "a software development process", to quote Wikipedia directly.
The idea is to write our tests first, then write code, and then refactor. Here's a basic workflow chart. In more detail: we write our tests for a specific unit. Imagine we have a Backbone model that we're testing; we've declared the model, and that's all we have. Now we go to write our tests, and we can say: this model should do this thing, it should do that thing, and so on. Once we have that in place, we run our tests, and they should all fail, because we haven't written any code yet. After that, we write the minimal amount of code to get those tests passing. Once they're passing, we go back and refactor the code to make it better.

So, why TDD? I've kind of explained it already. It sounds like coding is going to take a lot longer, but that's balanced out by all the time you'd otherwise spend debugging code that isn't tested. TDD forces you to understand the problem before you actually solve it, and it also encourages smaller units of code: if you find yourself writing large functions, they're going to be difficult to test because they do so many things. This, in turn, reduces debugging effort. Code also becomes easier to understand because of how you write your tests, so it's self-documenting, in a way.

So how do we TDD? You begin by employing a test runner. The test runner provides functions that let you write your tests, and those functions have specific verbiage. There's also behavior-driven development, or BDD, a style of writing tests that I prefer because of its readability. In JavaScript there's a ton of frameworks; the most popular ones are Mocha, Jasmine, and QUnit. You've probably heard of these. So how do we get started writing a test? Some test runners come with assertion libraries (an assertion library is what we use to actually check whether the return value from some function is what we expect), and some require you to include one yourself.
Of course, you could always write your own. A basic assert function expects a Boolean value and some sort of error string that gets output; if the Boolean value is not true, we simply throw an error. You can even use this in your application code, and it's actually quite useful, more readable than hand-writing "if some value, then throw an error" everywhere.

To give some credit to TodoMVC: it's popular, so a lot of people know what it looks like, because when they first jumped into Backbone that's most likely where they went, if they've done it recently anyway. And it's small and easy to understand. Granted, it's not a large-scale app, that's true. But it seemed like a great place to do some testing, so I've written tests for the Backbone TodoMVC app. As I mentioned before, there are several frameworks; I like to use Mocha. It lets you use either TDD or BDD verbiage, but as a reminder, I'll be discussing the BDD style.

As I said before, we get a set of functions provided by the test runner. The first one is describe. Describe is a basic function that accepts a string and a function; you don't have to worry about what it does under the hood. The string describes what exactly we are testing. In this case we're testing a todo model, so we just say "Todo model", and that's it. But you don't necessarily want all your assertions in one block, so we can actually nest our describes. This is really useful for readability: say we have a todo view and I'm testing its initialize method. The way I prefer to write it is with a leading dot, to show that it's a method of the class. These can be nested infinitely, but best practice says you should probably not go more than two or three levels deep, or you get that callback-hell look, though it's a little more straightforward than callbacks. The next functions are before and after, or rather beforeEach and afterEach.
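Here's roughly what those nested describes look like. The stand-in `describe` at the top is only there so this sketch runs on its own; with Mocha loaded, you'd delete it and use the real one:

```javascript
// Stand-in for Mocha's describe, just so this sketch runs standalone.
const named = [];
function describe(name, fn) { named.push(name); fn(); }

// The outer describe names the class under test; nested describes name
// the method, prefixed with a dot to show it's a method of the class.
describe("Todo model", function () {
  // model-level assertions live here
});

describe("TodoView", function () {
  describe(".initialize", function () {
    // assertions about initialize
  });
  describe(".close", function () {
    // assertions about close
  });
});

console.log(named); // [ 'Todo model', 'TodoView', '.initialize', '.close' ]
```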
The purpose of these functions is to prevent state from leaking from one test into another. Imagine you have a large data set that you're testing your code against. One of the tests may modify that data, but it needs to be back in its original state for the next test. So within beforeEach, you create the data fresh, so it's reset every time. It's also an excellent place to put frequently executed setup code shared by every test.

Imagine you have a collection, which we'll actually see next. We have our describe for the todo list, and before each of our tests we set an instance variable to a new collection. That way, in each of my tests I don't have to keep saying new app.TodoList over and over; I can just reference this.collection.

Let's look at a more complicated example, where we're describing todo views. There's an upper piece of code and a lower bit. The upper one is a beforeEach that runs for the entire todo-view test suite: we create a model, and a view that uses that model. Then, down below in a nested describe, we test the view's close method. If you look at the TodoMVC todo view, its close method requires the input instance variable to be ready, and for that to be ready the view must be rendered. We don't want to call view.render() in every single test, so in a beforeEach inside the nested describe, which runs only for the close tests, we call render on the view.

Now, where does the real work of the test actually occur? In it. (Sorry, no picture; it was not easy to find one for "it".) All assertions should live inside these it functions. So let's look at an example. Notice that I'm only testing the tag name in this specific test for this view: we only care that it's going to be an li.
If something else, say another view, sets it differently, it's going to break this test. Also notice how I've written the it: the syntax is very readable. It reads, it "should be a list tag". This is how your tests should read when you see the output from your test runner. Then we set the assertion, basically like the assert function I showed before: tagName should equal "li", and this passes if it is in fact an li.

We can make this more readable using Chai.js, an assertion library. It provides an API to help you write tests in a more readable fashion, and it also gives you more functions to test more easily. So take the exact same test we saw before, it "should be a list tag", except now we write: expect the tag name to equal "li". In the second example, it "should set completed to false by default", we expect the model's completed to be false, because we've just created a blank model.

Let's look at one more complicated example: the nextOrder method. I actually discovered a bug here when I ran tests on it. In the first test, it should return 1 if the length is zero, so first we check that the collection really is of length zero, and then we test the method itself. In the second one, I actually had to set order to 1 by default; if you test this method on its own it otherwise breaks, I think it returns NaN or something. Bonus points to whoever goes and fixes that in TodoMVC.

This is all well and good, but we want to be able to test more, and with basic assertions we can't test everything. However, Sinon.JS gives us a lot more ways to test our code, because we want to be able to test things like: this method calls that other method.
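Before moving on to Sinon, here's a sketch of that Chai style. The micro `expect` shim is only so the example runs standalone; real Chai provides a much richer chain than `to.equal`:

```javascript
// A micro version of Chai's expect(...).to.equal(...), just for this sketch.
function expect(actual) {
  return {
    to: {
      equal(expected) {
        if (actual !== expected) {
          throw new Error("expected " + actual + " to equal " + expected);
        }
      }
    }
  };
}

// Stand-ins for the view and a freshly created (blank) model.
const view = { tagName: "li" };
const model = { get: (key) => ({ completed: false }[key]) };

expect(view.tagName).to.equal("li");            // it should be a list tag
expect(model.get("completed")).to.equal(false); // completed defaults to false
```

That covers plain assertions; next, testing the interactions between methods.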
For instance, if we call save on a Backbone model, we know that sync is called under the hood; or perhaps we have some code for which we want pre-programmed output. Sinon provides spies, stubs, and mocks, which allow us to do this.

First, spies. A spy captures state information such as the arguments passed to a function and its return values; it actually records a lot of things. A common usage of spies is to test that a method used inside the unit under test was actually called. Let's take a look. Here I'm testing the comparator method. Note that I'm not testing Backbone itself, not what Backbone does with comparator under the hood, just that comparator does something I expect. If you look at the TodoMVC code, you'll see that the collection's comparator calls get('order') on the model. So we create a todo model, and we create a spy on get; a spy is basically a wrapper around get with all those extra recording bits I mentioned. Then we call the comparator method with that todo. Then we can assert against the spy: that it was called once, and that it was called exactly with the string 'order'. And this will pass. Now you know your comparator will always work as you expect it to. If you do find a bug in the comparator, you can of course spy deeper into Backbone.

Here's a more advanced spy usage, looking at the initialize method. We create a spy on the view's listenTo. We're expecting that every time we initialize a view, it binds to three different events: change, destroy, and visible. And we make sure it's doing exactly that. Be aware that you're really testing three different bindings here, not all of them at once. A good purpose of these tests is to enforce implementation details like this. Next, we have stubs.
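Here's a sketch of that comparator test, with a hand-rolled spy standing in for sinon.spy(model, "get") so it runs on its own; Sinon's real spies expose properties like `calledOnce` and `calledWithExactly` instead of these helper functions:

```javascript
// A hand-rolled spy in the spirit of sinon.spy(obj, "get"): wrap the
// original method and record how it was called.
function spyOn(obj, method) {
  const original = obj[method];
  const calls = [];
  obj[method] = function (...args) {
    calls.push(args);
    return original.apply(this, args);
  };
  obj[method].calledOnce = () => calls.length === 1;
  obj[method].calledWith = (arg) => calls.some((c) => c[0] === arg);
  return obj[method];
}

// Stand-ins for the TodoMVC model and the collection's comparator.
const todo = { attributes: { order: 3 }, get(key) { return this.attributes[key]; } };
const collection = { comparator(model) { return model.get("order"); } };

const getSpy = spyOn(todo, "get");
collection.comparator(todo);

// The comparator should have asked the model for its order, exactly once.
console.log(getSpy.calledOnce());        // true
console.log(getSpy.calledWith("order")); // true
```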
Stubs are functions with pre-programmed behavior, but they also support the full spy API, so with a stub you can still say it was called once, or called with certain arguments. A good reason to use one is to fake out a specific method. Maybe it does something like an Ajax request, and in our tests we don't want to make real Ajax requests; we just want to simulate them, so we can say: I have good data for this test and bad data for that test, because we want both cases to work.

Here's an example of a stub, with the toggleCompleted method from the view. We simply create a stub on the model's toggle because we don't care what it does; we just want it to go away. Maybe it goes to the server, we don't know. We know we're going to use it within the call, but we don't want it to actually do anything. So all we do is ensure that it was called during the work here. And we couldn't do this with a basic assertion, only with Sinon.

Here's another example, toggleVisible, except this time we pass a third parameter to the stub: a Boolean return value. The isHidden method actually has some more complicated logic, with other get calls and whatnot, but we don't care; we just want it to return true in this case. So we create the stub, and we also put a spy on the element's toggleClass; we're just overriding that method, since we're not actually going to use its real behavior here at all. Then we call the toggleVisible method, and we expect toggleClass to have been called exactly with the string 'hidden' and the value true.

Let's look at one more example for stubs. In this case I actually had to recreate the ENTER_KEY constant that TodoMVC uses, and I reset it each time.
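To make the stub examples concrete, here's a sketch of the toggleVisible test with a hand-rolled stub in place of sinon.stub(view, "isHidden").returns(true); the view pieces are simplified stand-ins for the TodoMVC view:

```javascript
// A hand-rolled stub: replace the method with pre-programmed behavior,
// and record its calls like a spy would.
function stub(obj, method, returnValue) {
  const calls = [];
  obj[method] = function (...args) {
    calls.push(args);
    return returnValue;
  };
  obj[method].calls = calls;
  return obj[method];
}

// Stand-ins for the view pieces involved in toggleVisible.
const view = {
  $el: {
    toggleClass(name, value) { this.lastCall = [name, value]; }
  },
  isHidden() { /* the real logic does several get() calls; we stub it away */ },
  toggleVisible() {
    this.$el.toggleClass("hidden", this.isHidden());
  }
};

stub(view, "isHidden", true);   // pre-programmed: pretend the todo is hidden
view.toggleVisible();

console.log(view.$el.lastCall); // [ 'hidden', true ]
```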
So this is actually testing a key-event handler, which normally receives a DOM event, a keypress; but we don't really need a real one. We can just give it what it expects, which is an object with a which property. So we call the handler and expect our stub to be called once in the first case. And notice in the last one that if the handler doesn't receive the right key, we don't want the stub to be called at all. Chai gives you the not operator, which just flips the assertion the other way.

Last, we have mocks. Some people would argue that these are fairly similar to stubs, and mostly use stubs instead. In fact, the example I'm going to show doesn't do them true justice, because TodoMVC just doesn't have a good place for one. The reason to use a mock is that you want to state your expectations up front. By that I mean: in the previous examples we checked the stub last, asserting after the fact, whereas with a mock we state the expectation beforehand; we'll see that in a moment. Straight from the Sinon website, the rule of thumb is: if you wouldn't add an assertion for some specific call, don't mock it; use a stub instead. In general, you should never have more than one mock in a single test.

So let's see it in action. Here I'm creating a mock on the todo model, and we set an expectation on the sync operation. The toggle method actually calls save, which calls sync under the hood, so this strays a bit toward testing Backbone itself, which you normally shouldn't do; let's let that slide for the example. In the expectation we can say: we expect this to be called exactly once, no less and no more. Then we call the method, and then we call verify on the mock. I think this is a good way to organize your tests: you state exactly what you expect up front, followed by the simple model call, and then you verify that it did in fact happen. If it didn't, verify throws an error, and you'll see it in your tests.
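Here's a sketch of that expectations-up-front shape, with a hand-rolled mock in the spirit of sinon.mock(model); Sinon's real mocks support much more (argument matchers, atLeast/atMost, and so on). The todo object is a stand-in whose toggle persists via save, which calls sync:

```javascript
// A hand-rolled mock: expectations are stated up front, then verify()
// throws if they weren't met.
function mock(obj) {
  const expectations = [];
  return {
    expects(method) {
      const exp = { method, min: 1, max: Infinity, count: 0 };
      obj[method] = function () { exp.count++; };
      expectations.push(exp);
      return {
        once() { exp.min = 1; exp.max = 1; return this; }
      };
    },
    verify() {
      for (const exp of expectations) {
        if (exp.count < exp.min || exp.count > exp.max) {
          throw new Error(exp.method + " called " + exp.count + " time(s)");
        }
      }
      return true;
    }
  };
}

// Stand-in todo model: toggle() persists via save(), which calls sync().
const todo = {
  sync() { /* would hit the server */ },
  save() { this.sync(); },
  toggle() { this.save(); }
};

const todoMock = mock(todo);
todoMock.expects("sync").once();  // expectation stated up front

todo.toggle();
console.log(todoMock.verify());   // true — sync was called exactly once
```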
Let's actually run the tests. Since we're working with Backbone, I imagine the people here work on the front end, and all the tests I wrote for TodoMVC run on the front end as well. If you navigate your browser to the URL of the runner page you've included, you get the full test suite run; this is just a screenshot of a portion of it. An advantage of running these tests in the browser is being able to run them on all the browsers, such as Internet Explorer and its slew of versions, so we can make sure that things like Function.prototype.bind, which isn't implemented in IE 6 or 7 and I think is missing in 8 as well, are handled.

We can also drill into these tests. Click on one (this is just the todo model suite) and it will take you to just that file and show only the tests you care about, with the full readout, including the code you've written for them. When your tests are passing this isn't terribly useful, but it's incredibly useful when your tests are failing. In this case we were expecting false, but it was true. We can dig in by clicking on the assertion error itself, and it drops down the code in question; here it was expecting true, even though my it statement says it should be false.

We can go beyond this, though, and automate it using Grunt.js. If you haven't used Grunt before, it's basically a task runner in which you can do many, many things: testing, templating files, whatever you need; you can do basically anything. Here's a basic setup using the grunt-mocha plugin. The project's website is actually quite good on this, and it's pretty easy to get going: you simply set a source, and there are some options you can set.
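Here's roughly what that Gruntfile looks like. This is a sketch: the task and option names follow the grunt-mocha plugin's general shape as I remember it, and the src path is a made-up example, so check the plugin's README for the exact configuration:

```javascript
// Gruntfile.js — a minimal grunt-mocha setup (sketch; verify option
// names against the grunt-mocha README).
module.exports = function (grunt) {
  grunt.initConfig({
    mocha: {
      test: {
        src: ["test/**/*.html"],   // the browser test-runner page(s)
        options: {
          run: true,
          bail: true               // stop at the first failing test
        }
      }
    }
  });

  grunt.loadNpmTasks("grunt-mocha");
  grunt.registerTask("default", ["mocha"]);
};
```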
Notice I have bail set to true. This means that if any of my tests fail, the first failure it comes across immediately stops the run, and you really want that once your test suite grows to 500 tests or so. This is what the actual output looks like: as we saw before, we had 36 tests, and here again we see 36 tests, just the important information without digging into detail. If you do need to go deeper, I'd run it in the browser; otherwise just run it in your terminal. In fact, you can configure Grunt to watch your files, and any time those files change it will automatically run your tests, so you know right away when something breaks. It's really great.

And here's what it looks like when it fails. I'm not sure why it says 1 of NaN, maybe because it's the first test; again, bonus points for anyone who fixes that. But here we see the exact same output we saw in the browser: the todo model should set completed to false by default, we expected false to be true. And we get 1 of 36 tests, because it stopped immediately at the failure.

Beyond that, and this is great for teams and community projects, there's continuous integration. We can make use of Grunt in a CI setup: every time someone makes a commit to GitHub, for instance, your continuous integration server pulls in your code and runs your tests to make sure they work. This is especially useful for projects where people make pull requests and break everything. Before I go on: Travis CI is highly used on GitHub, and that's the one I think people go with for the most part, but Jenkins, which I believe Node.js uses, is another; it's also open source and free.

So, back to this: what not to test. I touched on it before. We shouldn't be testing third-party code: we should not be testing Backbone, we shouldn't be testing jQuery. This all rolls up into network calls. We don't care about the server;
we don't want to test the server's output; we want to know exactly what's coming back. If it's going to the database, we want consistent data.

There are also disadvantages to testing. As I mentioned earlier, it takes extra time, and your boss might not want that. You can convince him: if I write all these tests now, then down the line, when the project gets very large, any change I make might break something, and as I mentioned before, with tests in place you can easily see where everything breaks. One thing you might not be able to get around is maintenance overhead: if you make changes to your code, you're going to have to go around and fix your tests; if it's something big, a huge refactoring, all your tests will also have to be updated. There are also blind spots that can occur, because the same person who writes your tests is also writing your code. So if they miss something, say an assertion that the first parameter coming into a method should be a number, and that's skipped in the code, it's also going to be missed in the test. You can get around this with good code review; the GitHub interface is really good for that, and I'm sure most of it gets caught there.

But can I always TDD?
It's not always easy, especially if you're in a new realm, doing something you haven't done before; you can't just start writing tests for something when you don't know what you're doing. The same goes if you're prototyping: prototypes frequently have terrible code, because you're just trying things out, the code changes constantly, and you don't have time to spend on it. So don't always force it on yourself, or feel like you're doing something bad when you're not doing TDD.

Beyond unit testing, there's integration testing as well, which takes components and sees how they interact with each other. You can run tests in the browser itself, simulate real clicks, and see that a real modal pops up, for instance. I don't have anything to show on that; it's too much to go into. But I definitely, highly advise looking into integration testing after your unit tests are written.

That's pretty much it. Questions?

Audience: We're just curious: you're using Mocha, Chai, and Sinon. Is there any reason why you'd recommend that combination of tools, as opposed to Jasmine, which has all of those built in?
Yeah. What I actually discussed with someone earlier this morning is that when you need to test something asynchronous, Jasmine is a huge pain: it's a set of three different functions and Boolean flags or some sort of counter, and it's really quite painful. TodoMVC didn't really offer a good opportunity for anything asynchronous, but in Mocha, simply adding a done parameter to your it (or beforeEach/afterEach) function automatically makes it asynchronous. So imagine you write an it, and the first parameter of its function is done: that's a function that gets injected for you (Mocha inspects your test function to see that you've asked for it). At the end of your test, when the asynchronous work is finished, you just call that function. You can even pass errors to it, and if an error occurs, it will fail your test. This is useful if you have events firing, maybe on a view: you can do this.model.on('change', ...) and, when that change actually occurs, just call done.

Audience: You said we shouldn't test the server, which is true. So normally you take a snapshot of the data and test against that. But what happens if the server people are having a fun night and they change the API, so your snapshot no longer matches the server output? Have you done anything along the lines of automatically regenerating that snapshot data every so often, or some way of keeping the sample data up to date?

Personally, I haven't had that problem; my data has stayed pretty much the same. But that's a good point. Integration tests could help with this: you'll know, because your code will just break when your app stops working against the real data. You'll just get a nice surprise, I guess. You could always set up Grunt to do that sort of thing, but it's local to people's machines, so maybe you make it a task, with warnings, and say: hey, run this and make sure you have the most up-to-date fixtures of some sort. And yell at your back-end guys for not telling you.

Audience: So, sort of a follow-up
question: Do you recommend any code coverage tools? And are there any tools you've seen that are focused on integration testing, in, say, a Node/Phantom environment?

Okay, so that's two questions; I was trying to think of one and lost track. What was the first one again? Code coverage: I've used JSCoverage before; you just run your tests through it. I think that's from the same visionmedia guys. It has pretty nice output, actually, and I haven't come across any bugs in it. In terms of integration testing, I've mostly just played around. A lot of people use Selenium, obviously; it's a really popular one. Also, coming out of AngularJS, there was Testacular, now called Karma, which has had a lot of work put into it. It bundles with Jasmine for Angular, but it's totally independent of that, and you can plug other runners into it; I'd give that a try. It's actually a bare test runner in general, so you can even have it run your unit tests. On the Angular side there's also a library built directly into AngularJS for driving click events and that sort of thing; I can't remember its exact name, so catch me later if you want to talk about it.

Audience: Any suggestion on assertion libraries that can check DOM manipulations, compare DOM elements?

Can you be more specific?

Audience: Well, you made a distinction between integration testing and unit testing, but in a lot of cases your JavaScript code is specifically geared to modifying the DOM. You actually want to run it in the browser and check that the DOM you end up with equals what you expect; basically checking all the DOM elements.

That is part of integration testing, and again, that Angular tooling or Selenium is a good way to test those sorts of things.

Audience: In practice, what Grunt tasks do you run? Which npm packages do you recommend?

grunt-mocha is the one I typically use. I didn't use it in this project, but I've also used grunt-template, which allowed me to generate my test file dynamically: if I add a new spec file, it's automatically included for me when I run my tests, which is really handy compared to adding it yourself. And on a project at work we're actually writing CoffeeScript, and I run my tests the same way by using grunt-connect, which just makes a connect server for you and lets you pass in your middleware. That way my CoffeeScript files are compiled on the fly, and I don't have to worry about building them before running in the browser or anything.

Audience: I kind of want to respond to another question, actually. One question was about integration tests and Sinon. I don't know if you use it, but Sinon has a fake server component, so you can fake out the server, swap out XHR, and send back fake data, or bad data, or just whatever you need. And somebody asked about DOM manipulation: Chai has a jQuery plugin that will test the DOM, make sure classes are set, check that text has changed to different content, and so on.

Oh, that's useful. Yeah, I've used the fake XHR in Sinon before. I find that most of the time I just need to stub the data, since I just return some fixture anyway, but the XHR one works well, and it even works for a JSON file.

Audience: This is just another response on the integration testing, because we use the exact same tools at Coursera: Mocha, Chai, Sinon. But we also use jsdom, and jsdom fakes the DOM, which means you can do really fast DOM tests. I consider a unit test to actually include our DOM, because it's like a function that makes DOM changes, just a different sort of unit test, but it runs really fast: we've got around 500 tests that test the DOM in two minutes, which is a lot faster than you could do otherwise.

Personally, I've heard disagreements about jsdom, because it's not actually a real DOM; it's not a real browser, and obviously it doesn't cover all the browsers you might need to test. So it is a good solution, but it's not the full-blown way to do it.
And the nice part about Backbone is that you can actually render views in memory, so you can unit test them, as I showed before with the event handling: I don't actually care what the DOM is doing, I can just pass in a value and test it that way.

Audience: A thing that's gaining a lot of traction is the functional testing approach, which is testing the consequences of a series of functions or behaviors, as opposed to testing atomic units of logic within JavaScript. It's something we're actually trying to implement right now; we have a hybrid approach, but it doesn't make a whole lot of sense in both cases. There are a lot of things functional testing doesn't apply to, and a lot of things unit testing doesn't apply to. You seem to be pretty directly geared toward unit testing. Why is that? Do you feel there is a place for functional testing, or not?

I don't have much experience with the functional approach to weigh against it, but unit testing is just the basics of what your application is doing, and you can really boil it down to exactly what's going on. That's why I really like it, and I try to write as many unit tests as possible; Sinon definitely lets you write a ton more than you otherwise could.

Audience: But you don't have the opinion that unit tests represent the entirety of the logic that needs to be tested?

Well, not that it always covers the entirety. It really is most of it, but it doesn't cover the interactions; that's what integration testing is for, right?
Especially when you go to different browsers. I'm not sure I can elaborate much more on that.

I have one last slide, if there are no more questions. I have a repo up for this: improve the tests if you can, or, if you need the practice, delete all the assertions I have in there and rewrite them until they pass. It's a good way to get practicing. Also, if you have any reason to try CoffeeScript, testing is a really good place to do it, because there's not a whole lot of new syntax you actually have to use: basically some skinny arrows and some functions. I recommend doing that, and it's actually a lot easier to read as well, because everything is streamlined down to a few indentations; it's really nice to read your code without all the clutter. Same with Grunt tasks: I recommend CoffeeScript for those too. That's it.