Well, my name is Tammer Saleh. I wrote a testing framework called Shoulda, and I work for thoughtbot. So that's the introduction screen. All right, next slide. All right, so when I was in high school, my English teacher always taught me to write hamburger paragraphs. It's kind of a cheat, but it's good to give you guys expectations about what I'm going to be talking about. We're going to talk about BDD, kind of the new buzzword in testing. We're going to talk about Shoulda, and we're going to talk about testing in general. So first off, what is BDD? When BDD first came out, I and a lot of my coworkers were kind of confused by the whole thing, because TDD was already there. And when we heard people talking about BDD, our first reaction was, well, yeah, that's what we're doing. Isn't that kind of what everybody's doing, you know? But it's a new way of thinking about test-driven development. It forces you to look at it as describing behavior instead of just describing tests. And in doing so, in practice, you write these short specifications that describe one piece of behavior at a time. And this is a quote from behaviour-driven.org. But the important part is that it says that behavior-driven development is a rephrasing of existing good practices. What it is not is a radically new departure. And for that alone, I think behavior-driven development is useful terminology. All right, so what is Shoulda? I think it was Joe O'Brien, in a presentation earlier, who was talking about how he wanted to be able to test a single line of code with a single line of code. If I only have to write one line of code to get some behavior from, for example, Rails, then I should only have to write one line of code in order to test that behavior. And that's where Shoulda was born: from almost exactly that same kind of test helper, which came out of thoughtbot at probably around the same time. What it evolved into was nested contexts and readable test names.
And it had to be fully compatible with Test::Unit, because we're a Rails consultancy shop. We've already got a lot of applications out in the wild, and we weren't able to retool to use a completely different testing framework. And like I was saying, you want to be able to test things as simply as you write them. And since most of the work that we do is Rails work, Shoulda comes with some ActiveRecord and ActionController macros. We're looking for about 80% coverage here. And then as REST became a bigger part of Rails, Shoulda also embraced that and tries to make it as easy as possible to test your RESTful controllers. All right, so let's look at the basic building block of Shoulda, which is just a should statement. The important part of this code is that this is a normal Test::Unit test case. It has a normal setup. It's got a regular Test::Unit test there. And it also has a should statement below it. That should statement is just a normal test; it just uses define_method to create a test. It's the kind of magic that we're all used to. Shoulda also comes with contexts. Contexts let you wrap setup information around certain sets of tests that share common behavior. Once again, these are just normal tests. Contexts can also be nested, which was a big thing that we needed at the time. When you nest contexts, you can really increase the readability of the tests, and you can really shorten up the tests that you're writing. But that's it for the Shoulda gem. Shoulda comes in two parts: one's a gem, one's a plugin for Rails. If you want to use Shoulda in regular Ruby projects, you've got all that functionality; it's a very small piece of code. The plugin actually includes the gem inside of it, so you don't have to use both if you're on a Rails project. Now I want to take a second and talk about macros and DRY code in general, because recently there's been a little bit of a backlash against DRY. And it's understandable.
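To make that concrete, here's a minimal, self-contained sketch of the idea — not Shoulda's actual source; the class name and internals here are invented for illustration — showing how a should statement and a context can boil down to plain define_method calls:

```ruby
# Toy illustration only: a "should" statement is just an ordinary test
# method generated with define_method, and a context just wraps setup
# around the tests defined inside it.
class MiniCase
  def self.context(name, &block)
    @context_name = name
    class_eval(&block)          # run the block; its should calls define tests
    @context_name = nil
  end

  def self.setup(&block)
    @setup_block = block
  end

  def self.should(name, &test_block)
    setup_block = @setup_block
    define_method("test_#{@context_name} should #{name}".gsub(' ', '_')) do
      instance_eval(&setup_block) if setup_block   # run the context's setup
      instance_eval(&test_block)                   # then the test body
    end
  end
end

class UserTest < MiniCase
  context "a new user" do
    setup { @name = "Bob" }
    should "have a name" do
      raise "failed" unless @name == "Bob"
    end
  end
end

# The generated method is a perfectly normal test method with a readable name:
UserTest.new.test_a_new_user_should_have_a_name
```

A real test runner would then pick up the generated `test_*` methods exactly as if they'd been written by hand.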
A lot of people have been cargo culting and cookie-cuttering, cutting and pasting code in order to try and make it as DRY as possible. A lot of people have been overusing Ruby magic, making unmaintainable, unreadable code. And so it's natural that there would be a backlash to that. But I want to remind people of why DRY exists. It's not just to reduce the amount of typing that a programmer has to go through. DRY code, if it's written well, is faster and easier to read. And that's the most important part to me of maintaining a code base. I want to be able to look at the code and immediately understand what's going on. DRY code also reduces programmer errors, and I think that's probably obvious to everyone. It's the same reason that methods exist and classes exist — it's just to reduce error, right? DRY code also distills programmers' best practices. So, talking about Shoulda in particular, if you've got a programmer on your team who really understands one aspect of the code that's used in multiple places, you want to encapsulate that so it can be tested the same way over and over again. You don't want to have each different programmer write their own version of tests that are essentially testing the exact same behavior. So Shoulda comes with some ActiveRecord macros to make it easier to test those one-liners and the simple stuff that ActiveRecord does. If it's easy to write in ActiveRecord, it should be easy to test. And it covers the most common 80% of ActiveRecord usage. So this is an example of a kind of contrived test case that we've got. Each one of these statements here is just a Shoulda macro, and it does the best practice for testing each of those things. So this specification is incredibly easy to read. It's very concise. It gives you all the information you need to know about the user. It tells you that the user requires a name and phone number, and that it needs a unique name.
It'll take these values for a phone number, but it won't take that value. Or sorry, it won't take these values, but it will take that value. It protects the admin flag. It has a profile, many dogs, a bunch of messages, and belongs to a lover, right? And Shoulda generates tests for each of those. As you can see, some of these statements generate many tests. Now, for controllers, we've got the same philosophy. We want to be able to test the most common things that you do in controllers. We want to make it as easy as possible, so you can read this and not be daunted by 500 lines of test code for a single action. So this just uses a context. It says "on GET to show," and then in that setup, it actually does the get. And then "should assign to user" just creates a test that asserts that the @user instance variable is being assigned to. Should respond with success, should render the show template, shouldn't set the flash, and any other should statements you want inside there, or nested contexts, or whatever. Here is an example of most of the things we've got for controllers right now, and we're constantly trying to add to these lists. So as Shoulda evolves, your tests become even easier to write and even easier to understand. Now, RESTful controllers are really interesting. Because — actually, can I get a show of hands here: how many people here work primarily with RESTful controllers in their Rails applications? Okay, yeah, so most of you, right? And I'm sure that all of you have realized that all of your RESTful controllers look almost identical, right? And that's always a bad smell when you notice that. And because the RESTful controllers all look almost identical, there have been lots of attempts to try and make the RESTful controller code itself DRY. You've got auto-rest, make_resourceful, resource_controller, resources_controller. I'm sure there's more; that was just a quick Google search to find those.
Shoulda takes the same approach. It has the same philosophy: if you've got all these controllers that are almost identical, it pretty much means you've got all these test files that are going to be almost identical. And actually with tests, it gets worse, because the controller behavior is almost identical, but when you're testing it, you've usually got these different scenarios you have to test it in. You want to make sure the controller acts RESTfully in these different scenarios, such as when you're logged in, logged in as an admin, not logged in, or when you're nested under a different resource. So Shoulda tries to make all of those as compact as possible. So yeah, you should be able to make assumptions about the basic actions of a RESTful controller — index, show, new, edit, you know, the CRUD — and you should be able to do that for HTML and XML. That's another problem that I see: when people try to test their RESTful controllers, they might be doing the respond_to, and maybe they're like, yeah, the XML is not so important. The HTML, I want to make sure that's all tested; but the XML, it's just respond_to, I'm sure it works just fine. But there's a lot going on with the XML that you should be testing, with every format that you test. And Shoulda's code base is designed to be extensible, so that you can add JSON, YAML, or any other type of RESTful testing to it. So we added a bunch of magic to Shoulda to get this to work. The statement should_be_restful generates on the order of, I don't know, 50 to 200 tests, depending on what you tell it to do. But those five lines there test an entire RESTful controller. And the only things that you need to specify here are the create and update params, because there's really no way for Shoulda to figure those out. Everything else, Shoulda figures out: it looks at the name of your test class. Sorry, here's an example of the tests that it produces. And it can be configured like that.
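As a rough sketch of where that multiplier comes from — illustrative names only, not Shoulda's real implementation — a RESTful macro is essentially a nested loop over actions and formats, defining one small test per combination:

```ruby
# Toy model of the test explosion: each (action, format) pair becomes one
# generated test. 7 CRUD actions x 2 formats = 14 tests from a single call,
# before you even multiply by logged-in / not-logged-in scenario contexts.
class RestfulMacroSketch
  def self.generated_tests
    @generated_tests ||= []
  end

  def self.should_be_restful(actions, formats)
    actions.each do |action|
      formats.each do |format|
        # Real code would define_method a test that performs the request and
        # asserts on the response, assigns, template, and flash.
        generated_tests << "test_#{action}_responds_with_#{format}"
      end
    end
  end
end

RestfulMacroSketch.should_be_restful(
  %i[index show new create edit update destroy],
  %i[html xml]
)
puts RestfulMacroSketch.generated_tests.size   # 14
```

Add two or three scenario contexts around a loop like this and you can see how one statement reaches 50-plus generated tests.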
So you're yielded a resource, and you tell that resource aspects of your RESTful controller. You can tell it what the class is, what the object's name is, whether it has any parents for nesting the resources, what actions you want to test, what formats you want to test those actions against, the create and update parameters, where it should redirect on the different actions, and what flashes should be set if it's successful. I think this code example might be a little bit out of date; there's probably more you can configure now in should_be_restful. Now, I'm not sure, honestly, if should_be_restful is a good idea. It's always good to make tests easier to write, and generating short, simple tests is always better than having one big test with a bunch of assertions just for your one show action. But, you know, when you're generating 50 tests with five lines, it's important to understand what's being tested. To that end, I've made the source code inside should_be_restful as easy to understand as possible. So I encourage you, if you're going to use should_be_restful, to look inside there and understand exactly what it's doing. And you'll almost invariably have to write your own tests around should_be_restful. If any of your actions do anything that's outside of REST, or if you have actions that don't belong inside CRUD, you'll have to test those on your own. Obviously this is not going to cover that. So should_be_restful comes with a little word of warning that there's a lot of magic going on in there. Okay, so now I want to show you guys the Shoulda internals, right? First I want to tell you how it used to work. We then had to refactor for a bunch of reasons, and I want to tell you how it works now — how the macros themselves are written, and how to write your own. So the first implementation of context was incredibly naive. We just wanted to get this going as fast as we could.
It was used internally at thoughtbot, and we just wanted something that would work — kind of a proof of concept. Should, context, setup, and teardown were all defined directly on Test::Unit. There were no classes involved; there were just four different methods, right? And that's a problem, because of namespace pollution. Anytime you've got that much stuff being defined on Test::Unit, it's just not a good situation. It used a bunch of class variables to keep track of the context. So if you called context with a block, all it did was set a class variable saying, hey, you're inside a context, so that the next should statement would be able to look at that. And then Rails recently — I think 2.0.2, 2.0.3, or maybe Edge — added a setup method that takes a block. And that broke Shoulda, because Shoulda was defining setup on Test::Unit, which arguably it shouldn't have been doing anyway. So, you know, we hunkered down and did the rewrite, and now everything is much better. We have a Context class, with should, setup, teardown, and everything defined on that. And then we have two methods on Test::Unit, should and context, and all those do is build Context instances and delegate to them. So we've gotten rid of the namespace pollution, we're compatible with Rails Edge now, and everything in general is much cleaner. So a should statement, like I said, just creates a one-off context with a single statement. And inside the context block, it records the name and the block. So when you write should "be really cool" and you give it a block, all it does is record the name and the block, and then it builds the tests at the end of the context. And it runs the setup and teardown blocks around those as well. So here's an example of one of the ActiveRecord macros that Shoulda comes with. This is the one for protecting attributes. There's actually a little bit of complexity when it's parsing the options.
But the rest of it is very simple. It just loops through the attributes that you want protected and creates a should statement for each one. It figures out what the protected attributes are for the class, and it asserts that the attribute is in there. An important part about this test method is that, like was discussed earlier, we try to avoid testing the framework whenever possible. You don't want to actually interact with Rails and assert that Rails is protecting the attributes like it said it would; you trust that Rails is going to do what you told it to do. What Shoulda really wants to test for you is that you're instructing Rails to do what you think you're instructing it to do, if that makes sense. You don't want to actually try the mass update and then see if things changed. You just want to know that Rails understands that you want this attribute to be protected. And that's the philosophy we try to follow with all the tests in Shoulda. Now, the nice thing about Shoulda is that it really encourages writing your own macros. They're totally simple to write: they're just methods that contain should statements or contexts. That's it. Here's a really common one that we use. You would define this in test_helper. It's just called logged_in_as. It takes a user — maybe it's a symbol, maybe it's an instance; it depends on how you want to use it — and then all it does is log the user in. That's an incredibly simple macro, but it also makes things incredibly easy to read when you're looking through your functional tests. You see logged_in_as :admin, and then there's a bunch of should statements or should_be_restful statements — just a bunch of tests — and everything inside that block pertains to somebody who's logged in as the administrator. So this is where writing simple macros and drying up your tests can really make them easier to understand, easier to read, and easier to write.
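Here's a toy illustration of that "test the configuration, not the framework" idea. Everything in it — FakeModel, its attr_protected, the helper name — is a stand-in I've invented, not Rails or Shoulda code:

```ruby
# The macro-style check below never performs a mass update; it only asserts
# that the class was *told* to protect the attribute, and trusts the
# framework to enforce it from there.
class FakeModel
  def self.attr_protected(*attrs)
    @protected_attributes = attrs
  end

  def self.protected_attributes
    @protected_attributes || []
  end
end

class User < FakeModel
  attr_protected :admin, :salary
end

# One check per attribute: is the configuration what we meant it to be?
def should_protect_attributes(model, *attrs)
  attrs.map { |attr| model.protected_attributes.include?(attr) }
end

puts should_protect_attributes(User, :admin, :salary).inspect   # [true, true]
```

In a real suite, each of those booleans would be a generated test with an assertion, but the shape is the same: loop over the attributes, emit one small check each.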
So now I'd like to talk about just some general testing goodness: about mocking and fixtures, white box versus black box testing, how to avoid brittleness in your tests, and how to keep your tests generally effective. Mocking. Now, mocking has been talked about a lot for the past, I think, six months to a year, when it really seemed to get picked up in the Ruby and Rails community. Mocking has a ton of really good benefits. One is that it keeps your tests focused on the code at hand. It allows you to test integration with your external resources. Long ago, before I knew about mocking, or before I really trusted it, we had to integrate with a credit card service, and I actually wrote a small Camping application to pretend to be that credit card service for the tests. So before you could run your unit tests, you had to launch this Camping application, and your application would actually test against it, doing real HTTP transactions. That worked, but it was a pain in the butt. All my developer friends hated me for it, because now they had to know how to launch this Camping server whenever they wanted to work on the application. So we quickly refactored that to use mocking instead. It's a no-brainer once you understand how useful mocking can be. Mocking can really improve the readability of a test, especially when you're dealing with unit tests that have complex object graphs. Without mocking, you end up instantiating tons of extraneous objects and having to deal with validations, just to be able to test one small bit of functionality that has nothing to do with all that. So if you use mocking judiciously, you can really clean that up. Mocking has its downsides as well. There's over-mocking, which can quickly create brittle tests.
We'll talk about this a little bit more with white box versus black box testing, but in general, the holy grail of tests, in my opinion, is that your test suite should not break if you do a refactoring that doesn't change the behavior of your application. Say I'm doing a functional test that's using mocking to assert that I'm using, I don't know, find instead of find_by_id, for example, and then I change it to a find_by_id that raises if it doesn't find the associated record. My functional test shouldn't care exactly how the controller goes about it, as long as the functional test sees that it's finding the right record, and that when it can't find the right record, it gets a RecordNotFound. But if you're doing mocking in your functional tests, it does care exactly how you do it. You have to mock out the call to find, or you have to mock out the call to find_by_id, so if you refactor, you then have to go in and change your tests. So a refactoring that does not change the behavior of your application will still break your tests if you're mocking — if you're using over-mocking, as I call it. It can also give you a false sense of security. The benefit of mocking is that you're not testing the behavior of associated objects, associated classes. The downside of mocking is that you're also not testing the actual integration point between your code and the associated code. You're assuming that you're calling that associated code correctly. You could have a bug in that, but if you're mocking it in the same way that you're assuming you're calling it, then your tests are going to pass and you've got incorrect code. This is a very contrived example — nobody would ever believe that this was correct SQL — but you can imagine that there's plenty of SQL that looks correct at first glance. You would write the test to mock that out, thinking you're going to get the same thing that you're expecting to get, but you're not.
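Here's a contrived, self-contained stand-in for that kind of slide example — a hand-rolled mock with invented names, not Mocha or any real library. The SQL is typo'd, but because the test's mock expects the same typo'd string, the test goes green anyway:

```ruby
# Demonstration of a mocking false positive: the code under test and the
# test share the same wrong assumption, so the bug sails through.
class Finder
  QUERY = "SELCT * FROM users"   # bug: typo'd SQL copied into the code

  def initialize(db)
    @db = db
  end

  def all_users
    @db.execute(QUERY)
  end
end

# A hand-rolled mock: it only answers the exact string it was told to expect.
class MockDB
  def initialize(expected_sql, result)
    @expected_sql, @result = expected_sql, result
  end

  def execute(sql)
    sql == @expected_sql ? @result : raise("unexpected query: #{sql}")
  end
end

# The "test": the author copies the same typo'd SQL into the expectation,
# so the canned result comes back and the assertion passes.
mock  = MockDB.new("SELCT * FROM users", [{ name: "bob" }])
users = Finder.new(mock).all_users
puts users == [{ name: "bob" }]   # true -- green test, broken SQL
```

Against a real database, that query would blow up immediately; the mock never gives it the chance.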
This test would pass just fine, your code would have this in there, and your application would have a bug. Does anybody here like fixtures? Raise your hand. All right, you two guys. You're all alone now. Sorry — yes, Rails fixtures. Yeah, Rails fixtures, not some hypothetical fixtures, that's correct. Yeah, I am at a Ruby conference, I know this. Okay, so I absolutely hate fixtures. They're incredibly brittle. I can't tell you how many applications I've walked into that use fixtures extensively in their tests, where I have to change one of the fixtures to get one of my new tests to pass, or I add to the fixtures in order to get one of my new tests to pass, and I've got 200 test failures all of a sudden. I have to go through all those tests, find out what they're assuming about the dependency graph inside that fixture setup, and fix them all, and it takes forever. It's absurd. Fixtures bypass validations. Now, by the way, I know there's been some new work on fixtures in the recent releases of Rails, and there are some plugins that address some of this, like fixture scenarios. Everything I've read so far, I still don't like. But I might be out of date, so if I am... Yeah, we started using fixture scenarios, and that really resolved a lot of that pain. For what we're doing, fixtures have been just awesome on the project that we're working on. Isn't that with the foxy fixtures enhancements? Yeah. Which is sweet.
And in the case where we add a new fixture for, like, one specific case, we create a new scenario that doesn't affect any of our common fixture environment, which we've carefully designed, and it's been very handy. Okay, well, I'm actually going to address a couple of those points right here on the slides. So what he was saying was that they use fixture scenarios, and also foxy fixtures. You have a bunch of different scenarios that you maintain for the different sets of tests that you write. So if you're going to add a new fixture, you really just fork off the scenario, create a new one, and add the fixture in there. Is that about right? That's better, absolutely, than the way Rails was doing fixtures before. The problem that still sticks in my head is that, A, all of a sudden I've got a bunch of fixture scenarios, off in some other files that are nowhere near the tests that I'm actually working on. When I jump into a test, I don't see the stuff right there. I still have to go into all these other different fixture files and kind of build up the object graph that the test is actually looking at. Even if you do have multiple fixture scenarios, that problem — the disconnect between your fixtures and your tests — still exists. And having multiple fixture scenarios, to me, still means a lot of maintenance. I'm going to have all these extra files inside my fixtures setup that I'm going to have to maintain and make sure they all still work. Generally, yeah, fixture scenarios are better than the existing fixture setup. The foxy fixtures stuff — isn't that where you can specify relationships without having to hard-code IDs and that sort of thing? That's fantastic. Still, there's the problem of validations, and the inexplicitness of it — I'm sure that's not really a word. This is what I was talking about, the disconnect. Even if I'm using fixture scenarios, how many associated posts does Bob have?
I'm in the middle of this test, and I suddenly have to go searching somewhere else to find this information. That slows me down. To me, it's just not good. Fixtures encourage these complex dependency graphs. Like you actually said, you guys have really worked hard to craft the dependency graph inside your fixtures. I would say that feels to me like fixtures are encouraging you to have a complex dependency graph. I'd rather see that be a little bit more explicit. Well, we need a data set to test off of. That's how we do our integration testing. You've got to have something, some usable data set, to run our application and run things through. Fixtures just really make sense — but I'm open. Well, I'm actually not going to give you a solution. This is the end of my presentation. So anyway, I think fixtures are generally unmaintainable, right? Oh, well, okay, I did add this in at the last minute. Okay, so, alternatives to fixtures: inline object creation. Nested contexts make this much more maintainable. And this is not just a plug for Shoulda. I'm going to step back and say I don't care what testing framework you use, or what you use to make your tests better, as long as you use something to make your tests better. I've had to work on 2,000-line test files that were just doing straight Test::Unit, and it killed me. I mean, RSpec, I believe, now has nested contexts, and this same argument applies there. So with nested contexts, you can slowly build up your dependency graph so it's only there for the tests that actually need it, right? So your first context would create a user, and it would have a bunch of tests for that user. Your next context would say "with a bunch of posts," and it would have some tests that test that user given that it has a bunch of posts. And the next nested context might say "where some are approved and some are unapproved." Extensive mocking is also an alternative to fixtures, because like I said before, it can save you from building that complex dependency graph in the first place.
It's really only applicable to unit tests, though. You still want to do your integration tests with a full, real set of data, right? And like I said before, there are obvious dangers to extensive mocking. Some friends of mine over at — I think it's OGC Consulting, or something like that — produced the Object Daddy pattern. I encourage you to go look at that; it's too much for me to put in this presentation right now, but it's a very clever way of generating ActiveRecord models quickly inside your tests. For my applications, I actually just did a small factory thing. The nice thing about that Object Daddy post is that it actually walks through the various approaches to doing factories. And they're all trying to deal with the same problem, which is that it's too hard to create valid ActiveRecord objects inside your tests. So this is what I just use in my tests. This is not part of Shoulda; it's just a way of getting around fixtures. This is just a classic factory module. There's a comment there — it's kind of hard to read against the white — but it says you shouldn't have to change stuff down here. So once you've got this file in your application, for every model you add one of those params methods that just sets some default parameters for that model. And then you can call Factory.create, give it the model name, maybe give it some params that you want to be different, but other than that, it's going to give you a kind of default valid model. And so if you have, like, a context where you want an administrator with a bunch of posts, you don't have to, A, remember what the valid attributes for posts are, and, B, fix all the valid attributes for posts in every test in the application — you just change it in your factory, in one spot. And this makes it very easy to build up those object graphs inside your contexts. It's one line per model, and you can do it in loops and that sort of thing. So this, right now, is my preferred way of dealing with data without using fixtures.
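The actual module from the slide isn't reproduced here, but a minimal self-contained sketch of the same idea — defaults per model, overridable per test, with plain hashes standing in for ActiveRecord models — looks something like this:

```ruby
# Classic factory module, sketched: one set of default params per model,
# and a create method that merges in whatever a given test wants to change.
module Factory
  DEFAULTS = {
    user: { name: "Bob",   phone: "555-1234" },
    post: { title: "Hello", body: "First post" }
  }

  # You shouldn't have to change anything below this line when adding models.
  def self.params_for(model, overrides = {})
    DEFAULTS.fetch(model).merge(overrides)
  end

  def self.create(model, overrides = {})
    # In a Rails app this would be Model.create!(params); here we just
    # return the merged params so the sketch stays runnable on its own.
    params_for(model, overrides)
  end
end

# One line per object, only the interesting attribute spelled out:
admin = Factory.create(:user, name: "Alice")
```

If a model's validations change, you fix the defaults in one spot instead of in every test that builds one.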
I've pulled fixtures out of most of the applications that I work on, and I just use this. Okay, let's get to the wars: white box versus black box testing. White box testing tests the internals of the code that you're working on, usually by testing your private methods or by mocking out the internal stuff that it's doing. The benefit of white box testing is that you can get your tests to be very short and much more understandable, because you're really only testing that one small part of behavior that you care about. The rest of it, by stubbing it out, you're saying: I don't care about that. It's also arguably easier to attain high test coverage, because your tests are short and you can very easily stub the little internals. You can say, when it tries to connect to this other object, give it a timeout — so it raises a timeout exception, or something like that. But it can lead to over-mocking, and it can lead to brittle tests, because by definition, with white box testing, a refactoring of your code is going to break all those tests. You have to re-engineer those tests to account for the new implementation. So the opposite of that is black box testing, where you only test the public API: you call the method and check the results. The good part about black box testing is that it ensures you're actually testing those integration points.
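A small invented example of that trade-off: the behavior-level (black box) check survives an internal refactoring, while a test pinned to a private helper (white box) does not. The class and method names are made up for illustration:

```ruby
# Version 2 of this class: total used to delegate to a private add_up
# method, which the old white-box test stubbed out. The refactor inlined it.
class Price
  def total(items)
    items.sum { |item| item[:cost] }
  end
end

items = [{ cost: 3 }, { cost: 4 }]

# Black-box test: public API in, result out. Still passes after the refactor,
# because the observable behavior never changed.
black_box_ok = (Price.new.total(items) == 7)

# White-box test: it stubbed the private add_up helper. That method no
# longer exists, so the stub target is gone and the old test breaks.
white_box_ok = Price.new.respond_to?(:add_up, true)

puts [black_box_ok, white_box_ok].inspect   # [true, false]
```

Same behavior, same public API; only the white-box test needs re-engineering.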
Like we talked about with mocking, you can get a bunch of false positives: you think you're calling the associated object correctly, you're mocking that out, and you're not actually calling it correctly — or the associated object's API changed, so you were calling it correctly at one point and you're not anymore. But with black box testing, you're running all the way through; you're not using mocking, so you're going to get a break. And black box testing won't break if you refactor and you're only changing the internals. If the API doesn't change, your tests won't change. The downside of black box testing is that your tests can get very long. Even with nested contexts, even with test helpers and that sort of thing, there's a lot of setup in there. All right, brittle tests. Brittle tests kill me. Coming into a new project where the developer on the project only uses rake, and I start using autotest... has anybody had this problem? You come into a new project, the developers have never heard of autotest, all their tests pass — you run autotest, and suddenly you've got like 20 failures. Please, okay, raise your hands if you're using autotest. Everybody else, just go check out autotest — gem install ZenTest — it'll change your life with testing. It's amazing, I love it, and it catches brittle tests. He actually put randomization in there. With autotest, you're editing your files, you've got autotest running, it's constantly checking your files for changes, and when you save something, it runs only those tests that pertain to that file. It also randomizes the test order now, because he hates brittle tests so much. Fantastic. Brittle tests break when you make trivial changes. You're changing a method's implementation, and you've been doing white box testing: it's going to be brittle, it's going to break. You're changing unrelated parts of the application — for example, you add a new fixture somewhere way over here, and some tests way over there break: it's brittle. You change another test — I know it sounds like this can't happen; it happens. You change another test: Mocha right now has
a bug where, if you mock out — or set an expectation on — a private method on an object, it won't restore that method. So if you set that expectation in one test, it could be five tests later that you get method missing, or you get completely incorrect behavior. You change one test, and another one breaks. And I think if you're not using transactional fixtures, and you add data in one test, that data could persist through to another test farther on down the line. This is the kind of stuff that autotest helps you catch, because it might just randomly work under rake — rake is going to run things in a fairly predefined order, and autotest isn't. It will catch these things by running your tests in a different order. If your suite breaks when you run the tests in a different order, your tests are brittle. The problem with brittle tests is that you can't trust them. The whole point of tests is that you feel confident the application does what you're expecting it to do. If the suite breaks on one guy's machine and not on yours, and they've got the same environment, your tests aren't trustworthy. So what causes brittle tests? Well, white box testing — like we said, it'll break your tests when you refactor. It might be worth it; there are obvious advantages to white box testing; you've just got to be aware. Being overly explicit in your tests — these are just some examples. I don't know if it's still this way, but the Rails book, and actually, I think, the scaffolding for ActionMailer, essentially just has some emails in the fixtures, and the tests just say, hey, is this exactly the same? It's a really poor idea. If your client asks for a specific piece of copy, and you put it in there, and they're really insistent on it, yeah, maybe do an assert_select on that and make sure it's in there. But don't just go blindly testing the exact contents of the email against a fixture file. The order that things come back in, changes to the email, changes to any of your setup above — anything will break that. Too much assert_select: if your assert_select is using a path that's too explicit,
so that when your designer goes in and pulls out a div, your tests break, it doesn't make any sense. That shouldn't happen, right? It does happen to us all the time. It's just that programmers get lazy, or they think they're doing a really good job at testing. That's another hard argument to make with them; they're like, well, I'm testing every little element inside this view, so my test coverage is amazing. Yeah, it is, and I'm going to make you fix it every time it breaks. [inaudible] Yeah, this is another one that gets me: you're trying to test a search method. Now, granted, testing search methods is always difficult; I've not found a good way to test advanced search methods where you're doing lots of conditionals, building up an SQL string, and then getting some stuff back. But the worst way of doing it is to just have a predefined list of objects that you expect to get back, run the search, and see that they're the same. I mean, there are so many reasons why this is wrong, but an obvious one is if you're doing an ordering on it and some of the objects could come back in either order and it's still valid. If you're ordering by the number of posts they have and they each have three posts, your tests will break 50% of the time. It's really good fun. Okay, another reason for brittle tests: laziness. Test order, like we talked about. Using data explicitly; using data loaded from prior tests. I've seen lots of tests where they just forget to load some of the fixtures, but the tests have been passing because there were fixtures loaded from a previous test. Once again, run it under autotest and it's going to catch that stuff. If you're doing things on the file system, sometimes you'll want to mock that out; that's a perfectly fine situation. Sometimes you'll actually want to do the file system access; just keep it in a sandbox, and you want to clean up after doing it. There are many different ways of being lazy; these are just some
examples, but all of these cause brittle tests. Really? Do you know who wrote that? [Audience] It's [inaudible]. FileSandbox is just a block; within the block, it cleans up the directory afterwards. Fantastic, I'll look into that. FileSandbox? Cool, thank you, that's great. Okay, so: avoiding brittleness. You want to make no assumptions. When you're writing a test, you shouldn't be assuming anything about how it interacts with another piece of code. You only want to describe the important behavior; the exact order and the exact contents of the array that you get back from the search is not really the important behavior. If you're searching for approved posts that are older than, you know, three days ago, you want to make sure that you get approved posts, you want to make sure that they're all older than three days ago, and you want to make sure you don't get any approved posts that are younger than three days, and none that are unapproved, right? You want to actually test the attributes of the return set, not all this other stuff like exactly the order they're returned in. You want to keep your tests short and fairly self-contained, which is why you shouldn't be nesting contexts too far; if you find yourself with, like, six levels of context, it's just going to be too hard to read. You want to try to describe one piece of behavior at a time. I don't understand people who write functional tests where the test is just named def test_show and asserts all this different functionality about the show action; it's just not the right way to do it. You want to break it up into, like, one aspect of what the show action should do. And don't use fixtures, because of what I was saying earlier: fixtures force you to look outside of your test file, so it's not self-contained. You want to be aware of over-mocking, and you want to favor black box testing. I understand the arguments for white box testing, and they're good arguments, but if you want to avoid brittleness, the only way to do it is to stay away. So,
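That search advice, testing the attributes of the return set rather than its exact contents and order, can be sketched in plain Ruby. The Post struct and the search method here are made up for illustration; in a Rails app this would be an ActiveRecord finder.

```ruby
require 'date'

# Hypothetical model and search method, standing in for an AR finder.
Post = Struct.new(:approved, :created_at)

def approved_older_than_three_days(posts, today = Date.today)
  posts.select { |p| p.approved && p.created_at < today - 3 }
end

today = Date.today
posts = [
  Post.new(true,  today - 10),  # should be returned
  Post.new(true,  today - 5),   # should be returned
  Post.new(true,  today - 1),   # too recent
  Post.new(false, today - 10)   # not approved
]
results = approved_older_than_three_days(posts, today)

# Describe the important behavior: everything returned is approved and
# old enough, and nothing that should match is missing. No assertion
# about the exact order or exact contents of the array.
raise unless results.size == 2
raise unless results.all?(&:approved)
raise unless results.all? { |p| p.created_at < today - 3 }
```

Asserting the properties of the result set stays green if the ordering or the backing SQL changes, where an exact-array comparison would break.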
down to it: writing effective tests. You want to be mindful of what you're testing. You want to specify one piece of behavior at a time. The names of the tests matter; they help you think about what you're testing. You want to describe expected behavior, not implementation details, and you want to avoid brittle tests. Writing tests is as hard as writing the application code. Most of the time, writing tests is harder than the application code. That's normal; if you're doing that and you think it's wrong, you're on the right track. What tests save you is those hours and hours of mindless debugging afterwards, when you have untested code that suddenly broke and you have no idea why. So you have to be very mindful about how you write your tests. And this is behavior-driven development: if you're doing good TDD, if you're writing effective tests, then you are already doing behavior-driven development. All right, so that was it for testing in general. Let's talk about shoulda again for a little bit, and some of the future directions we want to go. We want to improve the ActiveRecord macros. We want to add some more support for JSON and YAML in should_be_restful, or maybe replace should_be_restful; it feels like a big configuration block, and that's not very good. And I'd like to invite some other maintainers to try to keep development on shoulda at a good pace, and maybe, along those same lines, use Git. If you want more info about shoulda, it's got a homepage and some RDocs, and we've got a Google Group and a Lighthouse. And I wanted to say thanks to Thoughtbot. Thoughtbot is a great place to work; it's based in Boston, they allow me to work on this type of thing on work hours, and it's just an amazing family community to work there. And we are hiring, by the way, in our Boston offices, so if you're looking for a job in that area, just come talk to me. So, does anybody have any questions? Because I do love questions. All right, yeah. [Audience] What I'm trying to figure out is what the value of testing private methods is, because you're saying that they can't
work if you delete a private method; you can't have a good definition of what it means to break them, because if a method is private, by definition it can't be part of the contract. Well, actually, when I was referring to private methods, I meant writing tests for your private methods. Was that clear? [Audience] So I don't even see why you'd care. People shouldn't be writing against private APIs; isn't that the whole notion of a private API? Right, so let me try to rephrase your question. You're saying, I think, that people actually care a lot about testing private functions, and it produces a lot of grief when people break private functions, so why would you test private functions at all? Yeah, that's your question. I have actually tested private functions before, and I still feel that I had a good reason to do it. I don't do it often, but if I have a particular piece of business logic that is very involved and hard to understand, I'll break it up into many methods, as good practice, and write the methods in a way that each has one small responsibility. They're still private, because no other object in my domain needs to call those methods; they're being called by the main method. And I might have failing tests at the moment for the overall public method that is using these private methods, and in order to really nail down why that's failing, I will write tests for the private methods using the send hack, just to make sure that, okay, I know this one's working right, I know this one's working right, okay, there was the bug, in this one. In general, I don't think you should be testing private methods; I really just use it as an aid in trying to debug why my other tests aren't passing. As for whether or not I'll leave those private-method tests in there afterwards, I probably won't, because I don't want what we were talking about, refactoring, to break them. [Audience] You could leave them in as documentation.
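The "send hack" mentioned here relies on Ruby's `Object#send` ignoring method visibility, so you can poke at each private helper while debugging a failing test of the public method. A minimal sketch; the Order class and its methods are invented for illustration:

```ruby
# Invented example: a public method composed of private helpers.
class Order
  def total
    subtotal + tax
  end

  private

  def subtotal
    100
  end

  def tax
    8
  end
end

order = Order.new
# order.subtotal would raise NoMethodError (private method), but send
# bypasses visibility, letting a test check each helper in isolation:
order.send(:subtotal)  # => 100
order.send(:tax)       # => 8
order.total            # => 108
```

Once the failing helper is found and fixed, these send-based tests are the ones you would likely delete, since they break on any refactoring of the private internals.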
Right, and I think... yeah, that makes me nervous. [Audience] I think you can tell a little bit: what the hell is this really complicated private method doing? Oh, it does this, does this, and does that. Right, that's a very good point. I mean, if you don't want to have a contract, then the only contract that you want to have is that it makes the public method work. But if documentation is enough, you can also document not just the contract but how it works, so that maintainers know what's going on. There are comments, and then there are comments that the machine verifies. [Audience] Doesn't Merb have the concept of the public API, the semi-private API, and the really private API? Partly because they want to make some things semi-public for testing purposes; like, there might be something in the controller that you don't really need to use, but they want you to be able to use it to test the controller. Yeah, and as you said, when he's hacking on Merb he might completely break the really private stuff, and that's fine; but there's the gray area where it's not really supposed to be part of the API, but he'll be careful when he's treading around there. Yeah, basically the semi-public API: if you break that, it will never break someone's production app. I'm sorry? [Audience] When you have a public method that's composed of many private methods and you want to test some specific functionality that one private method does... I'm not saying you should test every private method, but sometimes it's a better solution than mocking, because to test the big method you have to mock out basically every other private method. Well, what you're talking about is white box testing. If I've got this public method that I'm going to be testing, and it uses a bunch of private methods to do its thing, I would mock out the private methods, test that the public method works the way I'm expecting given the values from those private methods, and then I would write a
bunch of tests for the private methods, right? I'm saying you could; I think you might disagree with me on that, I don't know. [Audience] When you're arguing over whether you should test that private method, that's a smell that your class isn't cohesive. What I would do is pull that private method out into its own class, test what you need to test there, and then your functional tests hit the public API, which uses that other class, as you normally would. Yeah, I've heard that, and I actually agree with it; I think that's a very good technique for doing that. [Audience] Two things: even though I might not necessarily keep the tests for those private methods, I would still write them first. You're saying that you might not keep them, but you feel better about it? Yeah, you know, depending on the complexity, you want to build it test-first, whether or not you keep the tests. [Audience] What I'm saying is that my approach to this is just not bothering with the private methods; if it bothers you, you just make them public. Yeah, well, we're kind of getting off track a little bit here, but I think a lot of people treat public methods as your API, and once you've made them public, you're kind of signing a contract saying this is how they're going to be, this is how they're going to be supported. Okay, you've had your hand up for a while. [Audience] Just to respond to the question 'why test private methods': why write private methods? A test guarantees that something works in a certain way, but for me, the test is also something you give to the client of that code, which might be another programmer. And if a test shows how these private methods chain together to deliver the public functionality, you know, if you change those private methods, the tests break, or you throw them away; but if you can change those private methods so the public functionality is still delivered, great, just pull the tests out. But to say 'why test private methods': why
write private methods? Well, to go along with that, I think the best idea is to pull it out into its own class, so that you encapsulate those tests over there. I think that's pretty good. Well, they're not really... yeah, to do the refactoring; I mean, they're obviously not really private classes, but they would be like utility classes, yeah. [Audience question, partly inaudible, about the should_be_restful macros] ...what kind of concerns do you have? I mean, people won't know exactly what's being tested, and there's metaprogramming. So how has that been working so far? It's been... I want to make it clear that it comes with a big warning; this was kind of a road we wanted to go down to see how it would work. So far it's worked pretty well. I haven't met anybody who's been able to understand how to configure should_be_restful right off the bat. There's fairly good documentation, but it's just a complex beast. It can distill the tests down to four lines, but I've been trying to brainstorm ways of maybe only distilling the tests down to ten lines while making it a little bit more clear, a little bit easier to adapt. For example, inside a should_be_restful block, there's no way to, like, add a test to the show action's context; do you see what I'm saying? So I use it all the time in my applications, but I'm also the writer of it, so I understand how it works really well. [Audience] If you're not using fixtures, what do you use to populate your development database with sample data? You said you should never load fixtures into the development database, so what do you use?
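Going back to the refactoring just discussed: pulling a hard-to-test private method out into its own small class could look like this sketch. CurrencyFormatter and Report are invented names; the point is that the extracted class has a public, directly testable API, so no send hack is needed.

```ruby
# The involved private logic becomes a small collaborator class.
class CurrencyFormatter
  def format(cents)
    "$%.2f" % (cents / 100.0)
  end
end

class Report
  # Injecting the collaborator also makes it easy to mock in tests.
  def initialize(formatter = CurrencyFormatter.new)
    @formatter = formatter
  end

  # The public API delegates to the collaborator.
  def line_item(cents)
    "Total: #{@formatter.format(cents)}"
  end
end

# Unit-test the extracted class directly:
CurrencyFormatter.new.format(12345)   # => "$123.45"
# Functionally test only the public API of Report:
Report.new.line_item(12345)           # => "Total: $123.45"
```

The former private behavior now has a contract of its own, and Report's tests only need to cover its public method.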
We have a bootstrap directory. It depends on the type of data, but there is some data that needs to be there for the application to run: you've got, like, an administrative user, maybe a bunch of categories that are essentially hard-coded but you've got CRUD for them anyway, so you want them in the database anyway. That goes in your bootstrap directory. That's just a pattern; it's not part of Rails, but it's a pattern that we've been using. The other way of doing that is to put it inside your migrations, which a lot of people frown upon; I love it. But that's just for production data. As far as the development database goes, you should grab it from staging or from some known area and just pull it down into your database. That's how I do it; if I don't have fixtures, that's just an easy way to get some data to play with the application. [Audience] What if it takes you 45 minutes to generate the data that you need to test a new feature, and the feature you're testing is highly destructive to the existing data? Well, that's what database backups are for. But I guess I've never had that problem; I guess it's never taken me very long to build up some data using the application to test a feature. You can also script it; you can go into the console and do it. [Audience] I've done all that stuff too, but the application I'm on now actually requires a monumental amount of stuff to be in place before I can start working on features, so we have a lot of data that we use; I mean, a lot of data, in this particular app. [inaudible] That's interesting; the interesting solution he mentioned was rake tasks, or just scripts, that will load essentially database scenarios for the developer. Okay, a lot of people do that, so, okay, you can have scenario dumps. I missed the point that I was going to make about that. I guess having the fixtures
there to load into development is a nice convenience, but it's not worth the cost of fixtures. And when I said you should never, ever load your fixtures in development, I was actually thinking of production. You know, developers who do that... I know people who do that, and it got me going; sorry about that. Do you have any other questions? [Audience] In my experience with functional tests and integration tests, what I've seen is that integration tests are more about checking the flow, and not so much about what happens when you get to a destination, you know, that this controller does this. And if I did test the controllers there, it seems to me that would be wrong, because I would be duplicating my functional tests. Is my logic here right? Essentially, there's some confusion about what integration tests are supposed to be testing, and functional tests are pretty much already testing that. The first thing I need to say is that Rails, as I'm sure most people know, completely destroyed the definitions of unit, functional, and integration tests. Unit tests are your model tests now; functional tests are your controller tests. Integration tests, like you said, really are just to test the flow of the application. You should not be duplicating your entire functional tests in your integration tests; you're just checking out a couple of scenarios, you know, like: a new user signs in, and maybe he uploads a photo, and then maybe he deletes his account, or maybe he befriends somebody, something like that. I tend to mock in my functional tests and put the meat in the unit tests. And honestly, I haven't written many integration tests; I've written, like, two. But with the way that Rails tests are laid out, integration tests seem to be more like a sanity check. Any other questions? Okay, well, I think I'm out of time anyway, so thank you very much. I hope this was helpful.