39 slides in approximately 45 minutes, so this is gonna be an adventure for all of us. If you have any questions, I won't have time at the end, so come find me afterward and I will gladly answer anything you might have. My name is Adam Cuppy. I am from Taiwan. No, I'm from the United States. I'm from San Diego, California, specifically. I founded a consultancy, and we do web and mobile applications. The RSpec design patterns in this talk came from dealing with a lot of different organizations: we would come into the code base, run the test suite, and find that the tests were really large, really cumbersome, and really problematic. This talk is specifically about that.

Now, before we get too far, you can find me on the interwebs. I am on GitHub at acuppy, and you can find me on Twitter at @adamcuppy. And right now I would like all of you who have a Twitter account to tweet at me and tell me how amazing I'm doing. Okay, now that that's out of the way, moving on to better and gooder things. The slides are not up there yet, but they will be on Speaker Deck directly following this, so you'll be able to get all the information. Of course it's on Confreaks, too. Big shout-out to Confreaks, by the way, for recording all of these. Can we hear it for Confreaks? You can get tons of videos there and see all of these ramblings later.

Okay, so RSpec. That's really what we're talking about. For those of you who don't know, RSpec was started in 2005 as an experiment. RSpec focuses specifically on behavior-driven development: describing the functions of the application in terms of its behavior. RSpec has what's known as a declarative DSL. In other words, as I define the function of something, I leave the implementation within the scope of the application itself; being declarative, I express what specifically should happen inside of the app.
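To make that concrete, here is a minimal sketch of that declarative style. This is a hypothetical example, not from the slides; the Greeter class exists only for illustration.

```ruby
require 'rspec/autorun'

# Hypothetical class under test; the implementation lives in the app.
class Greeter
  def greet(name)
    "Hello, #{name}!"
  end
end

RSpec.describe Greeter do
  describe '#greet' do
    # The spec declares what should happen, not how it happens.
    it 'greets the person by name' do
      expect(Greeter.new.greet('Adam')).to eq('Hello, Adam!')
    end
  end
end
```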
Now, here's the problem: oftentimes our test suite becomes a second-class citizen. What I mean by that is that when we build our applications, the application code we write is focused mostly on what the user is going to see or experience. The test code, however, generally comes after the fact for most organizations. And the test code often becomes really cumbersome, because we wanna make sure there's good test coverage, yet the test suite rarely translates directly into good or bad performance of the application itself. So again, our test suite becomes a second-class citizen. It's not the primary focus of our effort. And it becomes really tough, really fast.

Now, how many of you write tests for your code? Raise your hand. Keep your hand up if you're lying right now. And how many of you, either because of your organization or the code you write now, are required to have tests? Okay, that's interesting, because normally when I ask that question, most people raise their hand saying they are required to have tests for their code. The fact that many of you don't is very interesting to me.

The next thing, and this is really the biggest problem, is that the tests are really hard to understand. As an example, one of the companies we worked with has a user model that is 6,000 lines of code, and a user spec for that same model of more than 9,000 lines of code. 9,000 lines. When we analyzed how often they were repeating themselves, in certain instances they were testing the same method almost a dozen times, in effect the same way. But because the spec was so large, parsing it and determining what was already being tested was very, very tough. So that became a very big problem they had to solve. What do you do when your test suite is that large, that cumbersome, and that problematic? That's really the focus of this talk. It's called Taming Chaotic Specs, and more specifically, RSpec design patterns.

Here's what this is not: this is not a talk about what to test. There are some really great resources on what you should be testing inside your test suite. This is specifically about the patterns and practices, suggestions really, that you can follow when structuring the tests themselves. This is the design of the test suite.

Speaking of which, just to set expectations about what I mean by a design pattern. First and foremost, it communicates expectation. If I follow the pattern as written, then the second time I see it, I should understand what is being communicated. Or, similarly, if I recognize which pattern is there, I can probably guess what the rest of the code is gonna look like. Second, it encourages consistency. This is hugely valuable, especially with something like a 9,000-line spec: when there's consistency across the code base, I have a strong sense of what patterns are in use, what to expect, and where to look to find and parse things. And last, it reduces the mental load. Raise your hand if you have looked at a test suite and immediately wanted to flip a desk. Absolutely. We all know you've done it. We've all wanted to flip a desk. So reducing the mental load is a big, big thing; we wanna do that as much as possible.

So, our first pattern: the minimum valid object. There's a lot of stuff in here, so feel free to take notes; I'm gonna put all of this online, but I'm gonna walk you through it pretty darn fast. The MVO workflow looks like this. First, you start with a quote-unquote valid object. Inside Rails specifically there is the concept of a valid model, but a valid object, whether in Rails or in plain Ruby, is whatever you define inside the domain. You could say a valid object is something with a username; you could say a valid object has these attributes assigned. However you define it is perfectly fine, but the workflow starts with a valid object. The second step is that you make one specific change: one mutation of an attribute on that object. And last, you assert that the valid object is now invalid. So you say: I've got a valid object, and I know it's valid based on certain criteria.
I'm going to change one specific attribute of that object, or very few, and then I'm gonna assert that it's now invalid. That's the minimum valid object workflow.

Okay, so we'll start here. Let's say our Rails application has this user model. It has some class methods on it, but the biggest chunk we're gonna focus on is the validations: it validates the presence of a first name and its length, plus a middle name, a last name, and an email in a certain format, so on and so forth. We've all seen this many times before.

Now, here's our test suite. We open up our user spec and it looks something like this. We describe the user and run our first set of assertions, and it says, well, "it should be invalid." Very descriptive. We create a user with a really, really super long first name, because if we look back, the length has to be between four and 20 characters, and then we assert that valid is not true. And then down below, in the same example, we do it again with the short version: the first value is more than 20 characters, the second is only three. So we're testing both inside the same example.

Now we run our tests, and it passes. Fantastic, it passes. But is it actually accurate? No, it's totally broken. This is a false positive, and oftentimes when we look inside our tests, we notice pretty quickly that we have a lot of false positives in our code. What is the false positive here? Well, look at where we build our user with the really, really long first name. It is invalid, yes, because the first name is too long. But if we look back at our validations, while the first validation fails, all the other validations are failing too. That's the false positive: even though the object is in fact invalid, we haven't determined that the first name is what made it invalid, because everything else is invalid as well. So we actually have a broken test.

So what's really the problem here? The problem is that we're not communicating enough. We have this blanket, crummy description, "it should be invalid." What should be invalid? We don't explain or express any of that. One of the objectives of your test suite should be that when you read it, it helps you rationalize and reason about the implementation itself. That's the whole idea of being declarative: if I declare a certain set of criteria and a certain set of functions, what comes out the other end should meet that criteria. So of the two highlighted chunks, the first just says it should be invalid, without saying why; and the second is this incredibly confusing expectation that, again, just says it should be invalid. Okay, so we'll start with this.
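The transcript describes the slides rather than showing them, so here is a reconstruction of roughly what that model and spec look like. The attribute names and the email regex are my guesses.

```ruby
class User < ActiveRecord::Base
  validates :first_name,  presence: true, length: { in: 4..20 }
  validates :middle_name, presence: true
  validates :last_name,   presence: true
  validates :email,       presence: true,
                          format: { with: /\A[^@\s]+@[^@\s]+\z/ }
end

RSpec.describe User do
  # One vague description, two assertions in one example, and every
  # other validation fails too: a textbook false positive.
  it 'should be invalid' do
    user = User.new(first_name: 'ReallyReallySuperLongFirstName')
    expect(user.valid?).to_not eq(true)

    user = User.new(first_name: 'Abc')
    expect(user.valid?).to_not eq(true)
  end
end
```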
So let's refactor this test a little bit. We start at the top: first and foremost, we describe our user object. The next thing we do is take advantage of an RSpec method called subject. Raise your hand if you've heard of subject or used it in a test before. Okay. What subject does is define the focal point: the object we are going to describe. Now, if you do not include that line, RSpec will automatically run, effectively, described-class-dot-new on the user model; if you don't define the line, it generates it for you. But I like to define it, for a couple of reasons. First, it communicates what it is we're trying to test. And more specifically: always name the subject. Most developers don't realize you can do that, that you can pass a first argument and say, this is what this is. The goal of the test suite is to communicate, so let's communicate what we're testing, and specifically that we're describing a new instance of the class itself.

So we align these two. If you didn't know, described_class is a method built into RSpec. You can use it or not, but it's really helpful, because described_class references the user class being passed into describe. And then we also match the name down below. So we have three named instances of the same thing. This is very important: we're communicating what the domain model looks like, very quickly.

Now, when we look down into the test itself, we've got two very different assertions being tested in the same spot. We say "it should be invalid" and then run two expectations in the same block: one is the long name and one is the short name, but they're in the same example. So if one fails and the other passes, they both fail. We don't want that; we wanna clean this up quite a bit more.

So, going back to what we've been refactoring, we add in a couple of contexts. The first context is "with a first name that is over 20 characters": that's the validation where it's too long. And down below, another context: "with a first name that is under four characters." We're not saying yet whether it passes; we're just describing what's going on. Then we establish a let: a value that we then pass into our subject. I'll go over the value of using let over instance variables in a little bit, as a practice, but this is part of the value in doing it. You can do this with instance variables, but I find let to be a lot easier. So these are the mutations we're gonna make, and it ends up looking something like this:
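Here is roughly where the refactor stands at this point (again, a reconstruction):

```ruby
RSpec.describe User do
  # A named subject: described_class references User, and the name
  # `user` lets the examples below read naturally.
  subject(:user) { described_class.new(first_name: first_name) }

  context 'with a first name that is over 20 characters' do
    # The one mutation this context makes
    let(:first_name) { 'ReallyReallySuperLongFirstName' }

    it 'should be invalid' do
      expect(user.valid?).to_not eq(true) # still awkward; fixed below
    end
  end

  context 'with a first name that is under 4 characters' do
    let(:first_name) { 'Abc' }

    it 'should be invalid' do
      expect(user.valid?).to_not eq(true)
    end
  end
end
```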
On the subject, which takes a first name, we define first_name with a let: in the context of over 20 characters we define what the first name looks like, and in the context with under four characters we do the same. We're communicating what the expectation looks like. We've now basically refactored those first three lines, which only leaves the two expectation lines that we wanna make a little more clear.

There are a bunch of problems with those, and this is why reasoning about them is really tough. We end up with: expect user.valid to not equal true. In other words, we're asking, is true not true? Make sense? Raise your hand if that makes sense. Don't raise your hand. Okay. So this doesn't work either.

You might not know this, but RSpec has support for predicate magic methods. If you have a method on the user object that ends in a question mark, like user.valid?, you can write an expectation with be_ followed by that word, and RSpec assumes the word aligns to a predicate method on the model. So be_valid calls user.valid?. And it doesn't matter what the method is: if the user is riding a monkey, you can prepend be_ to it and it will run that predicate method. What that means is that we can refactor slightly, to: expect user to not be valid. Getting a little bit better. But we still haven't really solved our problem; we're still saying true is not true.

Well, we're in luck, because Rails specifically has support for an inverted version of valid?, which is invalid?. That may not be the case in the library you're using; you may wanna write a custom matcher for it, or if there is such a method, great. We're gonna use it here, because as a result the expectation becomes really clear, really fast. And when RSpec runs and exports its formatted output at the end, it reads all of this and formats it properly, so if the test fails it's really easy to understand. We've gone from "true is not true" to "is the user invalid?" Make sense? Fantastic.
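Sketched out, the progression of that one expectation looks like this (the monkey predicate is the talk's joke example, not a real method):

```ruby
# Step 0: "is true not true?" Hard to reason about.
expect(user.valid?).to_not eq(true)

# Step 1: predicate magic. be_valid calls user.valid? under the hood.
# The same works for any predicate: be_riding_a_monkey would call
# user.riding_a_monkey?
expect(user).to_not be_valid

# Step 2: Rails models also respond to invalid?, so be_invalid gives
# us a positive, obvious expectation.
expect(user).to be_invalid
```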
So, moving along. Now that we've injected "expect user to be invalid," have we eliminated the false positive? And the answer is no, we haven't yet. This is where the minimum valid object pattern actually comes into play, because we haven't implemented it; so far we've simply been cleaning up our tests. So we go back and inject a little code at the very top of the test, and this is the pattern and how you use it. First and foremost, at the very top, you establish what a valid object looks like. This is essential, because as the test gets super long, and that's okay, we wanna know at the very top what a valid object is: what are the attributes I can mutate on that object? Then, as I build out the contexts down below, I've got some structure. If I need to add another test at the bottom, that's fine: I follow the same pattern, and I have some sense of expectation that it's gonna work out.

So we set up a default first name, and we'll just use my own name, because it's an amazing name at that. And then we assert its validity; we test that from the very beginning. If we run this test and the very first expectation to fail is that first one, we know that everything below it is less valuable; we need to fix that first state. We need to know the object is valid as it sits at that moment, so that when we make the mutations down below and those examples pass, we know the first name really is the thing that changed the validation. Make sense? Fantastic. So did we fix the false positive? Yes, at this point we did.

Now, there's a little bit of magic available here. Use it or don't; some people love it, some people hate it, and that's okay. RSpec has a helper to make this a little more simple: if you're effectively testing the subject up above, over and over again, you can use a quick method called is_expected. Sometimes I use it, sometimes I don't, it's totally up to you, but sometimes it just reads a lot better: it is expected to be valid.

So, as we build this whole thing out, the first thing we build is the subject. That's number one. We set up our user object and pass in all the attributes that matter; we set the configuration values. In other words, these are the attributes we can change, and more specifically, they read as documentation: if I built this test out and you come into my test suite and read these, you know these are valid values. You can assert that right away. Then we assert that it's in a valid state. And when we change the first name in a context, it maps up to the subject line, so that's the mutation, and we're good. This is a fully complete spec at this point. And pretty quickly, you can see how this truncates a spec really fast. We could take a 9,000-line spec down to a lot less than 9,000 lines, I know that much, pretty fast. So this is pattern number one: minimum valid object.
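Put together, the finished minimum valid object spec looks something like this. It's a reconstruction: the default values and the email are invented for the example.

```ruby
RSpec.describe User do
  # 1. Build the subject with every attribute that matters. Reading
  #    these tells you what a *valid* User looks like.
  subject(:user) do
    described_class.new(
      first_name:  first_name,
      middle_name: 'James',
      last_name:   'Johnson',
      email:       'adam@example.com'
    )
  end

  # Default, known-good value
  let(:first_name) { 'Adam' }

  # 2. Assert validity up front; if this fails, everything below is noise.
  it { is_expected.to be_valid }

  # 3. One mutation per context, then assert invalidity.
  context 'with a first name that is over 20 characters' do
    let(:first_name) { 'ReallyReallySuperLongFirstName' }
    it { is_expected.to be_invalid }
  end

  context 'with a first name that is under 4 characters' do
    let(:first_name) { 'Abc' }
    it { is_expected.to be_invalid }
  end
end
```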
Pattern number two is permutation tables. Here's what the workflow looks like. First, you define a set of data. Then you define the output for each set. And then you assert that the method creates the output from the data. Input, output: I put this input into the method, this output comes out, and I should see that.

For our example, let's look back at our user model. We've got a method that effectively concatenates the various name segments: first name, middle name, and last name. If any of those are nil, it strips them out, and then it joins the rest together with a space in between. That's the implementation of our method.

Now, if we were to write tests for this, it may end up looking something like this. First and foremost, we describe the full_name method, we set the subject to full_name, and we actually call the method. Most of the time the subject is reserved for an object to be returned; you don't have to do that, and in fact this can be really, really helpful. But it's essential that you align these three: what you say you're describing, the method being described, and the name of the subject. Make sure all three are the same. If you don't, it breaks apart fast; the communication goes down.

As we build out the spec, we just start adding in these various permutations. In the first one we say: if the first name is nil, then it's expected to equal the last name. Similarly, if the last name is nil, it's expected to equal the first name. And it keeps going down the list. Then the question becomes: is anything missing? Raise your hand if you've ever seen specs look like this, where one or two values change and you write all of these assertions that test the same thing. Yeah. This is why we saw the 9,000-line spec: that user model, which shouldn't have held it in the first place, contained a big chunk of authentication logic, and the tests said, well, if this value is there and that value is there, it should equal this; if the first value is there and the second isn't, it should equal that. It ran into this big issue where there were all these permutations, all these variations of the same thing, and they were trying to test them all at once. As a result, they were testing the same methods on the same object multiple times, because they couldn't keep track. Permutation tables are a pattern you can use to prevent all of that.

But it comes down to that core question: am I missing anything? Let's look at it a little differently. The answer is yes, we're missing a lot of variations, but we can fix this really fast. In the first example we've got a couple of problems: "if the first name is nil, it's expected to equal the last name." We don't know why; it just says that it is. So what we do first is build this into a hash. This is our data set; remember, the first step is to create a set of data. We have a very basic hash where the key is the input set, say nil for the first name, James for the middle name, Johnson for the last name, and the value is the expected full name. The set, and the expected output. If we take all the examples from the last model and plop them into this table, we can pretty easily see and follow what's missing. And yes, we're missing about three different options, and it's pretty easy to determine what those are. Fixes the test pretty darn quick.
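Roughly, the implementation and the data table. This is reconstructed from the description; the talk's slide used James and Johnson, and the remaining rows are filled in to cover every combination.

```ruby
class User < ActiveRecord::Base
  # Concatenate the name segments, dropping any that are nil
  def full_name
    [first_name, middle_name, last_name].compact.join(' ')
  end
end

# The data set: keys are [first, middle, last] inputs,
# values are the expected outputs.
NAME_SETS = {
  ['Adam', 'James', 'Johnson'] => 'Adam James Johnson',
  [nil,    'James', 'Johnson'] => 'James Johnson',
  ['Adam', nil,     'Johnson'] => 'Adam Johnson',
  ['Adam', 'James', nil      ] => 'Adam James',
  ['Adam', nil,     nil      ] => 'Adam',
  [nil,    'James', nil      ] => 'James',
  [nil,    nil,     'Johnson'] => 'Johnson',
}
```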
So now we can scan through this and say, okay, we've got all the variations where the first name exists and where it doesn't, and so on. It's really easy to track: this is the input, that's the output, simple enough.

Next, we basically iterate through each of these sets. We get a name set, which is our data, and an output, which is the value in the hash. And then, and this part is optional, we move the expectation into a shared example. Raise your hand if you've used a shared example before. Okay. A shared example is very similar to a mixin; you can in fact use a mixin in place of this, you don't have to use a shared example. But basically, it allows you to define common sets of expectations without duplicating yourself. Go back to the earlier example: we were running the same type of expectation over and over and over again. It's all very common, all very much the same. A shared example is a way to dry all of that up.

So down in the describe block we say it_behaves_like, which is a method inside RSpec: it behaves like "a full name." The first thing we pass is the name set, and the second argument is the output; they come into the shared example in the form of block arguments, the first, middle, and last names, and then the output. Inside the shared example, the subject is full_name, we run our common setup, we set those three attributes as the first, middle, and last name, and we assert that it matches the output. That's our shared example, and it took the entirety of that spec and truncated it down to this, as the sketch below shows.

Now, what happens if we wanna add a fourth name segment? We've got three different name types; what if we want a fourth? Okay, fine: you add to the table and you're done. You don't have to do anything else. If you're building out a test suite, this is logic you no longer have to change. You're simply saying: I've got data, and I have an expected output; I just wanna line the two up. Solved. Boom. Permutation tables, number two.
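A sketch of how the shared example and the iteration fit together, reusing the NAME_SETS table from above:

```ruby
# The shared example holds the one common expectation.
RSpec.shared_examples 'a full name' do |name_set, output|
  subject(:full_name) { user.full_name }

  let(:user) do
    first, middle, last = name_set
    User.new(first_name: first, middle_name: middle, last_name: last)
  end

  it { is_expected.to eq(output) }
end

RSpec.describe User do
  describe '#full_name' do
    # One line per permutation; adding a row to the table is the
    # only change ever needed.
    NAME_SETS.each do |name_set, output|
      it_behaves_like 'a full name', name_set, output
    end
  end
end
```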
And number three: golden master. Golden master testing solves these problems. (Drink of water, for the win.) First, it can be a great way to backfill untested legacy code. Second, it can be really valuable when uncertain expectations require visual confirmation; in other words, when the output is complicated and hard to parse. Where I've often seen this is JSON payloads: we have to parse the JSON to determine whether this key exists with that value, nested three levels deep, and it's just very complicated to verify visually. A golden master test can address that. Third, when the code complexity significantly exceeds the current domain knowledge: if the code is so complicated that it's really hard to reason about, really hard to express in spec form, then a golden master test is a simple way to expedite things, because at the end of the day it may just be a matter of, when I put these values in, do I get this type of output? If the JSON payload is really complicated, that's okay; I just need to know the outputs are the same.

How many of you have opened up a test suite and seen something like this: "pending" all the way down? Anybody? Yeah, absolutely. These are the worst. You get through and it just says pending across the board, and you're like, thank you for the help. Die. Okay.

So here's how the golden master testing workflow works. First, it takes a snapshot of the object and writes it to an actual file in the system. Then you verify the snapshot manually: you literally open the file, look through it, and say yes, that's exactly what I should be expecting. Visual testing. And from that point forward, you compare future versions to the verified master. So it looks like: here's my object, I snapshot it, it prints to a file. I open the file and go, yes, that output is exactly what I'm looking for; save that, record that. Then every time you run the test in the future, the output is compared to that file. If they're the same, it passes. If not, it fails and asks you to verify again, in case the new output is the better one, whatever it happens to be.

Instead of going through how to implement this ourselves, I'm gonna tell you about Katrina Owen's amazing gem called Approvals. You can find it at kytrinyx/approvals, and it implements golden master testing across the board; you don't have to reinvent the wheel. The code samples here are a combination of RSpec and the Approvals gem, and they come directly from the documentation. (I love the description in the example: it "works." Very helpful, apparently.) The first thing you'll notice is a method called verify that takes a block and works inside the expectation. You can pass in a format, saying this is the type of data you're gonna get out of that file, and there's an option to pass in a string of information as well; you don't have to, but you can. This is one of many examples of how you could do it. Then you cd into the app itself, and there's a CLI for Approvals that you go through to manually verify: the process where you visually inspect the file, say yes, that's exactly what I should expect, it's verified, boom, and move on. You run through this process and manually verify the snapshots.
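Based on the gem's README at the time, usage looks roughly like this; treat the require path, option names, and payload as approximations rather than the talk's exact slide.

```ruby
require 'approvals/rspec' # pulls the verify DSL into RSpec

RSpec.describe 'the user payload' do
  it 'works' do
    # The block's return value is written to a snapshot file on the
    # first run; later runs compare against the approved file.
    verify(format: :json) do
      '{ "first_name": "Adam", "last_name": "Cuppy" }'
    end
  end
end
```

The first run fails and leaves a "received" file on disk; you inspect it, approve it (the gem ships a small CLI you run from the app directory to review snapshots), and from then on the spec compares against the approved master.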
Now, in my opinion, this is not a good practice for large applications where you have a lot of snapshots to verify. It's a great practice where you have a couple of isolated examples, a couple of specific things that need this type of testing. If you're doing this type of testing all the way through your test suite, that's probably a code smell: something's not right in the way your tests and your code are implemented. This should be done in a very isolated capacity.

Next, I'm gonna go over some best practices. Now, I know many of you may not like the term "best practices," so we'll call them better practices. But sometimes they're not really better, just good-but-not-required practices, which isn't really it either, so we'll just call them other ways of doing things that might be helpful ideas.

The first one is let: use let, not instance variables. This is another way of doing things, not a requirement, but I'll show you why. Look at this example: we have an instance variable that sets a value called full name, and an expectation that the full name equals "Adam Cuppy." But when it runs, we get this: expected "Adam Cuppy," got nil. So immediately you flip your desk, you dive through your code, you look at your full_name method: none of this makes any sense, I've got first name, middle name, last name, concatenation, what the heck's going on? Maybe it's the values being set? But you fail to recognize that the variable is actually spelled wrong. This is one of the biggest problems, and one of the biggest reasons I highly recommend let over instance variables. A test suite that runs quickly is of great value; but if it takes more time to parse the test suite, and you spend more time debugging it, it doesn't matter how fast it could run another way. It's a piece of shit. So use the tools in front of you. I've heard the arguments against this, I've even made them, and I've found consistently that instance variables are not a long-term solution. Because if you write it as a let, it actually creates a method, so when you've spelled it wrong you get a NameError instead. That's a heck of a lot easier to debug. In other words, you spend less time in your test suite and more time writing better code. That's the value of let. There are many others, but this is the one that stands out most in my mind.
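Side by side, schematically (both examples intentionally fail; the typo is the point):

```ruby
RSpec.describe 'instance variables vs. let' do
  context 'with an instance variable' do
    before { @fulll_name = 'Adam Cuppy' } # note the typo

    it 'fails with a mystifying nil' do
      # => expected "Adam Cuppy", got nil
      # No hint about the typo: unset instance variables quietly
      # return nil in Ruby.
      expect(@full_name).to eq('Adam Cuppy')
    end
  end

  context 'with let' do
    let(:fulll_name) { 'Adam Cuppy' } # same typo

    it 'fails loudly at the source' do
      # => NameError: undefined local variable or method `full_name'
      # let defines a method, so the misspelling surfaces immediately.
      expect(full_name).to eq('Adam Cuppy')
    end
  end
end
```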
The next is being descriptive in your naming. How many times have you seen something like this: user1 is expected to be valid, user2 is expected to be invalid. The first one passes, the second one fails. And you ask yourself, what is actually wrong with them? You've gotta dive back into the test suite and figure out, okay, when I factory up user1 it looks like this, and user2 looks like that. You have to reconstruct all that reasoning. But if we describe it a little differently, you at least have a starting place: if I say "user is valid," and then "user with a duplicate email is invalid" but it's not, I at least know where to start. That's the value of descriptive naming. Now, if you feel that shorter names run much faster, cool, I appreciate that, but you might be wrong; sometimes this is a very valuable trade to make. So our test suite ends up with something like this: again, we're utilizing a subject with the user, we've got an email we pass in, and we expect it to be valid. I know this is not a fully fleshed-out model; bear with me, this is for the sake of example and demonstration, not accuracy. Then we define a new user with a duplicate email, reusing the one before it, and when we run it and expect that one to be invalid, it's really easy to reason about and understand. Yeah? Fantastic.

The next thing, very briefly, is extracting common expectations. RSpec has great support for this, and Ruby itself has a lot of support for it, so you can do all of the same things inside Test::Unit; all the same patterns apply. You don't have to be using RSpec. RSpec was just a buzzword that I knew would draw people to a talk. Just kidding.

The first is custom matchers. You might not know this, but it's actually really easy to design a custom matcher: RSpec has a class you can tap into to define one. I have heard many times that this is gonna be deprecated, but regardless, it still illustrates the point: if you would like a custom type of matcher, and you're writing the same logic to test the same thing over and over and over again, then writing a custom matcher may make things a lot easier. One of the most horrible experiences is going through a test suite where the same logic is repeated, so it's not DRY: you change the logic that tests one thing but not another, and those tests begin to fail, or even worse, they continue to pass, and you don't know where or how to look. The example I'm showing is directly out of the documentation, by the way, and here's a link to it.

The next is to think about factories over fixtures. You may have already heard about this; it's used all over the place. Factory Girl is a great gem built by Thoughtbot, and there are many others like it, so it doesn't have to be Factory Girl; the conventions are the same. Throughout today's examples you've seen user used directly (in the earlier examples I was using described_class, but it's one and the same); what you can do instead is utilize Factory Girl to build the object. The value of this is a couple of things. The first is that you can lint these objects. The minimum valid object pattern actually came from utilizing Factory Girl, and specifically from realizing that Factory Girl is built around the idea that when you factory up an object like a user, it should represent a valid object, and only a valid object. It should not be more than that. When you want more, you utilize Factory Girl's notion of a trait: additional traits of that thing. As an example, let's say you have a user and an admin user. A traditional user is just a valid, everyday user; an admin user has the admin attribute set to true. Let's use that as an example.
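A sketch of that factory with an :admin trait, in the FactoryGirl syntax of the era; the attribute values are carried over from the earlier examples, and the admin column is assumed.

```ruby
FactoryGirl.define do
  factory :user do
    first_name  'Adam'
    middle_name 'James'
    last_name   'Johnson'
    email       'adam@example.com'

    # A trait layers the extra attribute on top of the valid baseline
    trait :admin do
      admin true
    end
  end
end

user  = FactoryGirl.build(:user)          # a plain, valid user
admin = FactoryGirl.build(:user, :admin)  # a valid user who is an admin
```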
Well, you can have both, and defining that in Factory Girl is really easy. And when you run Factory Girl's lint, it goes through and runs all of your factories and makes sure they all work. If they all work and factory up a valid object, you can actually expedite the entire process: in the MVO example I was giving before, you don't even need the baseline validity assertion anymore, because once it builds the subject, lint will tell you right off the bat whether or not it's valid. Pretty darn helpful. And again, you can find it at the Factory Girl repo.

All right, closing up, because I'm running out of time a little bit. Here's a bunch of resources to read; again, I'm gonna include all of this. One of them is betterspecs.org. I don't agree with all of the principles on betterspecs.org, but many of them are a great starting point. If you're not familiar with some of the practices for structuring and utilizing RSpec and the conventions that go with it, this is a really good place to start, and if you follow these you're gonna end up in a good spot. The next is a set of blog posts written by Randy Coulman: a category of, I think, fourteen posts called Getting Testy. It goes into not just how to structure tests but, more importantly, the many things you can write inside of your tests. The added bonus is that he's actually a member of our team, which is pretty cool; but even if he weren't, he's incredible at describing and expressing how to structure these things. It's a blog series you should absolutely go through: incredibly valuable and incredibly well written. And of course, I'm proud to have him as a team member. The last is, if you have not read it, Practical Object-Oriented Design in Ruby. Phenomenal book. Raise your hand if you have not read this book. Okay, you have homework now. This is absolutely essential reading for anybody who wants to write really quality code. This book is incredible, and well worth it, hands down.

All right, so there you have it. And of course, we're online at codingzeal.com. We are a consultancy, so if anybody is looking for extra help, by all means let me know. And I am a total resource: you can find me on Twitter, on Facebook, just about anywhere, and I'm more than happy to answer questions. I'll be here the rest of the week, so I am available to you. Tweet at me, do whatever you like. All right, thank you very much. That's the end of the talk for the day; I'm gonna give myself a hand.