Okay, I'm going to start, because Sam is now possibly waving at me to start; either that or giving me the finger, I can't tell because of the lights. So hi, how's the conference going for you all so far? Good? Yeah? All right, cool. I can fix that right now. Sometimes this happens to me at conferences. Thank you; you made the exact same sound the audience actually made when this happened. It actually is still this laptop. It survived. It's going to be put out to pasture in about two weeks, so this is its last conference hurrah, and I'm very excited. So this is the talk about test doubles. It is not the talk from the head of Test Double; that was before lunch. That was Justin. If you came here expecting to see Justin, you were about two hours late, and I'm sorry. Justin is not here; he couldn't stick around. So I am allowed to mock him, but in keeping with the theme of the talk, I'm going to mock him in RSpec syntax. I'm just going to say: you can all expect Justin to receive this talk and return polite dissent. Justin is actually tremendously polite to disagree with, so if you ever find yourself in that situation, I recommend disagreeing with Justin. So, unlike Justin, who made a joke about writing a book, I actually wrote a book. I've written a couple. Particularly this one, which you should buy; it's great. It's not the point of the talk. But I wrote this book called Take My Money: Accepting Payments on the Web, which you can get at pragprog.com. Not that that's the point here or anything. The thing about writing a book like this is that you have to write example code. And code for a book-length sample project is a weird version of software engineering where some of your constraints go away, like the constraint that this actually has to be valid in a production environment.
And yet some of your other constraints are super strong: you want the code to be extra clear and extra idiomatic, because it's a teaching exercise, an explaining exercise. So you have an extra burden toward that, or at least I feel that way. The relevant point for this talk is that, pursuant to writing this book and this example, I wrote some tests. I am a Ruby engineer. I am known for writing tests. I felt like I needed to write tests to go along with my sample application. So I did. And I thought, I'm going to do this the super-purist way. I have these middleware, workflow kinds of objects; they are not the Active Record objects, but they deal with the Active Record objects. I'm going to write tests that use a lot of test doubles. And I started off writing a bunch of tests that kind of look like this; we'll talk about the details of what this actually does in about ten minutes. You might be able to get it just from the first few lines of code, where you have a couple of ticket objects and this discount object. Rather than being Active Record objects, they are RSpec spies; they're test doubles. And this worked fine, but eventually, as the sample application progressed, I wound up kind of unwriting them, in the sense that if you look at later examples in the book (and I don't really make a point of this in the book itself), those tests eventually wind up looking more like this, where I'm actually creating Active Record objects. These are somewhat simplified examples, but I do wind up creating the actual Active Record objects. They're not doubles; they're actual Active Record objects, and I'm doing what would be considered a more standard test. This talk, to some extent, is about why I do that. Why I start off trying to test double all the things.
And why I often, especially in a Rails context, wind up pulling back and replacing them all with real objects. Because I do this a lot. I have been somewhat ambivalent about how to use test doubles for years. Sometimes people call my writing about this stuff non-dogmatic, which is kind, or pragmatic, which I think of as a polite way of saying ambivalent or unsure. But I can actually prove this, because like I said, I write, and I actually have a paper trail here. This was published in 2011, written in probably 2010. And it says: as much as I love using mocks and stubs to cover hard-to-reach objects and states, my own history with very strict behavior-based mock test structures hasn't been great. My experience was that writing all the mocks around a given object tended to be a drag on the test process. But I'm open to the possibility that the method works better for others or that I'm not doing it right. Three years later (I did not intend this, when I proposed this talk, to be a tour through my entire library, but it is, and you're all just a captive audience), three years later, in 2014, I rewrote the entire book basically top to bottom. And I said basically the same thing: my opinion about the best way to use mock objects changes every few months. I'll try some, they'll work well, I'll start using more mocks, they'll start getting in the way, I'll back off, and then I'll think, let's try some mocks. This cycle has been going on for years, and I have no reason to think it's going to change anytime soon. I hope, I suspect, that this describes at least some of you in the audience. Don't clap or anything; you can nod, or at least look up from your laptops or something. So why does this happen to me? I'll watch a Justin Searls talk, or I will read a book like Growing Object-Oriented Software, Guided by Tests.
And I will think: mock objects, test doubles, really are the way to go for testing. And then I will try them in practice, and I will roll back. So this is what I'm writing about, and more to the point: what can I do about it? Is this a useful process? Should I pick a side and stay there? What should I do? So this is my talk, Test Doubles Are Not to Be Mocked. My name is Noel Rappin, I work for a consulting company in Chicago called Table XI, and you can find me on Twitter at @noelrap. There are some other URLs in the footer here that might be of interest to you. To be clear about one thing: there's one usage of test doubles that I consider not controversial, and it is not what I'm talking about. A really common use case is this (again, not a real test), where I'm using a test double in the first line to wrap a Stripe charge. Stripe is a payment gateway, so this is wrapping an interaction with a third-party API that would normally be called over the network. I'm using this test double to avoid having to make a complicated network call, or to avoid having to put the system into a state that might be hard to actually replicate. So I'm using it as a pure stub, to replace something or to help me get into a certain state. That is not the controversial use I'm talking about. The non-controversial piece here, replacing a heavyweight object or specifying failure states, is great. I do that all the time, I expect to continue to do that, and that's not really the focus here. But I wanted to back up, before I show a couple of examples, and introduce a framework for this discussion. When we talk about tests being useful or not useful, what makes me want to pull these tests back? What makes a test good? I have a kind of tortured acronym for this: SWIFT, straightforward, well-defined, independent, fast, and truthful.
Straightforward means that you can tell what the test does from looking at it; it's easy to read. Well-defined means that it is idempotent, basically: it will continue to return the same result no matter how often it's run, and it's not dependent on state. Independent means it doesn't depend on other tests or on other state of the code. Fast, I think, is self-evident. And truthful means that it is an accurate representation of the code: if the code is broken, the test fails; if the code works, the test passes. So that's how I'm going to evaluate the test samples I show. And in the long term, the priorities for tests shift. One of the things about tests is that they are an aid immediately in development, but then they stay in your test suite forever. So something like fast might be a much more important priority later in the life of the test suite than it is when you're actually developing the code. And ultimately, a good test leads to well-designed code, which is a whole other rabbit hole about what well-designed code is, which I'm not going to get into very much, other than to say I would consider well-designed code to be clear and easy to change. Tests enable us to get to well-designed code in three ways, more or less. Tests let you do domain discovery; this is the test-driven development theory. You write your test first, and it lets you learn something about your domain as you write it, so that you understand it. The act of writing the test causes you to understand the code that you're about to write. Also, obviously, tests validate behavior. And then they act as a safety net when you're trying to change your code, so you know you can change it without breaking anything. So, I want to talk about two tests that more or less do the same thing. The code under test here calculates the total price of multiple tickets given a discount; this is the example code.
It's for a small theater; they're selling tickets, and they sometimes give out discount codes. There's a calculator that needs to put that all together into a number. So one way to write this test is to write a, I hate to say traditional, but a standard state-based test. I create two tickets, I create a discount code, I pass all of those off to my price calculator, and I tell the calculator to go. At the end of it, I expect the calculator to come out with, in this case, $30: two $20 tickets with a 25% discount. I think most of us would not blink if we came upon this spec in a code review; this is a fairly standard way to write it. Another way to write this spec is to say that the price calculator should not depend on actually having tickets or actually having discount codes; we can use spies. This is using RSpec syntax, RSpec's test-double syntax. Instead of creating two tickets and a discount, I'm creating three spy objects. I am passing them to a real price calculator, as before, and I'm doing the calculation. And I am actually making the same expectation on the result. But more importantly, I have an expectation in the last two lines that, in the course of its action, the calculator calls this price_check method on the tickets. In addition to that difference in how the tests work, the other difference I want to call out between the two tests is that in the first test I'm setting a base attribute, the price of the ticket, and in the second test I'm setting an expectation on a derived attribute, the price_check. The price calculator doesn't actually care what the underlying cost of the ticket is. All it cares about is what comes out when I call this price_check method. And then there's this final line here, the price_cents line.
I actually could make that another behavior expectation too; it just got really convoluted, so in the interest of keeping this a little clearer, I didn't. So this test is making the same check of the behavior of the price calculator, and it is also checking that the calculator does certain things along the way. The state test is using real objects, and it is making an expectation on the state of the world at the end of the test. The spy test is using test doubles, and it is making expectations on the behavior of the calculator along the way. That's a difference in philosophy, but how does it actually affect the way these tests live in our test suite over time, and the way they interact with the rest of our code? So I want to go through those five SWIFT checks really quickly. Straightforward: which test is more straightforward? The state test, I think. Most people will have an easier time understanding the state test than the test double test, unless you've done a lot of testing with test doubles; that's generally been my experience. Well-defined: they're both equally well-defined; they will both return the same result run after run. The spy test is more independent, because the state test depends on the behavior of the ticket object and the spy test does not; we'll talk about that more in a second. The spy test is also faster. The way I have this written, the state test is actually creating database objects, which makes the spy test a lot faster; even if it weren't doing that, the spy test would probably still be a little faster. Truthful is interesting, though. One of the things I like to do when I think about what tests I'm writing next, or what tests I should be writing, is to think about the circumstances under which the test fails. When I write a test, what will cause this test to fail?
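For concreteness, the two styles under discussion can be sketched in plain Ruby. The names (PriceCalculator, price_check, price_cents) follow the talk, but the implementation, the hand-rolled spy, and the dollar amounts are illustrative stand-ins for the book's actual RSpec examples:

```ruby
# Code under test: totals the tickets' derived price_check values,
# then subtracts the discount. Implementation is an illustrative guess.
class PriceCalculator
  def initialize(tickets, discount)
    @tickets = tickets
    @discount = discount
  end

  def total_cents
    @tickets.map(&:price_check).sum - @discount.amount_cents
  end
end

# Real collaborators for the state-style test.
Ticket = Struct.new(:price_cents) do
  def price_check
    price_cents # a real ticket might apply pricing rules here
  end
end
Discount = Struct.new(:amount_cents)

# State style: real objects, assert on the resulting state.
state_total = PriceCalculator.new(
  [Ticket.new(2000), Ticket.new(2000)], Discount.new(1000)
).total_cents

# Spy style: fake collaborators that record how they were called.
class SpyTicket
  attr_reader :price_check_calls

  def initialize(result)
    @result = result
    @price_check_calls = 0
  end

  def price_check
    @price_check_calls += 1
    @result
  end
end

spies = [SpyTicket.new(2000), SpyTicket.new(2000)]
spy_total = PriceCalculator.new(spies, Discount.new(1000)).total_cents
# Behavior expectation: each spy's price_check was called exactly once.
```

In the RSpec version, SpyTicket would be a `spy` (or an `instance_double`) and the call count would be checked with `expect(ticket).to have_received(:price_check)`.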
And these two tests will fail under different situations, possibly. If the actual price calculator has a bug, if there's a bug after it gets the information from the tickets and before it converts that into a final price, both of these tests will fail. But if the ticket has a bug, if there's a flaw in that price_check method that the ticket is using, the state test will fail, but the spy test will pass, because the spy test doesn't actually touch that part. Okay, is that clear so far? And even if the ticket class doesn't exist at all, if that code has not yet been written, the state test will fail, but the spy test will still pass. And this is where a lot of people jump off the mock object bandwagon. A lot of people say: this is a terrible situation, the code can be wrong and the test will still pass, and this is a huge problem. In my experience, a lot of people give up on test double testing at that point. But what I want to argue here is that saying the code can be wrong and the double test will still pass is actually kind of a matter of perspective. Because the double test implies the existence of other tests in a way that the state test necessarily does not. If I'm using test doubles to test this calculator object but not this ticket object, I'm implying that somewhere else I'm going to write tests that do touch that ticket object and do test that behavior. In other words, I'm asserting that the ticket is a different unit that's going to have different unit tests. And so another way of looking at the truthfulness of these two tests is that if the discount logic were to get more complicated, if they were introducing double coupon Wednesdays or something like that, you'd get a situation where the ticket API would change, the double test would still pass, and the state test would fail.
And this is weird from a truthfulness standpoint, because in the situation I'm describing, the only changes were to the ticket. Neither the calculator test nor the code that it is nominally testing will have changed, right? But the test will now fail. And this becomes a problem in the long run: the test fails even though the code it nominally tests is right; the breakage is not in the thing being tested, but the test still fails. This can be a real problem in larger test suites. If the tests are not independent, you can write some code and get a bunch of test failures on the other side of the planet, and it is very hard to track them down, because the failures are not in the things nominally being tested. So this leads to a design goal that is one of the things that pushes me toward using more test doubles: the idea that a failure state causes exactly one test to fail. This is impossible in practice, in part because you probably have integration tests and unit tests that cover the same ground, but it's a good way to think about isolating tests. If this test fails, I know to look here for the remedy, because this test is isolated from the rest of the code base. Another way of thinking about that is that the test double version of the test tests both the behavior of the code and also the design of the code, in that I'm making strong claims about the design of the code that need to be true for this test to pass. One of the complaints about heavy use of test doubles is that it makes it harder to refactor, because you're changing the design and that's breaking tests. I would argue, at least potentially, that's, I was going to say by design, but that's a confusing, overloaded term, that's actually part of what you're getting when you use mock objects. You're making claims about the structure of the code under test.
If you make good claims about the structure of the code, then your refactoring won't necessarily have problems, if you limit the things you stub out to public API methods or something like that. But if you make poor choices about what you're asserting about the design, then you're going to have pain, in much the same way that you have pain if you make poor choices about what parts of the state of the code you assert on. And the double version of the test encourages the creation of additional tests. It encourages more isolated unit tests, because of what we already said: the tests can pass even though the code is incomplete, so it encourages you to write tests for each individual unit involved. If you start with just the state test I had before, you might start putting a lot of logic in the ticket class and not writing a new test for it, because it's nominally already covered by the existing state test. And that can become a problem. It leads to slower tests, and it leads to a lot of not-very-well-unit-tested logic, because those units happen to get called by another piece of tested code. So creating additional tests is a good thing, because it encourages good design practices. Part of what I'm saying here is that, much the way Betsy talked about boring code leading to better tests, the structure of the way you test your code, and the things that you are looking to make easy to test, have really strong implications for the way your code is designed. If you write your tests in such a way that you encourage the creation of smaller units, then you will get smaller units. If you write your tests in such a way that all you have are integration tests, then you don't have a design pressure pushing you to create smaller units, for good or for ill. So when is writing a lot of unit tests, writing extra unit tests in this case, a bad thing? Like I said, the design might change.
And I think a lot of people find it hard to drive the next failing test. They get to that situation, they write that test with the test doubles, it passes, but the code isn't complete. A lot of the time, in my experience, people have trouble getting to the next step, understanding what the next step is. The things that are stubbed out by that double test need to be themselves written and tested. One of the advantages of what gets called outside-in testing, where you start with an end-to-end integration test and then fill it in piece by piece, is that you then have that failing integration test to help drive the next piece of unit testing you need to write. So one of the things about writing a lot of tests with test doubles is that it implies a lot of test and code isolation. And I think what this comes down to, actually, is that it implies more isolation than many people are comfortable committing to up front. In my experience with teams and object-oriented design, a lot of teams are reluctant at the beginning of a project to commit to really elaborate object-oriented structures. Even if those structures really do seem like they're going to be necessary later on, there's a sense that adding all of these individual units, or adding superclasses and subclasses, or adding indirections as classes, really feels like overkill in a small project. A lot of people have an intuitive sense that that's too much at the beginning of a project, even though there's also a sense that getting those things in at the beginning will be beneficial in the long term. And in that sense, I think that test double testing is like a design canary.
If you are actually listing all of the collaborators of a given method, if you actually have to enumerate them in the test because you need to stub them one by one, then you are going to be very, very sensitive to the number of collaborators you have. Having extra collaborators, having extra dependencies, is going to cause noticeable pain in writing the test: writing the test is going to take longer, there's going to be more setup, and it's going to be clear that there's a lot of setup. One way people react to that is by stopping using test doubles. Another way people react to that is by breaking the code into smaller design units. And I think both of those can be the right thing in certain situations. A lot of setup can mean a poorly factored design, or it can mean that you really do have a lot of complexity, such that the things you might do to isolate it would actually wind up making the code less clear. When DHH had the whole "TDD is dead" thing, and had those conversations with Kent Beck and Martin Fowler, one of the best points I thought David made against TDD, even though I tended not to like most of that argument, was the possibility that strict test-driven development would lead you to code that had what he called low cohesion. Meaning the code was so isolated, and responsibility was spread so far across so many small pieces of code, that it would become very hard to understand; there was no center. And I think that's definitely something to worry about, and definitely something I find when I use doubles heavily: they push me toward a lot of very small methods, which I tend to like because they're isolated.
But then it also becomes kind of hard to look at the code and say, this happens here. Because what tends to happen is that part of this thing happens here, and part of it happens over there, and part of it happens over there. And that can be very hard to explain to new people learning the code base for the first time. And frankly, if you are the only person on your team who cares about this level of isolation and this level of testing, you are in for pain. That is a bad place to be. He says, not at all from experience, in any way, shape, or form. If you are the only person who is isolating code like this... actually, this is a completely separate story. I once walked into a project that was an 8,000-line Python script. The first line of the Python script was an if, colon, and then there were 4,000 lines, and then an else, and then 4,000 more lines, and they were heavily duplicated. And I thought: I'm a software engineer, I am going to break up this duplication so that this can be tested. And I wrote modules, and I wrote methods, and I wrote classes. And the end users of this tool hated it. Hated it, because they liked that they could walk through it. They liked that the shape of the code matched the way they thought about the thing the script was actually building, which was a configuration file. They found that the 8,000-line Python script matched their mental model in a way that made it really easy for them to follow. And once I split it up into what almost everybody in this room would consider better code, because it was split into methods, they didn't know where anything was. So design can be a little subjective, I guess, is the moral of that story. And again, I've been in the situation where I was the only person on a team writing tests, or the only person on a team trying to use test doubles.
And if nobody else is doing it, then nobody else is building up the level of design isolation you need to write this kind of code without a bunch of setup. So you come into a project and there's no way to create this object without creating seven other objects, and you already have 20 lines of test setup before you even get to the point. That's hard. Also, certain third-party frameworks can make using test doubles hard. Many people in this room use such a framework. It's called Rails. Rails is designed for developer simplicity in many places. This is a wonderful thing. I have used Rails professionally for most of the last eight years, and I like it for the most part. But it is explicitly not designed to a level of encapsulation that makes it easy to test double. In particular, association proxies, which you get when you use an Active Record association, are nearly impossible to cover with test doubles without isolating them in such a way that you take away most of the Rails functionality. Also, one of the things I tried to do in the first draft of the book examples was to get really cute about only saving objects at the end of a process and not saving intermediate things, and the reason I did that was so that I could stub one save method and completely isolate from the database. Rails doesn't like that. Rails really wants you to use its update methods and things like that. Rails really makes it hard. So this kind of thing can make test doubles hard. And this is what happened: I wrote those spy tests, and I got to a point where either the Rails piece or the complexity of the underlying logic made code isolation hard, and I threw up my hands. I was unwilling or unable to make the changes to keep the code isolated, to keep the doubles in the tests.
And I might say I was unwilling, because it made the code less clear, and, particularly in a book example, I did not want low cohesion; I wanted people to be able to see things. Somebody else could easily look at that and say I was unable, because I'm not a good enough software designer to actually do it the right way. I can't actually say no to that; it's entirely possible that somebody smarter than me might have done a better job. But ultimately, I wound up rewriting them. And this is emblematic of a process that happens in my actual code development: using the real objects is easy to do, and it breaks encapsulation, but it's easy. And at some point, that's a design tension, right? Generally I love encapsulation, but it definitely has costs, okay? So, a couple of things to take away from this. I have no idea where I am on time, so I'm assuming time has continued to flow forward during this talk; possibly for some of you it's flowing very slowly, and for me very fast. One of the takeaways is that Conway's law applies to tests. Conway's law says that the structure of your code will match the structure of your organization; it's sometimes glossed as, if you have four teams working on a compiler, you get a four-pass compiler. And it applies to your tests. Betsy was making a similar point this morning: the way you approach your testing, and the structures you bring to your testing, are going to have implications for the way you design your code, because you're going to be designing for testability along certain axes. That's a good thing. Designing for testability, I think, is a perfectly valid thing to do, but you need to be aware that it's going to be one of the things that happens. I think it's really important when you're writing tests, especially if you're doing test-driven development, to divide the tests up by thinking about what will make each test fail.
Not what will make the test pass. This test will fail under this circumstance; that test will fail under that circumstance. I think there are definitely cases where it's worth trying doubles: cases where behavior is more important than state. I don't write a lot of Rails controller tests anymore, but when I did, those were almost exclusively doubled tests, because the ending state was not at all important; the important part was that they actually called specific methods on my Active Record classes. Doubles are also a great idea if you have service objects or things like that which mostly do traffic flow; those are really great places to do test double testing. And I think everybody should really try, once, to write a small toy Rails app and try to write all the tests completely isolated from the database, and try to keep all the code isolated from Active Record. It is a very interesting exercise, and you'll probably come out of it understanding a little more about the good and bad parts of Rails's design, and the good and bad parts of isolating your test design. It's definitely worth taking a couple of hours and trying it. Also, if you're going to make heavy use of test double tests, you should have an actual end-to-end integration test that makes sure all the pieces line up nicely. That failing test will help drive you to where the next unit test needs to be. That's pretty much all I have. There's my lovely book again; a couple of self-indulgent things again. The current new book is called Take My Money. It's about payments, payment gateways, administering payments. If you do anything where your application takes credit card data, you should look at it. It will save you time and stress, I hope. You can find me online on Twitter; I'm @noelrap, also at noelrappin.com. Like I said, I work at Table XI. If you want to work for or with a cool consulting place in Chicago, you can find them at tablexi.com.
We just redesigned our careers page to be way more representative of the people there and the kind of work we do and the kind of culture we have, so I encourage you to check it out. I was hoping to announce a new project today, but it is not ready. But I am announcing a mailing list at tinyletter.com/noelrap; you can also get there from noelrappin.com. Then you will find out when I announce what the project is, which hopefully some of you will care about, because if none of you care, it's going to be a very, very short project. And anyway, the book is available at pragprog.com, and the page for Rails Test Prescriptions is at nrtest2. Thank you for indulging the last minute of shameless self-promotion, and for sitting here; I hope you enjoy the rest of your conference. And I have ten minutes for questions. Wow, I did go short. Everybody's been going short at this conference; it's really interesting. So does anybody have any questions, comments, thoughts? So the question was, can I expound on thinking about what's going to make your test fail? A lot of times, you approach it as: I need to write this test because I need to cover certain error cases. And I just encourage thinking about that in a slightly inverted way. This test will fail if I don't have a handler for nil. This test will fail if the code doesn't handle this particular edge case correctly. This test will fail if the collaborating object doesn't work. This test will fail if this third-party thing goes down. I think there's a tendency to write tests, and think about tests, in terms of what will make them succeed, and I think it's beneficial to think about them in terms of how they will fail. Does that make more sense? A little bit? Right, so the question was whether using let or other RSpec tools lets you have a lot of collaborators without realizing it.
And I think, yeah, it does. There's a tension between trying to make your tests easier by solving the underlying problem and trying to make them easier by putting a band-aid on the symptom. I use let a lot, and it's there to make setup easier. Most of the time that's a good thing; sometimes it's a bad thing, because it's blocking you from seeing some sort of problem in the code. Another thing that's worth trying is to write a test or two in rspec-given, which has a lot of the same features as let but is a little more explicit that this is setup, this is setup, this is setup. You might find that it makes things easier to see. On the other hand, that could leave you with a test suite that is half rspec-given and half normal RSpec, which, I can tell you from personal experience, is not a good long-term look.

How do I feel about verifying doubles in RSpec? So RSpec relatively recently added this concept of verifying doubles, where you say this double should match the API of a given class or a given instance. What that does is prevent a really common failure mode, where the double stubs a method or an attribute that doesn't exist on the real class: the code under test also refers to that method, the unit test passes, but the whole thing doesn't work because that method or attribute doesn't actually exist. I think that for the most part, they do solve that problem. They also introduce a partial dependency, because you're now partially dependent on the real class. There's not a whole lot of difference between an RSpec verifying double and FactoryGirl's build_stubbed, which takes an ActiveRecord class (Sam just almost had a heart attack). In practice you're getting to the same place: you're getting the API, but you're preventing the most dangerous, or most time-consuming, database actions from happening.
I understand that in an underlying sense they're very different; functionally, I see very little difference between them in how the tests get written. So I think verifying doubles do solve a problem, but the problem they solve is a little bit of a straw man, because if you have an actual integration test, you'll still have a failing test even with that misspelling. So I think they're fine. I don't have a problem with them; I use them. To some degree, they solve a problem of bad use of doubles, but if you were using doubles well, you probably wouldn't have that problem in the first place. Now Sam agrees. I can stay here for another minute.

So the question is, where do you draw the line on what to test in integration tests? One thing I didn't really get to in this talk is the idea of the testing pyramid: you have a lot of relatively fast unit tests and a relatively small number of slower integration tests. The way I often operationalize that is that I generally do the happy path in integration tests and handle the errors in unit tests, because it's often very, very hard to set up error conditions in an integration test, and it's easier to set them up with a well-placed test double or something like that in a unit test.

The question is about integration tests in a Rails context specifically, at the higher level and at the controller level. I actually think of this as a multiple-layer thing. The way I often write my Rails applications, the controller has almost nothing, most of the business logic is in a service object, and then there's the ActiveRecord stuff. So a lot of times I will have an end-to-end happy-path case that uses Capybara or whatever higher-level syntax, and then an inside-out sort of integration test that tests the workflow and does some error checking there.
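The "errors are easier to set up with a well-placed test double" point above, as a small sketch (all names hypothetical): making a real payment gateway decline in an end-to-end test is awkward, but a stub can decline on demand.

```ruby
# A stub that always reports a declined charge.
class DecliningGateway
  def charge(_amount)
    :declined
  end
end

class CheckoutWorkflow
  def initialize(gateway)
    @gateway = gateway
  end

  def run(amount)
    @gateway.charge(amount) == :declined ? "Payment declined" : "Thanks!"
  end
end

# The error path is now trivial to exercise in a unit test:
CheckoutWorkflow.new(DecliningGateway.new).run(500)  # => "Payment declined"
```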
And then also tests at the unit level. So I think of this as something that might have multiple layers, rather than just integration tests and unit tests; sometimes there's a middle ground there. Cool, thank you. On behalf of the whole testing track and Sam, thanks to those of you who came out to any of the talks this morning. We really appreciate your time.