This is Intro to Mocking, or why unit testing doesn't have to be so hard. I am Daniel Davis. A little bit about me: I have been a software developer for about eight years now. I am a senior consultant at Accela Consulting in Washington, D.C. And a fun fact about myself, I recently, well not recently, in December I ran the Jingle All the Way 5K dressed as a giant gingerbread man. If you want to see what that looks like, this is a candid picture of that, it was awesome. I have some recommendations on things you should not wear when running a race. Felt is one of them. So, all right, let's do a quick survey. Who here in the room has ever been frustrated by an experience writing a unit test? Show of hands. Yeah, everybody in this room, myself included. Thank you all for raising your hands. I think I came to this with the same sort of problem you guys have had. I struggled to write unit tests for so many years. It was very, very frustrating and hard until I finally learned about mocking, and that made the light bulb go off in my head. Everything sort of clicked. I think I have a rendering of that. That was sort of how it felt for me. But my point is this: I think a lot of us here in the room are in the same boat. We all kind of want to write better unit tests, and, you know, I talk with a lot of people here and they just say, I really wish somebody would sit down and write a presentation on this stuff and just tell me what I need to know. So I wrote this with that in mind, and if that happens to be you in your particular situation, hopefully this helps you. So let's talk about unit tests. All right. Hopefully you've seen this at some point in time. If not, this is Martin Fowler's pyramid of testing. The idea is that he tries to quantify how much of our testing should be unit tests versus integration tests versus, like, UI or manual testing, that sort of thing.
Obviously the significant majority of it is unit testing, which is kind of ironic because many of us are really bad at unit testing. We don't write lots of unit tests like we should. When I ask people about this stuff, I get all kinds of stories about why they don't. For example, they tell me, well, you know, it's really good when the problems are easy, but I run into this problem where, you know, I find these other things I have to test, and then I have to write tests for my tests, and the code gets really complicated. I call this the rabbit hole of testing. It's kind of like going down different layers and it becomes really, really hard, and so, you know, mocking can help us with that. No worries. Another thing I hear is that people tell me, I spend too much time writing these tests, right? I write lots and lots of code and then I have to write more tests for my code and then I have to write tests for my tests, and it's just a big problem. Mocking can help us solve that. Lastly, I hear people tell me that we can't really write tests for some things. There's just so much stuff you can't unit test. This is sort of a half truth, right? I admit that there are things that we cannot cover simply by using unit tests, but I think we way overestimate what that portion is. It's actually a very well-known and small fraction of things that we cannot test. Unit testing is an incredibly powerful tool. Mocking helps us make that more valuable and helps us fill that space. So mocking makes unit testing easier. So what are mocks exactly? What is that? Mocks form this sort of strange subclass of a thing called a test double in testing. The idea is that we typically have test stubs or test spies or mock objects, that sort of thing. You'll hear me use these interchangeably; I'll call all of these things mocks. Understand that in an academic setting this might be a more important distinction, but for the purposes of this presentation, I'm just going to call it a mock.
So what are those things specifically? Well, you have a test stub, and the idea is that it provides a canned response to a method call. Then there's a spy, which behaves like a real object until a certain condition is met, at which point it does something else. And you have a mock, which helps us to verify that something was called, to verify a behavior. So here's the thing. Does anybody in the room feel like, after I've given that explanation, that they understand mocking significantly better and they're ready to go test? No, of course not. For me, I think what helps when we talk about mocking is understanding the types of problems that mocking helps us to solve. So let's talk about the things that mocks help us solve. First problem: what happens if you have a dependency? So here I have a sample method foo, everyone's favorite. And depending upon the value you get back from method bar, it's going to impact how the method foo behaves. So this is a dependency. We could write this down as: foo of x depends on bar. The problem is that in order to understand how foo works, I need to also understand how bar works. And that creates a big problem. If I could eliminate that dependency, if I could figure out a way of gaining certainty with bar, then I could test foo more effectively. So mocking helps us with this. Second problem: what if I have a method that has no return value? So I have this method here, foo. And depending upon the value of x, it's either going to call bar or it's going to call something else. But how do I know if bar is called? Now sure, we could write a unit test for this. And that unit test could say, well, depending upon the side effects that bar has, we could sort of introspect it or figure it out somehow. This is a really bad way to write tests. It's really, really cumbersome and painful. If I had some way of just asking, did bar get called, I'd have much greater certainty and know better. So mocking helps us with that.
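To make those two problems concrete, here's a hypothetical sketch. The functions foo, bar, and notify are made up for illustration; they aren't from the talk's slides.

```python
def bar():
    # Imagine this hits a database or the network; in real code
    # we couldn't predict what it returns.
    return 42

def foo(x):
    # Problem 1: foo's result depends on whatever bar() returns,
    # so understanding and testing foo means understanding bar too.
    if bar() > 0:
        return x * 2
    return x

def notify(x, on_positive, on_negative):
    # Problem 2: no return value. From the outside, how do we know
    # which branch ran without digging through side effects?
    if x > 0:
        on_positive()
    else:
        on_negative()
```

With mocks, we could pin down bar's return value for the first case and record which callback fired for the second, which is exactly where the talk goes next.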
Lastly, what if I have a situation where I want to generate an error or an exception, but that exception is really hard to generate? So I'm not talking about a type error or a file-not-found. In this case, maybe something that would be very difficult to generate, like a memory error. I could technically generate this somehow, like by adding load to the server or whatnot, but that's just not a good way to test. It's very hard. If I could just reliably raise that, it would make testing a lot easier. So that's what mocking helps us solve. A couple other things. Let's say you happen to be using a popular web framework that insists that you have database calls for all of your methods and spins up a test database, like Django does. If you didn't have to have that test database, your tests would be a lot faster. In some cases, mocking a model makes a lot of sense. So this can be very helpful there. It also helps us to reduce complexity. If I'm writing simple tests, it's really easy to understand what those tests do. And it's really easy to write them because it's quick and simple. So mocking helps with that. One other thing I want to point out, and we don't think about this, but if you have people who are working with you, collaborating, and somebody hasn't written their method yet, if you can mock those methods that haven't been written yet, you can write unit tests against them. This is way better than if you had to wait for all of the components to be finished before you could start testing. So this makes you more productive and efficient. You can test things earlier, and that's better. So great. You're sold. You want to use mocking. How do I actually do this? Fortunately for us in Python, we have lots of options. The one we're going to focus on here is the mock library, with its MagicMock class. It's very popular. It's extremely powerful.
And good news for everybody: if you're using Python 3, it's included as part of the standard library. So that's awesome. There are some other options I want to toss out there in fairness. So if you are familiar with Ruby's FlexMock, there's FlexMock. If you've ever used EasyMock in Java, there's Mox. There's Mocker. There's Dingus, which has an incredible name for a framework. Fudge. Mockito, if you're coming from the Java space. Or if you're a big fan of doctest, there is MiniMock. But for the purposes of this, I'm going to focus on doing this with mock. Just know that any of the examples I do here, we could probably do in those other frameworks as well. All right. So let's do a sample problem. Let's actually put this into code and do something with it. It really helps to think about those types of problems. So let's create a little problem space for ourselves, a real-world situation. Let's say we wanted to build a Tinder competitor. That's a great example app. Tinder is the popular dating app where you get a random picture of somebody, and you either swipe to the right or to the left to indicate your preference. But we want something to appeal to the software development community, of course. Tinder's already taken. You've got Grindr. Let's think of maybe, I don't know, let's call it Docker, OK? So the Docker dating app is what we're going to use, because I think this is a great idea. That's my obligatory DevOps joke here. OK, so let's say you're building your Docker dating app, and we're going to create a method for this, right, where we get a random user. The idea is that I want to be able to grab a random user from the database and show that to the current user. The only criteria here is that I can't see the same person twice, and it can't be somebody that I've already swiped on. So very simple. We could write a simple implementation for that, something like this.
So getNextPerson is going to call the method getRandomPerson, and then we just go through a loop that says, if we've already seen that person, just keep grabbing random people until you get somebody you haven't seen, then return that person. Fair enough? OK, now, those of you who are astute Pythonistas will probably notice something interesting about this. There's a bug in this code that, of course, we found when testing, which is: surely no one could have seen everyone in the database, right? Because if you've seen everyone in the database, then it gets into an infinite loop. We're going to ignore that for now, because it makes the problem more complicated and makes the example less pretty. So let's assume that our database is sufficiently large. So let's represent the relationship here: getNextPerson is going to call getRandomPerson. So we could write a unit test for this, right? Very simple, very easy. So here's my unit test. And it has a general setup, right? We have an arrange step, our preconditions. We just say the dictionary of people that I've seen is empty. And the expected person that I want is Katie. My action is going to be to call getNextPerson. I store the result of that. And then I just compare the expected result to the actual result. Very simple, and good news: this totally works. It works. It's great. That's so simple, right? Except it also doesn't work. Sometimes it fails. And that's not cool, because, like, 60% of the time, it works every time. So what's going on here? This is a problem. What happens is that getRandomPerson obviously picks a random person out. So that means that there's no way for us to really have any certainty about it. Even if I knew the implementation of getRandomPerson, I could not write a unit test for this without mocking. But what if there was some way we could fix the value of getRandomPerson? What if we could make that certain? So how do we do that? Easy.
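A minimal sketch of the implementation being described, with the names rendered in Python style as get_next_person and get_random_person. The people list and the shape of people_seen are made up; the talk's real slide may differ.

```python
import random

PEOPLE = ["Mary", "Sarah", "Katie"]

def get_random_person():
    # Stand-in for a database call that returns a random user.
    return random.choice(PEOPLE)

def get_next_person(people_seen):
    # Keep drawing random people until we find someone we haven't
    # seen yet.  (Same caveat as in the talk: if you've seen
    # everyone, this loops forever; assume a big enough database.)
    person = get_random_person()
    while person in people_seen:
        person = get_random_person()
    return person
```

A test asserting that get_next_person(set()) equals "Katie" passes only when random.choice happens to cooperate, which is exactly the intermittent failure described above.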
We're going to mock all the dependencies. Yes. That's what we're going to do. Here's how we do that. There's a very simple decorator here called patch. And the idea is that inside of patch, we're going to pass in, essentially, the module.attribute. So in this case, my module is application, and the unbound method is getRandomPerson, right? When I put that decorator on there, it's going to pass an argument into my test method called mock_get_random_person. And then all I'm really going to do here is, there we go, all I'm going to do is set this thing called return_value on it. I'm going to set the value of that to be a fixed value, in this case, Katie. So what this does is that whenever that getRandomPerson method gets called, it's actually going to call the mock method instead. And it's going to return back that fixed value. And that's it. Then we have certainty in our method. Now we know how to get that value back. And good news: it works. Every single time, every single time, it works. You can call it over and over again. Even though that's a random method, you've fixed the value of it. You can test it reliably. So that's great. So let's take it a little bit further. Let's do some variations on this. We rarely work with unbound methods. What about a class? So I have a class. Again, I just restructured this so that the method is inside of a class called Application. How do you mock that? Same idea here. We're going to use patch.object instead of just patch. And you pass it the class name, and then the method that you want to mock. Everything else about this is exactly the same as the previous example. So it's very easy whether you're using a class or not. But what if you're like, I'm really kind of new to Python and I'm scared by decorators. I don't really like decorators. They're kind of magical and weird. So yeah, we don't have to have a decorator. We can get rid of that.
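Here's roughly what that looks like, sketched with patch.object on a hypothetical Application class (the class body is made up to match the description). For a plain module-level function you'd write @mock.patch("application.get_random_person") instead and the rest would be identical.

```python
import random
import unittest
from unittest import mock

class Application:
    def get_random_person(self):
        # Stand-in for the random database lookup.
        return random.choice(["Mary", "Sarah", "Katie"])

    def get_next_person(self, people_seen):
        person = self.get_random_person()
        while person in people_seen:
            person = self.get_random_person()
        return person

class TestGetNextPerson(unittest.TestCase):
    # patch.object swaps get_random_person out for a mock and
    # passes that mock into the test as an extra argument.
    @mock.patch.object(Application, "get_random_person")
    def test_returns_expected_person(self, mock_get_random_person):
        # Fix the "random" value so the test is deterministic.
        mock_get_random_person.return_value = "Katie"
        result = Application().get_next_person(people_seen=set())
        self.assertEqual(result, "Katie")
```

Because the mock always returns Katie, the test passes every single time, random method or not.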
So here's another example. All we're doing here is making a direct assignment. This is awesome because Python allows this. So you can literally just override that method with an assignment to a new object. You're basically setting that method to be a mock. And then we just set the return value. So that's kind of the idea here with this. But what if you're like, OK, I'm actually a bigger fan of context managers. I really love context managers. Well, good news. Context managers too. You can use your with statement. That's great. All right. So that's the general idea behind mocking for a dependency. But what happens if I want to call this thing multiple times? So here's our sample code. But what if I want to test the while loop inside of this? In the while loop, I'm going to get back a new person each time. So I write a unit test for it. But the problem is that when I set that return value, I don't really know who I should set the return value to. It's actually going to enter into an infinite loop if I do this, where it loops over and over and over again, because it's going to keep returning the same value. And if that same value is in the list of people I've seen, it just keeps going. So how do we fix that? There's an attribute called side_effect. And the idea here is we just make a slight change to our test. We set this thing called side_effect, and it will take in an iterable or a list. So if I pass in a list of the return values that I want, each time that method gets called, it will just return back the next thing in the list. So first it will return back Mary, then Sarah, then Katie. So if I'm testing that and that method gets called multiple times, it returns back those values. So that's really all there is to dependency management here. It's really not particularly complex, but let's recap. We can use mocking and patching to bring certainty to all of our dependent methods.
We can eliminate those dependencies in the code, even if those dependencies are unfinished. Notice that I didn't point out how getRandomPerson was implemented. It didn't matter. That code could even not work, and I was still able to write unit tests for this. Also, there are lots of different ways to do it. Pick your favorite. Show of hands in the room: who likes doing it without the decorators? Anyone? See a couple of hands. Yeah, a token group of folks. I find there are always a handful of people who are like, this makes more sense to me, this is better. And it's great. In some cases it makes more sense to do it in that style. In some cases it makes sense to use the decorator. It's up to your personal preference. All right. Second thing mocks help us with: mocking to verify behavior. So let's set up a problem space for us. Again, let's keep on with our Docker dating app. But let's talk about the matching system here. So when a user swipes to the right, if the other user has indicated that they like them, then what we want to do is send both of them a message. Send both of them a message saying you're a match. If the other user has indicated they dislike them, then we want to sort of let them down gently, let them know there's other fish in the sea, there's other opportunities. And if they haven't evaluated you yet, then what we want to do is send the give-it-time message. So that's just kind of the general setup here for how we react to someone doing the action of swiping. So a simple implementation for this, very easy. We could just have an evaluate method that takes in two people. If person one is in person two's likes, then we send both people an email. If person one is in person two's dislikes, then we call the let-down-gently method. And if person one is not in your likes or dislikes, then we're going to call the give-it-time method. Simple enough, but there's sort of a problem with this. How do I test this? It has no return values.
How do I know that this is functioning properly? We all agree that this has logic in it and it needs to be tested, but how would I do that? So let's focus in on the middle section here and let's try and write a unit test for it. We can do behavior verification with mocking, and it's very simple, basically the exact same. We're just gonna tweak it slightly. We have that patch decorator. So again, application.let_down_gently. It's gonna give us a mock of the let-down-gently method. In my arrangement here, I just have person one, named Bill, and person two is just a dictionary of the people that they have liked or disliked. I call my action, evaluate. The only thing that's different here is in my assertion I'm gonna check this attribute called call_count. What's cool with mocks is that they actually will record every time they've been called, and that will allow you to verify and say how many times were you called, and you can use that to verify the behavior. So if it was called one time, we know it's functioning correctly. It should have been called, given the situation I've created here. So that's nice, but you might say, well, a more robust way of doing this would be, what about checking the parameters instead of just how many times it was called, right? That makes sense. So we can do that too. Same exact setup here. The only difference is we're gonna call this method called assert_called_once_with, which is quite a mouthful to say, but all it's doing is checking two things: that it was called one time, and that it was called with the appropriate parameters. In this case, person one. So this will allow us to verify that that method was called with exactly what we expected. Now one variation of this is you might say, well, that's nice, but shouldn't we also check the other methods to make sure that they're not being called? So for example, those things, right?
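A sketch of that behavior-verification idea. The evaluate logic and the dictionary shapes are made up to match the description; here all three helpers are simply rebound to Mocks so the example stays self-contained, where the talk uses the patch decorator.

```python
from unittest import mock

# Stand-ins for the real send_email / let_down_gently / give_it_time.
send_email = mock.Mock()
let_down_gently = mock.Mock()
give_it_time = mock.Mock()

def evaluate(person_one, person_two):
    # No return value: all the interesting behavior is in which
    # helper gets called, which is why we verify calls instead.
    name = person_one["name"]
    if name in person_two["likes"]:
        send_email(person_one)
        send_email(person_two)
    elif name in person_two["dislikes"]:
        let_down_gently(person_one)
    else:
        give_it_time(person_one)

bill = {"name": "Bill", "likes": set(), "dislikes": set()}
alice = {"name": "Alice", "likes": set(), "dislikes": {"Bill"}}

evaluate(bill, alice)

# The mock records every call, so we can verify the behavior:
assert let_down_gently.call_count == 1
let_down_gently.assert_called_once_with(bill)
```

call_count answers "how many times were you called"; assert_called_once_with additionally checks the exact parameters of that one call.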
We might have a bug in our code somewhere that says, you know, maybe we're calling everything or whatnot. So how could we check against that? So that's great. That's a good idea. The only problem is we're gonna run into having to mock multiple things, right? So can we even do that? Well, of course, in Python, we have the ability to stack decorators. So if you need to patch multiple methods, you can do that. Just stack them one on top of the other and it works great. One thing I do wanna point out about this, though, is that if you stack the decorators, because of how Python evaluates them, you have to be very careful about the order in which your arguments come in. So for example, the mock for give-it-time is the first parameter as opposed to the third, because Python evaluates them from the bottom up as opposed to top down. People can find this kind of counterintuitive, but it's something to keep in mind so you don't end up getting those mixed up and calling the wrong mocks. Everything else with this is pretty much the same. We're basically just gonna look at the call count and make sure that it's at zero for those other methods, and verify our arguments with the one we meant to call. So if you're concerned about possibly mixing up the ordering on those decorators, you can try using patch.multiple, which gives a little bit more rigor and structure to it. It'll make it a little easier to fill those out, and then the mocks come in the order that you sort of expect. So it's just another variation of how you can do this, just another thing you can try if that makes sense to you, if that's what you'd like to do. Everything else pretty much stays the same. Okay. All right. So what about testing things that have been called multiple times? We talked about verifying the behavior of being called once, but what happens if we have a case like here at the top, where we call send email twice?
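A sketch of both variations on a hypothetical Application class: stacked patch.object decorators, where the bottom-most patch becomes the first mock argument, and patch.multiple, where the mocks arrive as keyword arguments named after the attributes so ordering can't get mixed up.

```python
import unittest
from unittest import mock

class Application:
    def send_email(self, person): pass
    def let_down_gently(self, person): pass
    def give_it_time(self, person): pass

    def evaluate(self, person_one, person_two):
        name = person_one["name"]
        if name in person_two["likes"]:
            self.send_email(person_one)
            self.send_email(person_two)
        elif name in person_two["dislikes"]:
            self.let_down_gently(person_one)
        else:
            self.give_it_time(person_one)

class TestEvaluate(unittest.TestCase):
    # Decorators apply bottom-up: the bottom patch (send_email)
    # arrives as the FIRST mock argument.
    @mock.patch.object(Application, "give_it_time")
    @mock.patch.object(Application, "let_down_gently")
    @mock.patch.object(Application, "send_email")
    def test_dislike_branch(self, mock_send_email,
                            mock_let_down_gently, mock_give_it_time):
        bill = {"name": "Bill", "likes": set(), "dislikes": set()}
        alice = {"name": "Alice", "likes": set(), "dislikes": {"Bill"}}
        Application().evaluate(bill, alice)
        mock_let_down_gently.assert_called_once_with(bill)
        self.assertEqual(mock_send_email.call_count, 0)
        self.assertEqual(mock_give_it_time.call_count, 0)

    # patch.multiple sidesteps the ordering question entirely:
    # each mock is passed as a keyword argument by name.
    @mock.patch.multiple(Application,
                         send_email=mock.DEFAULT,
                         let_down_gently=mock.DEFAULT,
                         give_it_time=mock.DEFAULT)
    def test_dislike_branch_multiple(self, send_email,
                                     let_down_gently, give_it_time):
        bill = {"name": "Bill", "likes": set(), "dislikes": set()}
        alice = {"name": "Alice", "likes": set(), "dislikes": {"Bill"}}
        Application().evaluate(bill, alice)
        let_down_gently.assert_called_once_with(bill)
        self.assertEqual(send_email.call_count, 0)
```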
We can't really evaluate the parameter that's getting passed in, because which call are we talking about? So let's take a look at how to fix that problem. In this case, we're going to use an attribute called call_args_list. And that's going to basically record every single time that the method was called and the parameters that were in that call. It returns back a list of call objects. The idea is that I can assert against that. A call is just a little wrapper around the parameters of your method. So we can evaluate that against person one and person two. So if we call it twice, say in the first call we're looking for person one, and in the second call I'm looking for person two. Okay. Everyone with me so far? Show of hands? Okay, cool. All right, almost done. I know this is the end of the day for you guys. If you need to take a brief kitten break, I've got a kitten for you. If you guys are maybe not cat fans, maybe you're dog people, I have some delightful corgis to make you happy. Okay. So we talked about three things that mocking helps us to solve, right? Mocking helps us to solve dependencies. Mocking helps us to verify behavior. And mocking also helps us to evaluate exceptions being thrown. So let's set up a problem, right? Let's say in our awesome Docker dating app that we wanted to have a payment system, because of course, adding a premium feature totally works every time. It's great. In this case, I'm just gonna use Stripe because it was convenient and easy, but you can imagine doing any other number of services. So I've created a simple submit-payment method. The idea here is it takes in a Stripe token. I've literally just copied and pasted this from their tutorial, more or less. I have an API key that I set, and then I create a sample charge. In this case, it's gonna be $10. You can imagine this being in your Django view code or maybe in one of your forms or something of that nature.
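Before moving on to payments, the call_args_list pattern just described can be sketched like this (send_email and notify_match are hypothetical stand-ins):

```python
from unittest import mock

send_email = mock.Mock()

def notify_match(person_one, person_two):
    # Called twice, once per person, so asking about "the" call
    # argument is ambiguous without looking at each call in turn.
    send_email(person_one)
    send_email(person_two)

notify_match("Bill", "Alice")

# call_args_list records every call; mock.call is the little
# wrapper around the parameters of each one.
assert send_email.call_args_list == [mock.call("Bill"),
                                     mock.call("Alice")]
```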
So even though I'm not necessarily pointing out Django-specific stuff, I'm just sort of pulling these things out to keep the examples clean, but this is very easily something that could be in your view code. In the case of the charge going through, we'll just return back the charge. If it ends up failing, though, let's say the card gets declined, well then it'll generate a card error, and we wanna catch that, and maybe in this case we'll just pull out the body of the error and we'll pass it back to the user. Some sort of default implementation here. It doesn't have to be particularly fancy, but that's kind of our setup here. So say we wanted to write a test for this. How would I go about doing that? If I go to Stripe's documentation, they have this really weird thing where they suggest putting in different card numbers. The idea is that different card numbers will return different values or different types of errors when it's in test mode. And so if I pass in 4242 4242 4242 4242, the ten-dollar charge goes through; if I pass in a different number, oh, a payment-declined exception, that sort of thing. So that's their recommended way of testing this. That's sort of ridiculous. It's sort of crazy that that's how we would have to test this thing. It's crazy because it would require us to make an external call to their API when we write our unit test. So that could be problematic. What happens if that API is down? What happens if that API doesn't work, or what happens if we are just running into all kinds of connectivity problems? It's gonna make our tests fail intermittently, and that's not good. It's also super unclear to people who are maintaining this what those numbers mean without them actually looking at the documentation. So imagine six months down the road you're looking at these unit tests, or someone else is looking at these unit tests. They'll see those card numbers and they're like, what does that mean? What does that do?
It doesn't make any sense to them. They have to go to the website and plug those numbers in. We wanna make sure that our code is really maintainable, and that's not gonna work for us. Also, we run into the problem of the Stripe token. This is the one that's the bigger frustration for me. It's not just a dictionary of credit card fields, it's actually an encrypted token. So if I were to pass this thing in, I'd have to actually reverse engineer and figure out what that value should be, and it's some sort of encrypted hash. So I'd have to generate it, and, well, talk about something that's unmaintainable and unreadable. It's a little token of some kind, but I don't know what that thing is. So that's no good. So clearly there's gotta be a better way to do this, right? We all know better now. Everybody here in the room knows there's a better way to do this, right? Why don't we try mocking it? Sure. Same idea here: we're using pretty much the same general pattern. We're gonna call that patch decorator, and we're gonna actually patch the stripe.Charge.create method. I'm gonna set up a sample card error, and the idea here is I'll just pass in whatever I want to set it up however I'd like, with whatever error messages I wanna put in. And then I'm going to call our good friend side_effect. Now you guys might remember this from earlier in the presentation. I said that side_effect takes in a list, but it actually kind of does double duty. side_effect can take in an iterable object, but it can also take in an error or an exception. So in this case, if you give it an error, what it will do is, whenever that method gets called, it will raise that exception. So it raises up and then we can catch it. So that's really nice, because then I can call my method and verify that it actually has the correct behavior. I just pass it in that card error and that's it.
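A sketch of that exception test. To keep it self-contained and offline, stripe.Charge and the card error are replaced by hand-rolled stand-ins; with the real library installed you would patch stripe.Charge.create and raise stripe.error.CardError instead, and the error's attributes would differ from this simplified version.

```python
from unittest import mock

class CardError(Exception):
    # Stand-in for stripe.error.CardError.
    def __init__(self, message):
        super().__init__(message)
        self.message = message

class Charge:
    # Stand-in for stripe.Charge; the real create() hits the network.
    @staticmethod
    def create(amount, currency, source):
        raise NotImplementedError("would make a live API call")

def submit_payment(token):
    try:
        # $10 charge, amount given in cents, as in the Stripe tutorial.
        return Charge.create(amount=1000, currency="usd", source=token)
    except CardError as err:
        # Pull out the body of the error and hand it back to the user.
        return {"error": err.message}

# Give side_effect an exception and the mock raises it when called,
# so the except branch runs without any network or real token.
with mock.patch.object(Charge, "create") as mock_create:
    mock_create.side_effect = CardError("Your card was declined.")
    result = submit_payment("tok_whatever")
```

No magic card numbers, no live API, and no reverse-engineered token: the declined-card branch is exercised reliably on every run.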
So it's a very simple, very easy way for us to generate exceptions in our code and test all of those branches. So let's talk about some of the takeaways here, right? Something easy, just to wrap this up for us. If there's nothing else that you guys remember from this presentation, it's that mocking solves three very specific cases for us. We eliminate dependencies, we verify behavior for things that have no return value, and we can generate errors on the fly if we want. The only thing that's kind of holding us back is we just need to practice with this. Lots of examples; hopefully you guys can make good use of that. If you wanna try it out on your own, you've got the docs on Read the Docs, very good documentation there, lots and lots of information. I pulled most of this from that. If you're using Python 2 you can just pip install it. If you're using Python 3 it's already built in, so it's already there, which is great. The way I learned was basically creating a bunch of simple test classes and trying it out and writing unit tests around them, just to get practice. So let's go out and write some tests. Yay, do it. All right. Questions? Cool. Yay. I went up to the mic. Woo. Okay, so thank you very much for doing this topic. I've been searching all of the web, all the material I can find that is not specifically Java related. I've been trying to absorb it, but it's specifically about mocking, and especially because when we're dealing with frameworks like Django, and I mean it's obviously not alone, and I know that there's that whole de-magicify Django initiative, but there's still a lot of magic. Yeah. And so what I've run into, because I'm dealing with Django and I'm not dealing with the hot dog stand. Yes. Like I said, which is great for practice, but we're talking about, all right, well, there might be this image field type that is a part of another model.
And so it's just, how, what is a good source for looking for examples of mocks that would really kind of need to be a bit more in-depth to do the right thing? Because, I mean, you have the situation where a model field, for instance, in the Django universe, can't be empty, or you can't just change the model to actually do some tests on it. So. Sure. Actually, I've spent a lot of time thinking about this sort of problem. This is a great question, too, because, well, I work with a lot of more junior developers and they ask, at what point in time is it in Django land, or in sort of generic custom code land? And I ask them, what thing are you trying to test? Are you trying to test the integration of how Django fits together, or are you trying to test some custom logic that you wrote? In some cases it makes sense to use Django's test runner, and sometimes it makes sense to use Django's test client. In other cases it doesn't. For mocking out models that have particular fields, you can give it a spec when you create your mock, and that at least helps somewhat. But in other cases it almost makes more sense to just use Django's test runner and test client and grab data from a fixture. So they're like, oh no, you said to only use mocks ever, you know. It's like, well, be pragmatic, right? If I'm trying to test something that I'm getting out of a database and it has a complex relationship that I want to test and make good use of, that's not necessarily logic per se, and we could be using the stuff that Django has built in where it makes sense. You know, it's more of a murky area. There's not a good, clear and hard rule like, you need to just use this particular thing, or only do it in this case. If it makes sense and it's pragmatic, do it that way.
If it causes you a lot of pain and effort, then, you know, don't do it. Does that make, does that answer your question? But, I mean, the only thing is that there are those scenarios where I really feel like it's my lack of knowledge of how to do it, because there are those cases where, no, I really would like that to appear as it needs to or as it's expected to be, but I'm not really trying to test that. I'd rather not have to muck around with dumping files onto the disk or, you know, shoving stuff into the database, because that's not the thing I'm trying to test, but I feel like with the framework I almost get pushed towards that kind of, you know, hybrid integration unit test, but it's not really a unit test. Right, exactly. So it's because it's making calls to the database, because it's making calls to bits and pieces of Django, like I'm grabbing a setting or whatever, you know, it's something that isn't really a unit test, it's more of an integration test. And at the end of the day, it's not the end of the world. It's good to have integration tests, and it's perfectly okay to do that. I mean, you're writing tests, that's good no matter what, you know, you're making your code better. But I feel for you, I have the same problem and I don't really have a great solution for it, you know. I know, I know, next year, we'll work on this, but it's a great question, something I've been thinking a lot about. Other questions? One question I would ask is, I've spent a lot of time recently dealing with a lot of SOAP APIs, or very heavy external reliance on services that aren't always documented for how they're gonna behave in certain instances. Do you have any kind of tips or pointers on how to use mocking in order to make some of that stuff easier?
This gets at one of the things that's tricky with mocking: one of the downsides of using a mock is that it will let you call whatever you want on it. Even if a method technically should or shouldn't exist, the mock will say, that's fine, you called it; I don't know if it's really there or not, because it's a dynamic language. When you're working with something that's a little unknown or confusing about what things should be, the best advice I can offer is: if it's uncertain, make it explicit in your test. The idea is, I don't actually know if these fields are there. I find this a lot when I integrate with an API other people have written; it kind of works this way, but they don't have really strong documentation, and I want my stuff to be bulletproof, very concrete and well defined. What you want is for your test to fail if that method or that attribute doesn't exist: hey, I was expecting that this thing would be there, I was expecting that this method would be available, but it wasn't. That forces a discussion between you and the people who maintain the API, or it raises a red flag for you and you go, oh crap, this thing was supposed to be there; that was an assumption I had made, and now I have to rewrite some things to accommodate that. Does that make sense? Cool, other questions? Go for it. There's a create_autospec helper, which can help prevent it from accidentally passing. Exactly, yes, thank you so much. create_autospec will figure out what the methods are supposed to be on there.
And then if you try and call one that doesn't belong to that class, it will raise an exception instead. So yes, excellent point. Thank you. Anyone else? Yeah. Do you often practice TDD when you're building stuff out, or do you tend to write the code first and then the tests? To be honest, I tend to write my code and then write the tests. I was talking about this with a junior developer who was trying to learn Django and Python, and the challenge he kept running into was that he would try to write a test first, but he didn't really understand what the expected result should be. Because he didn't know the expected result, it was hard for him to write a test; he spent more time noodling around trying to figure things out. What I wanted him to do was more experimentation: what happens when you call this, what does this do? And then verify your assumptions with testing. I like that approach; it's less topsy-turvy, and it's easier for me cognitively to think about. But in the TDD vein, I would say: work on a small section of code, stop and write tests for it to verify that it works, and then move on, rather than writing the tests first and then trying to write your code. So it's not exactly TDD, and I'm not an evangelist for it, but if you're writing tests in any capacity, you're at least doing something better for your code; you're quantifiably improving its quality. So I think we're getting close to this with your last answer, but one of the challenges with this kind of approach is that you have to know about the implementation in order to know what to mock, right? So you end up having brittle tests that are tied to the implementation way more than you might like.
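As a sketch of that autospec behavior, assuming a made-up `PaymentClient` class in place of a real third-party API client:

```python
from unittest import mock

class PaymentClient:
    """Made-up stand-in for a third-party API client."""
    def charge(self, amount, currency):
        raise NotImplementedError("hits the network in real life")

# create_autospec copies the real class's attributes AND signatures.
# instance=True makes the mock behave like an instance of the class.
client = mock.create_autospec(PaymentClient, instance=True)
client.charge.return_value = {"status": "ok"}

assert client.charge(100, "USD") == {"status": "ok"}
client.charge.assert_called_once_with(100, "USD")

# A method the real class doesn't have raises AttributeError:
try:
    client.refund(100)
    missing_method_caught = False
except AttributeError:
    missing_method_caught = True
assert missing_method_caught

# Signatures are checked too: a wrong argument count is a TypeError.
try:
    client.charge(100)  # missing `currency`
    bad_signature_caught = False
except TypeError:
    bad_signature_caught = True
assert bad_signature_caught
```

The same checking is available when patching, via `mock.patch(..., autospec=True)`, which is handy when the thing being replaced lives in another module.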
Do you have any strategies you might recommend for avoiding that kind of problem? When you say brittle, can you clarify? Well, you know that get_random_person is being called by that function, right? So you have to know exactly what to mock inside the implementation of what you're testing. I've thought about this a good bit. It seems to me that when I write tests, at the end of the day, I have to know somewhat how that method functions. The actual details of how it gets implemented can shift, but when I write a test, I'm setting in stone what I want this thing to be and how I want it to behave. It's the point where the rubber meets the road. So I don't think it's necessarily brittleness; it's more like, okay, I am writing down in code exactly what I think this method should do and exactly how I think it should behave. If I think it should call this method, it's going to call this method. And when somebody changes it to not call that method because they refactored it, that test will fail. And that's good; it forces that moment of, oh shoot, I changed something, is this still a valid test that we need to keep? Obviously we're trying to capture some of the logic of it. In the examples, I know these methods are being called, but they fill in the logic of my decision tree, the things I'm trying to test. So it's a gray area of sorts, you know? I guess in regard to some of the brittleness problems, it depends on the problem, but a lot of times I'll have a fixture, my mock will load a fixture that I've generated, and I'll use a makefile to document how I generated that fixture. But my question for you is: do you have any experience with mocking datetime? I've gotten it to work a few times, but I don't always get it to work.
If you just use mock on datetime, it won't work because it's implemented in C. Right. But there's a library that helps; I've always managed to get it to work without resorting to installing another library. Just wondering what your personal experience is. No, actually I haven't worked with that, surprisingly. Thankfully I haven't had to deal with time zones and that sort of stuff, so I unfortunately don't have any wisdom there for you, but best of luck. I feel your pain, you know? I used arrow to get around that problem. Oh, arrow. Cool, I'll have to check that out. So, arrow for working with time zones. Cool, thank you. Thank you guys.
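For what it's worth, the usual workaround is to patch the name `datetime.datetime` where the code under test looks it up, rather than trying to set attributes on the C-implemented type. A minimal sketch, where `greeting` is a made-up function under test (freezegun is a commonly used library that packages this pattern, and may be the helper alluded to above):

```python
import datetime
from unittest import mock

# Made-up function under test that depends on the current time.
def greeting():
    hour = datetime.datetime.now().hour
    return "good morning" if hour < 12 else "good afternoon"

morning = datetime.datetime(2024, 1, 1, 9, 0)
afternoon = datetime.datetime(2024, 1, 1, 15, 0)

# Setting an attribute ON the type fails: it's implemented in C.
try:
    datetime.datetime.now = lambda: morning
    c_type_is_writable = True
except TypeError:
    c_type_is_writable = False
assert not c_type_is_writable

# Instead, replace the whole name where the code looks it up. In a
# real project the patch target is usually "myapp.views.datetime"
# rather than the stdlib module itself.
with mock.patch("datetime.datetime") as mock_dt:
    mock_dt.now.return_value = morning
    assert greeting() == "good morning"
    mock_dt.now.return_value = afternoon
    assert greeting() == "good afternoon"
```

The real datetime objects are created before patching, so attribute access like `.hour` still behaves normally inside the `with` block.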