Yes, as I'm sure you know at this point, I'm an engineer at Weedmaps, but I'm lucky enough that this is actually the third company where I've worked professionally in Elixir, getting code out onto actual servers. What that means is that I've got at least three places where I've made huge mistakes, and even better, I'm always part of a team, which means we get to make mistakes together. We also get to be successful together. One of the interesting aspects of speaking at a conference that's local to where I am is that I'm looking out at the audience and actually seeing a lot of the people I've made those mistakes and had those successes with, and I'm about to share a bunch of them. I hope they still like me afterwards. But that's a big part of what we do, right? We make mistakes, we figure out what we did wrong, and we move on.

One of the big tools we have to help with that, and to help us prevent mistakes in the first place, is testing. But testing is hard, and most of us came from another language where we had an idea of what we were doing, we knew what we were up to, and then we got into this new language where everything's concurrent, there's crazy stuff going on all over the place, and we can do anything. I mean, we spent the last two days having people tell us how awesome this stuff is and showing us ways we should be leveraging this awesome language and the BEAM that we haven't thought about yet. That's great, but I feel like a lot of us are still finding our way, figuring out how we should be testing. The downside is that I'm not going to give you any magic bullets today, telling you, "This is how you test, and everything will be good from this point forward." But what I can do is talk about how we get better at that process.

So let's take a step over and talk about the title of this talk. What is an insert-noun-here whisperer? Popular culture gives us such things as the dog whisperer. Somebody asked me if I was going to have pictures of him, and I promised I did. The other one I'd heard of is the horse whisperer, so I found some images of people who go by that title; there was a movie about it. Then the whole thing started taking off and we got whisperers of all sorts, right? But what does it mean, then, to be a test whisperer? It really comes down to: that's what you name your talk when it's right before the submission deadline and you haven't thought it through very thoroughly, and it's kind of a bad title. More importantly, as I did more research while building out the talk after it got accepted, I started realizing that one of the big aspects of the whisperer phenomenon was about talking, whispering, communicating to a thing, and much less about listening. What I'm really focused on today is listening to what your tests are telling you. What is it that we get back? You can tell your tests all you want; you're the one writing the code, so getting them to listen to you is probably not too complicated. So it's not the best title.

But when I mention listening to tests: tests are just code, and we already have a common concept of listening to our code. It's called the code smell, right?
A code smell is when you come across code and you just sit there and go, "Oh," because you've learned over time to recognize that a given pattern, or potentially anti-pattern, is going to lead you to future issues. Or it means there's something already there that you know you're going to have to go refactor before anybody realizes that you, or somebody on your team, or your team as a collective, has done that thing. You've got to go take care of it, right?

The same thing actually applies to tests, and we've got a concept of a test smell. But a test smell isn't really your tests telling you there's something wrong with your tests, the way a code smell is about the code. It's when your tests are telling you there's an issue with your code itself. And so with that, I'd like to start over. I'm Jeffrey Matthias. I'm an engineer at Weedmaps, and I want to talk about being a test smeller. I'm guessing the talk may not have gotten accepted had I done this, but now that I've got you as a captive audience, and that's exactly what you are, we're switching over.

So, tests in general: what are they good for? We can either do this interactively or I can just tell you; it's up to you. Who wants to tell me why they like tests? Sorry, somebody? There you go: they tell us how to use the code base. They help us figure out what's supposed to be going on. They keep us from forgetting corner cases. Regressions, oh, I love that one. More importantly, they keep us from breaking things that are already working, right? So there are lots of reasons we do it. Fortunately, I think I'm talking to a room where most of the people probably write tests. Some of you do it beforehand, some of you after. I'm not going to get into that today; that's not what this talk is about. (You should do it before.) The point is that we recognize that tests are really good. We also know, because we write tests, that they interact directly with our code.

I personally love testing. Before I got into coding, my background was sculpture, product design, and automotive technology, which is just a fancy way of saying I went to school to be an auto mechanic, only to find out that I was a terrible culture fit. With the sculpture and the product design, I was making these things and pouring all this energy and time into them, and then I would put one in front of somebody and it either worked or it didn't. What I mean is, it worked for them or it didn't work for them; I had either communicated successfully or I hadn't. With cars, I had very clear guidelines on what was supposed to be happening in that car as I was diagnosing and fixing it, but I wasn't making anything; there was no creativity. Coming to code, I get both of those things. I get to create a new thing, and I actually get to know that it works before I hand it over to somebody. Can you imagine if the first time we ever knew that a website worked was when we pushed it live and told the world about it? It'd be awesome. That's one of the things I love about what I do here. Don't get me wrong, I still like those other things; I haven't completely abandoned them. But people who work with me will tell you I'm way too big into tests, and that's because they're so useful for helping us work together effectively as a team.
One of the things I found early in my career, and by career I mean specifically my coding career, was this quote from Charles Kettering. Kettering was an engineer who ended up being the head of research for GM, but he invented the automotive distributor. Up until the last couple of decades, when the technology got way better, that was the system that basically let your engine run: it delivers the spark from a single coil to each individual cylinder's spark plug at exactly the right time so that your car runs. You may not know it, but we all owe a lot to this guy being pretty smart. What he said was: "A problem well stated is a problem half solved." I feel like that applies really well to tests. Part of the reason I start writing tests first is that defining the test helps me get to know what I'm building. That doesn't mean I don't go play around and build stuff first, but then I go in and define that test.

So what we're going to do today is look through places where tests gave us smells. I'm going to go through three code examples. Again, nothing I'm showing today is a thing you go out and apply verbatim in the world; it's more about trying to get a sense of what it looks like to recognize these things. And then you're going to have to go fail a bunch.

With our first example, and yes, I'm aware there's nothing on the screen right now, I just want you to look at me and listen to my awesome voice. A little setup first: none of the code I have here runs, and some of that is going to be very obvious. These are reconstructions of problems from code bases I no longer have access to, but the patterns and the concepts are there. So when you spot a typo and think, "No, that wouldn't compile," don't worry about it. Just pay attention to the general pattern.

A lot of this stuff comes out of the fact that at every company I've worked at, we're typically moving really fast. For whatever reason, the companies I've worked for have wanted to make money, and getting things in front of people has been part of that. Priorities, I know. That means a lot of the time we're not paying attention to what we're doing, or we think we're paying attention and we miss things. This first example comes from a time when, again, we were scrambling. A junior engineer came to me and said, "Hey, I'm building out this new controller, but I've got to tell you, testing here just sucks. The controller code is easy, but writing the tests is absolutely miserable." I asked what he was doing, and he said, "I'm looking at one of the other examples, and I'm just kind of copying that and then modifying it for what I'm doing." So I looked at the code with him. Honestly, nothing in the tests themselves is super important, except that we see we've got a POST and, I don't know, a pile of tests against that single POST call. I said, "OK, well, show me the code you're looking at," meaning the example he was pulling from. And it looked like a lot of copy and paste, with just a couple of attributes that are different, right?
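Since you can't see the slides, here's a loose reconstruction of the shape of what we were looking at. The resource and payloads are hypothetical; the junior engineer's new test file was this same thing with the resource name and a couple of attributes swapped:

```elixir
defmodule MyAppWeb.WidgetControllerTest do
  use MyAppWeb.ConnCase

  describe "POST /api/widgets" do
    test "creates a widget with valid params", %{conn: conn} do
      conn = post(conn, "/api/widgets", %{"widget" => %{"name" => "sprocket"}})

      assert %{"id" => _, "name" => "sprocket"} = json_response(conn, 201)["data"]
    end

    test "returns errors with invalid params", %{conn: conn} do
      conn = post(conn, "/api/widgets", %{"widget" => %{"name" => nil}})

      assert json_response(conn, 422)["errors"] != %{}
    end

    # ...plus a pile more cases, all driven through this one POST:
    # duplicate records, missing associations, authorization failures, etc.
  end
end
```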
So it's effectively the exact same test, but that meant getting down into those tests and identifying the little bits that were different was really quite painful. I said, "OK, great. Let's go look at the actual controller code." That was the controller code for both of those controllers, and there's something going on here: we have some macros. So we're going to dive into what those macros are doing, and then take a step back and look at the rest of it. This is the top part of the file. We're going to focus specifically on the create function, what it's doing, and what it's calling below. There's a bunch of code below that effectively gets called and then expands out. The catch is that everything we're talking about today applies to every one of these actions, every one of the functions these macros generate. When we look at the function below, you can put it in the context of whichever controller you're working with; this was one of the controller actions. And in the else, each of these had a good bit of handling logic. The happy path was relatively straightforward, but it's robust: it covers a lot of cases of what the behaviors could be, right? (I'm sorry, I realize I now have a slide that highlights the things I was just pointing at. Just repeat whatever I said in your head, but with this on the screen.)

Now, this is not a conversation about whether to macro or not to macro, because at this point this was really just us using macros to DRY up our code, and there are other ways to do it. The point is that we had picked this particular place to build our abstraction. And don't get me wrong, I think macros are really cool, and I don't want them to never be there, but sometimes they're not the right place. Again, this isn't really that conversation. Use macros. What we can do is look at the fact that the way this controller was written forced all testing of those behaviors to cover all those use cases all the way from the controller down the stack and back. The issue is that this controller, if we go back and look at it, is actually mixing HTTP concerns and business concerns. We did it this way because look how fast we could knock out identical controllers; we had a bunch that were very similar, and it really sped us up. But look what it was doing to us from a testing standpoint, right?

By re-examining how we were doing this, we got the opportunity to recognize that what we should have done was split the controller-specific logic from the business logic. You've got some service layer; you can call that thing whatever you want. Everybody's got an opinion, and I'm sure they're happy to tell me why that's not the right name. Good. You pull it out into a thing and you separate just those two layers of logic. That gives you the ability to not have to deal with decoding in all of your tests: dealing with the JSON payload, and the way that driving arguments through the testing framework around the controller actions makes it more confusing what's actually being passed in.
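Here's a minimal sketch of that split, with hypothetical names; the real thing had more actions and more handling, but the shape is what matters:

```elixir
# The controller keeps only the HTTP concerns...
defmodule MyAppWeb.WidgetController do
  use MyAppWeb, :controller

  def create(conn, %{"widget" => params}) do
    case MyApp.WidgetService.create(params, conn.assigns.current_user) do
      {:ok, widget} ->
        conn |> put_status(:created) |> json(%{data: widget})

      {:error, :unauthorized} ->
        conn |> put_status(:forbidden) |> json(%{errors: %{detail: "forbidden"}})

      {:error, errors} ->
        conn |> put_status(:unprocessable_entity) |> json(%{errors: errors})
    end
  end
end

# ...and the business rules live in a plain module you can call, and test,
# with ordinary Elixir arguments instead of JSON payloads.
defmodule MyApp.WidgetService do
  def create(params, user) do
    with :ok <- authorize(user, :create),
         {:ok, widget} <- insert(params) do
      {:ok, widget}
    end
  end

  # These two stand in for the real authorization checks and Repo calls.
  defp authorize(%{admin: true}, _action), do: :ok
  defp authorize(_user, _action), do: {:error, :unauthorized}

  defp insert(%{"name" => name}) when is_binary(name), do: {:ok, %{id: 1, name: name}}
  defp insert(_params), do: {:error, %{name: ["can't be blank"]}}
end
```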
It also means that all of your tests don't have to go through that controller to exercise that business logic. If we go back and look at what that would have looked like, it's essentially the sketch above: this is what we had, and that's what we'd do instead. (I know the code formatter is going to love moving that `do` on me, but that's just a thing I like to do to make it more readable.) This gets us back down to significantly simpler cases. It's not golf, so I don't care that we removed X number of lines of code, but the actual cognitive load of just this controller test is significantly lower for having done this. The tests for that individual action actually get down to just a success case, a failure case, and, it looks like, a second failure case because there was some authorization. Depending on what you're doing, you may even be able to handle that authorization at a different level, but we were focused on where the code was right then. As far as the service layer goes, if all that logic is still going to be used across all the different resources, you pull that service layer out and test it individually as a macro. If you're from the Ruby world, it's like testing your module or your mixin on its own: pull it into a generic class and make sure it behaves the way you expect, but you don't need to repeat those tests over and over again. And there are different ways you can build that out or handle that solution.

The big catch, though, isn't that this is how you should build your code. The big catch is that I had a junior engineer who, by speaking up and saying, "Hey, this sucks. I don't know what's going on here. You guys are the ones teaching me stuff, but I hate this," caused us to re-examine what we had done and recognize that, yeah, we built that wrong. We were moving really fast, we thought this saved us time, and we hadn't slowed down ahead of time. That was really helpful, right? So speak up. If you see stuff, get somebody else's opinion. If you're not sure it's a smell, get somebody else's opinion.

The next example is a little different. It's not necessarily a thing I came across once; it's just something I look out for. It evolved out of another conversation with yet another coworker, because that's effectively what I do: grab their work and talk about it on stage. This test is actually pretty clean. It's got a mock; it's using the Mox library. If you're not familiar with that, check it out. It's newer, it's from José, and it's a nice way of doing things along the lines of that great Plataformatec blog article that came out a while ago. We've got a single assertion, which is nice. So it's a pretty clean-looking test, but to me there's a red flag: the capture_log. There are reasons why somebody would add a capture_log. The biggest one, and it's one I like, is that there's nothing more gratifying to me, after working on a bunch of code, than running my tests, watching those little green dots line up, and seeing this big field of green. I'm like, yeah, I know what I'm doing. So I can absolutely understand why somebody's capturing logs here.
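Reconstructed roughly, with hypothetical module names, the test looked something like this (the mock would be defined elsewhere with Mox.defmock/2):

```elixir
defmodule MyApp.SyncWorkerTest do
  use ExUnit.Case, async: true

  import Mox

  setup :verify_on_exit!

  # The red flag: silencing all log output for this test.
  @tag :capture_log
  test "returns the error when the upstream call times out" do
    expect(MyApp.HTTPClientMock, :get, fn _url -> {:error, :timeout} end)

    assert {:error, :timeout} = MyApp.SyncWorker.fetch("/entries")
  end
end
```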
But the problem is, and this is something I like to do: I'll go remove that tag. Sometimes you just see what's going on and why they put it there. Other times you see additional stuff: there's something else breaking while that test is running. In this particular case, I'm guessing this warning is intentional logging, and it's probably an error-case test, right? That intentional logging is happening, and they were trying to capture it to get clean output. But what they ended up doing, and "they" meaning me in this case, was hiding the fact that there was something breaking in the background. Now, in this particular example, I think it's just an issue of setting up database connection sharing correctly in the test. But the catch is that we do stuff that's very multi-process. We spin things out all over the place, and seeing processes die, seeing red show up on your screen amongst all the green dots, and then still getting "0 failures" at the bottom, there's a cognitive disconnect there that does not work for me.

So what I'm going to tell you to do when you come across situations like this is to switch where you put that capture_log: move it down into the test itself and explicitly assert for the error you're expecting to see (that's sketched just below). Done that way, you'll actually get honest failure output if something else comes up, and you can just comment out the assertion or the capture_log if you need the full story of what's going on. That way you're not hiding the fact that you've spun some stuff up. And that's one of the biggest things about this, right? Elixir gives us all this ability to spin stuff out into new processes, and it's not that we shouldn't be doing that, but from a testing perspective we need to make sure we're isolating the right things and that we know what's happening in our application during the time that we're testing. Fortunately, something like this helps us get back to our pretty row of green dots honestly.

I've got good news for Johnny: I'm going way too fast, so I think I bought you some time. I actually have only one last example, and it's about a time I was writing a test case for existing code. I talked earlier about testing first or last and whatnot, but the fact is, even if you test-drive, at some point you're going to be writing tests around code that already exists. In this case it was a bug, but when you're doing features there's already code in place, right? Except for the very first time you green-field a project, there's no such thing as writing tests that don't interact with some sort of existing code. In this particular case, I was dealing with a bug. That bug came up because we were processing things off of RabbitMQ, and a service that dropped messages into RabbitMQ had a bug in it and was duplicating messages. In the case of deletes, we learned about this through Honeybadger.
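Before we dig into that bug, here's the inline capture_log pattern I just described, sketched against the same hypothetical test; the asserted message text is made up:

```elixir
import ExUnit.CaptureLog

test "returns the error when the upstream call times out" do
  expect(MyApp.HTTPClientMock, :get, fn _url -> {:error, :timeout} end)

  log =
    capture_log(fn ->
      assert {:error, :timeout} = MyApp.SyncWorker.fetch("/entries")
    end)

  # Assert on the logging you intend; anything *else* that breaks in a
  # background process still shows up red instead of being swallowed.
  assert log =~ "upstream request timed out"
end
```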
So, back to the bug: we were doing our error collecting, and we started seeing all of these "can't delete stale entry" errors showing up. It's because we were effectively processing the same message twice. Not exactly; there were actually two identical messages, and we were processing them right at the same time. One was getting there first because we had two servers up. Well, I'll talk about that in a second. The thing is, I needed these errors to go away fast, because everybody was pointing at this like, "Hey, this is a thing. You guys have a bug, and that's bad." Digging into it, we saw that we had those identical messages, and I thought, that's cool, because in Elixir I can use async tasks so easily to recreate this race condition. And I was pretty pleased with myself: look how easy it was to write this test and duplicate the race condition (the test is sketched just below). I was working with an engineer who was newer to Elixir, and I was showing him, "Hey, lesson time. Look at this Task thing. Let's talk about how that works." And he said, "Yeah, but why is there a race condition?" No, shut up. Look at what I did here. I'm async-ing stuff and it's really easy. Look what this language lets me do; take that back to whatever language you were working in before and stuff it. But ultimately, he had a really good point: why was there a race condition? Because I was running fast, I was more focused on just fixing that error, and again, Elixir made it really easy to replicate. I did that, it gave me an excuse to test-drive putting a database lock in, and I was done, until he asked me that question.

So, why did it happen? Because we had that service double-publishing, and we had two servers. It's good to be robust: we needed to make sure that if one went down, we had another one going. But both of those servers had their own listeners for that particular queue. They were each grabbing these identical messages and trying to process them. They were both pulling the record from the database, seeing it, and then both trying to delete. One got to the delete first, and the database told the other one, "Wait, no, this is gone already," and so it was erroring out. To a certain extent, it didn't really hurt anything. On the other hand, trying to duplicate this race condition highlighted the fact that we had been running fast enough that we had never really talked about how to ensure a single processor or deal with any sort of back pressure. There are a few different ways we could have resolved it, but at the end of the day it would have been something like a single listener somewhere that fed messages to us individually, so the two servers weren't competing, or connecting the two nodes of the application and telling them to share a single listener.
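For the record, the reproduction itself was about this much code, which is exactly why I was so pleased with myself. It's a loose reconstruction: the consumer module, the factory helper, and the message shape are all hypothetical stand-ins.

```elixir
test "handles the same delete message arriving twice at once" do
  entry = insert(:entry)
  message = %{"action" => "delete", "id" => entry.id}

  # Two async tasks stand in for the two servers' listeners, each
  # processing an identical copy of the message concurrently.
  tasks = for _ <- 1..2, do: Task.async(fn -> Consumer.process(message) end)
  results = Enum.map(tasks, &Task.await/1)

  # With the database lock in place, one delete wins and the other no-ops
  # instead of blowing up with a stale-entry error.
  assert Enum.sort(results) == [:ok, :ok]
end
```

(In a real suite those spawned tasks would also need the Ecto sandbox in shared mode, which circles right back to the connection-sharing issue from the last example.)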
Anyway, there were a few different ways we could have handled it, but at the end of the day it came down to this: we had an entire-system issue, an architectural issue, that I was about to hide with a clever, and I mean that in a very condescending way toward my own work, a very clever use of how easily we can make asynchronous calls.

So with that, man, I went really fast. The biggest thing about this, just like with code smells, is that the only way to really hone it is to pay attention. Whenever you make a mistake, whenever you realize you've done a terrible thing, go look at your testing. Was there actually a chance you would have caught it there, but you did something clever in your tests, something you were so proud of that you failed to notice the bigger issue? Every time you catch a new mistake, go look at your tests. Figure out why they didn't catch it. Figure out if there were things in there that should have been telling you, that were there the whole time, but you were too excited and weren't paying attention. And I guess the other big thing is that with Elixir, we have a whole new set of smells we're having to get used to, so be aware of that. Remind yourself: hey, I may have been doing this for a little while, but most of this stuff is still new to most of us. It's okay to be wrong; just keep working at honing that skill set. Your tests are a huge component of your code base. They don't run in production, but they help make sure that what is in production is running well.

So with that, I'm going to wrap up. Again, my name is Jeffrey Matthias. I'm an engineer at Weedmaps. I would love to hear your own failure stories; I'm happy to keep adding to the collection here, and at some point I may actually start blogging about this stuff as these come up. The other thing is that I'm one of the organizers of the local Denver Erlang & Elixir Meetup, and we have a meeting on Monday. Those of you who are either local or going to be in town through Monday night, please either hit me up or find us online through meetup.com and come join us. Cheers, thanks. Oh, and I can take questions. I've got time. Yeah.

Audience: You were talking about how the only time you're working fresh is on a green field, where you're not writing tests for existing code. But more often than not, for me, you find bugs later. So even in the green-field cases, you write tests, then something you didn't intend to be possible happens, and you're going back and adding tests to existing code even then. Are there any pointers or smells that could mitigate that a little bit, so you can think of all those edge cases from the get-go?

Well, I think we've had some really interesting introductions to, or at least peeks at, property testing this week, or whatever this is. But ultimately, I think it comes down to the other thing you should always be doing: running your code.
It's really easy to forget that sometimes, because we're so good at testing that of course we covered all the things. So I don't know that there's necessarily a smell that will identify those cases for you, but I highly encourage that as you're writing those tests, running that code, and getting those little green dots, you occasionally just spin up your application and try it. That will help you identify those things pretty fast. And yeah, literally the very first code you ever write is the only place where you won't be writing tests against already existing code, right?

Thank you, Jeffrey. Yep, cheers.