It's 2:16, so we'll get started. Hi everyone, I'm Kath Tornwall, and I'm going to be talking about painless test-driven development with Elixir and Phoenix today. Right now I'm working at Everything But The House; I've been there for about a month. You can find me on the internet as kTornwall pretty much everywhere.

How many people do not currently use test-driven development in their day-to-day work, or maybe aren't familiar with what it is? Okay, quite a few people, so I want to do a quick introduction, just in case. With test-driven development, we start the development of our features by writing a test, and then the key piece is that you write the minimum amount of code to make that test pass. That's really the secret sauce, because what ends up happening is you're not writing a bunch of code to handle extra cases, or code that might never be hit by tests, which could cause unexpected consequences. Another amazing feature of test-driven development is that you inherently get an automated test suite, and having worked on several very large projects, this is critical. I started out as a C++ developer, and we did not have many automated tests. We needed a QA ratio of maybe one QA person to two or three developers, so it gets expensive very fast, and we didn't have that tight feedback loop you get with test-driven development, which makes life so much better. I was actually really surprised; I only started doing test-driven development about a year and a half ago, when it was first introduced to me, and I immediately fell in love with it.

If the words didn't make sense, I made this diagram for a class I was doing with Women Who Code. This is my day-to-day process of developing in Elixir and Phoenix, or Ruby on Rails. I start by writing a failing feature or acceptance test.
That means running a test in your browser and having the computer click through the elements for you, checking that everything looks okay. Step two is writing the minimal amount of code to make that feature test pass. Number three is checking my work with unit tests: if I'm writing some kind of service or a piece of functionality that's a little complicated, I like to write tests that don't interact with the browser, and use those to check my work and keep me on the right track. And then once the feature test passes, that's a really good time to refactor your code; it lets you clean up everything you feel like you did wrong before. Me and one of my pairs at my last job at a consultancy would definitely write some pretty interesting code to start out with, and then we'd come back and make it something awesome and really easy to read.

This quote kind of sums up test-driven development for me, and it backs this entire story of painless test-driven development: "I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence." That's Kent Beck, who's kind of a big deal; he's one of the main proponents behind test-driven development and agile development, so he knows what he's talking about. So that's why I love test-driven development. I make sure to use it all the time, because honestly I don't get anything done without it. I can get really distracted, and it's something that just keeps you moving forward, especially after you spend time looking at cat gifs and don't remember what you were doing.
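That red-green-refactor loop can be sketched in a single ExUnit script. This is just an illustration, not code from the talk's demo app; `Cart` and `CartTest` are hypothetical names, and the whole thing runs with plain `elixir`:

```elixir
# A tiny red-green loop in one .exs script. Run with: elixir cart_test.exs
ExUnit.start(autorun: false)

defmodule Cart do
  # Step 2: the minimum amount of code to make the test below pass.
  def total(items), do: items |> Enum.map(& &1.price) |> Enum.sum()
end

defmodule CartTest do
  use ExUnit.Case

  # Step 1: this test was written first and watched fail.
  test "totals the item prices" do
    assert Cart.total([%{price: 3}, %{price: 4}]) == 7
  end
end

# Step 4 happens once this is green: refactor with the test as a safety net.
results = ExUnit.run()
```

The point is the order: the test existed before `Cart.total/1` did, and the function is only as clever as the test forced it to be.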
But a lot of people don't like writing tests, and that's because it's super painful if you're not coming at it the right way. So what are some of the issues people run into? Especially with Ruby, but even with Elixir now, there are so many tools. I took this list from Awesome Elixir, so that's just what's on that repo; it doesn't fit on the slide, and you probably can't read it from the back of the room because the text is so tiny. It's a nightmare sometimes.

It's also really hard to find test files. At my current job I'm working in a monolith app, and there are a ton of folders; being new to the project, I can't find anything. It's really hard to navigate to find the file, and then once you're in the file, where is my test? Some of these are hundreds and hundreds of lines long, and that makes it not developer friendly. Acceptance tests, the ones that interact with the DOM, often require knowledge of the DOM to understand what's going on; I've got an example of that that we'll come back to in a second. And I don't want to debug my tests. This was probably one of my biggest gripes getting started, because I didn't understand how to write good assertion messages, and I would need to go debugging into the test to figure out why it was breaking. That's frustrating, because then I have to debug my code after that, and no one's got time for that.

So how do we fix these problems? After writing all of these slides, I ended up at the conclusion that we want to do a couple of main things: keep it simple and make it easy to understand, and this will eliminate some major pain points in your testing practice. I do have a demo app that I made to demonstrate some of these things. You can find it on GitHub along with my slides, and it's a little application all about my favorite game, League of Legends.
I had a lot of fun writing it, and there should hopefully be some good test examples; if my presentation doesn't seem done, this is the reason why. I may have been playing a little too much this week.

The first problem I talked about was how many tools we have available to us. How do we fix that pain point? It turns out: just make sure you only include what you absolutely need when you're coming into a project. The minimum viable tool set I was able to come up with for my Phoenix projects is ExUnit, which is built into the language, and then Hound. You don't need anything else to start with; at that point you've got everything to click around and run your tests. The time to add tools is when you start feeling the pain of whatever issues you're running into. Am I duplicating too much code? Then it's time to start thinking about those kinds of solutions. One very common example, and I do end up pulling this into most of my projects, is a library called ExMachina. It generates models for you, like FactoryGirl in Ruby on Rails; it's the same kind of thing. I'm also really interested in generating reasonable fake data for my tests, so it's pseudo-random, to help catch unexpected errors with things like full names, addresses, or five-digit zip codes versus nine-digit zip codes. That was a big deal one time. A tool called Faker can help generate these kinds of things in your tests: you tell it what kind of data you're expecting, and it will generate something reasonable for you, so when you watch your tests run, you still see things like John Smith as a user's full name.

Next, if you can't find your tests, we can definitely make that easier by creating a system for organizing your test files.
My favorite way to organize my tests is to match the directory structure of the files in my actual code base, so that there's a one-to-one correspondence that's super predictable, because I get really confused sometimes. Acceptance tests are kind of special: they go into their own folders, and as a personal preference I like to match my Phoenix routes, because that's my mental map of the system. But a lot of my coworkers really prefer using the actions as their file names, or basing their directory structure around the models in the system. It's just whatever works best for you and your team; either way works, you just need to come to one conclusion and not try to mix and match, otherwise you'll both be confused.

Long test files are also a big problem once you're dealing with more mature applications, but it's easy to break this down. I was working with a friend on this, trying to understand how we could do it better; as I was writing my talk proposal, he ran into this exact issue. What we came up with is that you can break up these modules: make a folder with the module's name, and then put test files inside that folder that each describe the piece of the module you're testing. That was a really smart way to break everything up, so that we could find the test we needed to work on and not have some kind of monolith test file for a really big module. As an example, in my app I have a library to interact with Riot's API for League of Legends, and in the actual code it's just one big file with get this, get that, but there are a lot of different edge cases and configuration options. So in my tests, I ended up making a test file just for getting champions, and it's a really easy and understandable way to break up your files.
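As a concrete sketch of that layout, with hypothetical names (the real demo app's modules may be named differently): one big client module in `lib/`, and its tests split under a matching folder, one file per area.

```elixir
# Hypothetical layout for a single large client module:
#
#   lib/lol/riot_api.ex                  -> Lol.RiotApi
#   test/lol/riot_api/champions_test.exs -> Lol.RiotApi.ChampionsTest
#   test/lol/riot_api/summoners_test.exs -> Lol.RiotApi.SummonersTest
#
# Both modules are inlined here so the sketch runs as one script.
ExUnit.start(autorun: false)

defmodule Lol.RiotApi do
  # Stand-in for one of the many "get this, get that" functions.
  def champions_path, do: "/api/lol/champions"
end

defmodule Lol.RiotApi.ChampionsTest do
  use ExUnit.Case

  test "builds the champions path" do
    assert Lol.RiotApi.champions_path() == "/api/lol/champions"
  end
end

results = ExUnit.run()
```

The test module's name mirrors its folder, so finding the tests for one piece of a big module is just a matter of following the directory structure.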
If you feel like you're spending too much time building up your test data, we can make it simpler by using libraries for factories, plus helpers. Like I said before, ExMachina is one of my go-to libraries, because we used FactoryGirl all the time in Ruby on Rails, and it works really well with or without Ecto. A simple function call, insert, puts the model straight into the database for you and handles all of that without any extra lines of code, but it also gives you the option to just build the object without interacting with Ecto, which can be really important: if you don't need to interact with the database, don't, because it will make your tests slower. So it's a really convenient way of defining one function that does a lot of different things. Another great thing is that factories can act as documentation for what your models might look like. If you've got a property called name, what is it? Is it a full name? A first name? A screen name? No one knows, but your factory will probably tell you, if you're using reasonable test data.

As an example of my factories: I keep them in my support directory under test, and I have one named for each model. It's simple; we return a plain map, using a special function name that ExMachina is looking for, and I can say this is champion one, or something. I'm not using Faker in this one; I was playing around to see how sequences worked out in my tests, maybe to reduce the number of dependencies I had, and I can generate unique names that way. I feel like this is a really easy way to understand what kind of data you're looking for in your app, especially with more complicated applications that have a large domain.

Helpers are absolutely awesome. When there are complex scenarios that you need to configure, it's absolutely vital to move that logic into its own little area. For acceptance tests in the browser, the one I need in pretty much every single application I've written is sign-in.
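A factory along the lines just described might look like this. This is a pure-Elixir stand-in, not ExMachina itself: all names are hypothetical, `System.unique_integer/1` approximates ExMachina's `sequence/2`, and a real `insert/2` would additionally hand the data to Ecto's Repo.

```elixir
# Pure-Elixir sketch of an ExMachina-style factory (no dependency needed).
defmodule Factory do
  # Like ExMachina's champion_factory/0 naming convention: return plain data.
  def champion_factory do
    %{
      # Unique integer stands in for ExMachina's sequence/2, so names stay unique.
      name: "Champion #{System.unique_integer([:positive, :monotonic])}",
      title: "the Example"
    }
  end

  # Like ExMachina's build/2: start from the factory data, merge in overrides.
  def build(:champion, attrs \\ []) do
    Map.merge(champion_factory(), Map.new(attrs))
  end
end
```

So `build(:champion)` documents what a champion looks like, while `build(:champion, name: "Annie")` pins down only what one particular test cares about.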
You want to be able to actually click through the browser, sign in your user, and authenticate, but you don't want to write that in every test, because pretty much all your tests are going to run with a logged-in user, so that helper runs pretty much all the time. A helper can also be local to one test: if you want to build a specific character in League of Legends to test in your browser, you might name it build_annie. An example from my friend's application: he was working on a football app, dealing with games actually in progress, and if you can imagine the data model for that, it's super complex. You need to know each play, the game, what teams are there, what's the record, this and that, and he needed to run several acceptance tests on this model. It would be very strange to write a factory for that, so he has a helper that does it for him.

So, acceptance tests. I don't know how many of you have written a lot of acceptance tests, but they can be really hard to understand. In the code base I'm working in right now, it's just a bunch of CSS selectors in the test; I don't get what's going on, and I have to look at the DOM at the same time as the test to be able to understand it. But we can make it way simpler. To start with, I like to write the test so that it tells your user's story the way they would, and that's why I absolutely love writing acceptance tests, which I feel is maybe not the norm. I started out really wanting to go into design, and I really love user experience; I leaned more towards programming because of my math background, but I still have a passion for that. So these tests should just tell your user's story: what are they going to do in this application? If you write them this way, a good suite of acceptance tests will document how you expect users to interact with your application, and that is amazingly useful when you're bringing new team members onto a larger app.
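Circling back to helpers for a second, the two kinds just described might be sketched like this. In a real suite `sign_in/2` would click through the login form with Hound; here the browser session is reduced to a bare map so the shape runs without a browser, and every name is hypothetical.

```elixir
# Sketch of suite-wide and scenario-specific test helpers.
defmodule AcceptanceHelpers do
  # Suite-wide helper: almost every test starts with a signed-in user.
  # Real version: navigate to the sign-in page, fill the form, click submit.
  def sign_in(session \\ %{}, user) do
    Map.put(session, :current_user, user.email)
  end

  # Local helper for one complex scenario, instead of an awkward factory.
  def build_annie do
    %{name: "Annie", role: "mage", spells: ["Disintegrate", "Summon Tibbers"]}
  end
end
```

The payoff is the same as with factories: the messy setup lives in one named place, and the test itself just says `sign_in(user)` or `build_annie()`.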
As an example of what interacting with the DOM might look like in an acceptance test, I've got searching for a summoner. In my app, we start at the main page, type in a username, and it should pull up the result. If you read this: we insert a summoner, navigate to slash, okay, home page; fill field, search input, okay; click the submit, okay, yeah, I get it; current path equals summoners, and we're looking for some element. That doesn't mean anything. Really, without looking at the DOM, how are you supposed to understand what's going on?

To make it easier to understand, there's a concept from Ruby on Rails called page objects. In Elixir we don't have objects, so: page modules, which James thankfully corrected me on several times before I submitted this presentation; otherwise, he said, people would make fun of me. Page modules encapsulate interactions with the browser. One can cover a whole page, a common form across the application like user sign-up or sign-in, or just the header, to check whether the user is logged in. This gives you great advantages: if you make changes to the DOM, the code only needs to change in one place, which makes your tests so much less brittle. It's really exciting, and I get so excited about this. Functions in a page module can be actions interacting with the DOM, or assertions, checks that inspect the DOM to see if we have the right data. Examples of action functions would be things like visit, to visit a page; view_mastery, which would click on the masteries link here, so I could just say view mastery; or fill_form, which is really great for complex forms, because that one function call is the only place you deal with all of that logic in your tests. Functions can also be assertions, and usually we end those function names with a question mark to make the distinction really clear.
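A page module along those lines might be sketched like this. The real functions would use Hound helpers (`navigate_to/1`, `fill_field/2`, `click/1`); in this sketch the session is a plain map recording the current path, so the shape runs without a browser, and the module and function names are hypothetical.

```elixir
# Sketch of the page-module pattern: actions and checks, DOM details hidden.
defmodule HomePage do
  # Action: go to the page this module encapsulates.
  def visit, do: %{path: "/"}

  # Action: the only place that knows the search form's DOM details.
  # Real version: fill_field({:css, ".search-input"}, name), then submit.
  def search_for_summoner(session, name) do
    %{session | path: "/summoners?name=#{name}"}
  end
end

defmodule SummonerPage do
  # Check: ends in a question mark to signal it inspects rather than acts.
  def current_page?(%{path: path}), do: String.starts_with?(path, "/summoners")

  # Real version would inspect the rendered element's text, not the path.
  def has_summoner?(%{path: path}, name), do: String.contains?(path, name)
end
```

With this in place the acceptance test reads like the user story: visit the home page, search for a summoner, check we landed on the summoner page.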
Something I have on pretty much every page module is current_page?, so I know I'm in the right spot, and then I might be looking for certain data, like whether I've got the right mastery on my page. I had an example, but let's just go to the code. So that was the summoners test: here's the version without page modules at the top, and this is what it looks like when you add them in. Insert summoner, same as before, but then HomePage.visit: I don't have to think about where that is anymore, it's just the home page. The home page module encapsulates searching for a summoner by just providing a function for it. Then we assert that we're in the right place, and we can just ask: does it have the summoner? We're not concerned with how the page module figures that out; we can just read it, which is about as close to natural language as programming gets, at least.

If we want to look at the page module itself, let's pull that up. We've got a simple module, plus a module acting kind of like a base class would in Ruby, holding common functionality like sign-in, and it includes the Hound browser helpers, so those are all built in already. You can see that navigating to a summoner page is actually kind of a complicated string, and this really simplifies it; it's got the same DOM interaction as before, but hidden away from the acceptance tests.

Next: sometimes tests don't tell us why they're failing, but we can totally eliminate this pain point by writing better assertions. This is something I struggled with a lot when I was starting out in test-driven development. Assertions can be more than checks; if you think through the assertions you're writing, you can make them give you very easy-to-understand errors.
When you've got a long accessor chain, maybe more applicable to Ruby, but you could do it in Elixir too, you can use intermediate assertions to tell you which object is nil, a little more easily than if it just fails somewhere in the middle of a longer query. You can also check that you're on the correct URL path before looking for an element, in case you're on the totally wrong page. That's probably the most helpful one, because acceptance tests take forever to run. An example of this is my show test: you will see this in every single acceptance test I write now. Always check that you're on the right page before you check any more details about that page.

Sometimes making meaningful assertions can seem really hard. I was struggling a lot with this current-page idea; I really liked it on a project I was working on, but every time it failed, it said: expected truthy, got false. But boy, where am I? I don't understand. After really struggling with this for a couple of months, I realized I could make a custom assertion. ExUnit provides a way to supply custom messages on failing assertions, and I can leverage this to give me the information I need to understand why my test is failing. So now I can make current page say: incorrect path, expected summoners to be this. And then I realized, oh, the query string isn't in the current path; okay, that's easy, my test is just wrong. How I ended up doing this was adding a macro to the base page module. I've got that in support, helpers, page helpers... I thought. Oh no. Never fear, I just put it in the wrong place. I put it in my acceptance test helper; that's correct. Sorry, I got totally lost; that's what search-all is for. So I've got it as a defmacro: assert_current_path does a quote, we grab the expected path, and then we assert that current_path, which is provided by Hound, equals the expected path.
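The rough shape of that macro might look like this. In a real suite `current_path/0` would be imported from Hound; here a demo module stubs it so the macro can be exercised without a browser, and the module names are hypothetical.

```elixir
# Sketch of a custom current-path assertion with a descriptive failure message.
defmodule PageHelpers do
  defmacro assert_current_path(expected) do
    quote do
      actual = current_path()
      # The second argument to assert is the custom failure message.
      assert actual == unquote(expected),
             "Incorrect path: expected #{unquote(expected)}, got #{actual}"
    end
  end
end

defmodule CurrentPathDemo do
  import ExUnit.Assertions
  import PageHelpers

  # Stand-in for Hound's current_path/0, which a real suite would import.
  def current_path, do: "/summoners?name=Teemo"

  def check, do: assert_current_path("/summoners?name=Teemo")
end
```

On failure, instead of "expected truthy, got false", you get the expected and actual paths side by side, which answers "but where am I?" immediately.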
The second argument to assert is a custom message, which I crafted to give me the information I've always wanted out of this assertion, and it's made my life so much better. I can't believe it took me this long to figure out how to fix that problem. Kevin probably hates me now, after suffering through that on a project with me for three months.

So that was how I solved a lot of the main pain points in my tests, but there's so much more you can do with Elixir, and I've got a couple of other tips and tricks. Mix test has some really awesome options; if you run mix help test, you can see all of the cool things they've been doing. My friend told me about --trace one time, which gives this really nice test output, even though it doesn't support asynchronous tests. I saw him running it and I was like, are you working on a different project? I thought you were working with Elixir. He's like, no, just look at this, it's sweet. If you run mix test --trace, it shows, line by line, which tests are currently running and how long they're taking, and I really like having all that extra information, because you can see what's taking longer than other tests and what's currently going on, instead of just dots, which always make me feel nervous. There's also a --stale option, introduced in Elixir 1.3, which runs only the test files that have changed since you last ran stale tests. It's using something called, I think, xref to determine which modules are used in a test, and it's a really awesome option, especially if you're doing test-driven development, because it's going to find interactions that maybe you weren't thinking of when you were writing your code and running your tests before. So that random test off in the corner, the one that fails right when you're like, yeah, I got it, it's done? Now it will run, too. Huge help.
I've also been using a lot of filters in my tests. Before the --stale change came in, I was using @tag :current a lot to mark which tests I was running while doing test-driven development. I haven't tested this, but I'm pretty sure --stale is not going to interact well with your feature acceptance tests in the browser, because they're not directly linked to any of those modules, so this is something that helps me really drill into what I'm currently working on. If you run mix test --only current, it drills right down, and it really helps speed up the process, instead of trying to pick out line numbers, because the line numbers can change for your tests and then it runs who knows what.

Another thing I've been doing, and this might sound weird, but hear me out: I've been limiting my use of setup blocks. Things like sign-in definitely belong there, because you're going to do that for every single test, but I've been setting up my models inside the test statements themselves, and I've found that to be a way more enjoyable experience. It's clear what the current context is, and it's easier to create only what you absolutely need for that test, as long as you keep your factories nice and clean too. Otherwise it can be really hard to figure out: okay, I've deleted this model from this test, now I need to go delete it from my setup stuff, and then get everything correct out of the context. I was struggling a lot with that; it was taking me more time to refactor my tests than I thought it should. So I started doing this, and I've actually been really happy with the practice.

Another thing which is super awesome with Elixir, and that you definitely can't do with Ruby, is asynchronous browser testing. Ecto 2 had some big changes that made it really easy to run asynchronous browser tests with Hound and Wallaby, which I think was discussed in the last hour's session.
It literally cut test time in half on the consulting project I was doing in Phoenix and Elixir, because we had some very intense acceptance tests. If you're interested in learning about it, my friend who was working on that project with me wrote a blog post; the link will be on the slides on GitHub. But I love to demo this, because it looks so awesome. What's going to happen is we spin up as many Chrome browsers as my computer has cores, and it runs tests all at once. Even though I've only got about 20, it says 23 tests, it finishes super fast compared to running without that. I think I have some API tests going that were a little slow, but it's been very exciting to see that change in particular, because I could actually run the whole test suite in my project. It was getting to the point of: we'll run the feature tests once in a while, they're just taking too long. And then we could run them all the time locally. It was literally a life changer.

Let's see, I also have things I didn't remember on the slides. If you're interested: doctests. I have no experience with them, so I didn't feel super comfortable writing about them, but they're really awesome; you can find them on elixir-lang.org, I think. What they are is, you write IEx statements in your docs, and it knows to check that the return value matches what you would get as output in IEx, and that helps keep your docs in line with your API. And I think that is it. Any questions? We've got a runner, so thank you.

I'm not too experienced with TDD, but I'm very interested in it, and one question I've always had is: what do you do after you're done developing all the features? Do you go and write even more tests to ensure you've got all the coverage, or is your testing done at that point?

Yeah, that's a really good question. I go back to that Kent Beck quote: I only want to write as many tests as make me feel confident in my code.
So I try to write those up front, because that makes me write the interface, the function calls I need to make in my code. When I write my interface first, as a consumer, it ends up coming out much nicer, and it's exactly what I need, because I wrote it as the consumer first.

Can you give us some examples of when TDD hasn't worked for you?

Yeah, that's a really interesting question. We definitely don't use TDD when we're spiking out experimental stuff: when we don't know how something works, we're not going to start writing tests, because it's such an unknown. Other than that, I can't think of anything.

Can you talk a little about what your designers give you, and how you write tests from, let's say, user stories? What do you base your acceptance tests on?

Okay, yeah. As a consultant, we were actually interacting directly with the clients and trying to understand the functionality they needed, so a lot of it was us coming up with those solutions. Now I get cards in Trello, and I still try to come up with: okay, this is what's going to solve that story for them. The test names end up being very similar to what you'd get in a Trello card, like "user signs in", and then in the test itself I pretend I'm the user going through those steps and write down the verbs of what I'm trying to do: I visit this page, I click this button, I fill in the form, I click submit, and then I check that I'm signed in. Does that answer your question at all?

So the user stories are pretty much complete when you go to write your tests, and you just mimic those as best you can?

Yeah, pretty much, and if they're not complete, I end up writing them myself. There's someone in the back; sorry to make you run.

I was very interested in diving into some of those tests.
I actually ran into some issues; it said it needed the Riot secrets, but it didn't have the structure of what it actually needed.

Oh yeah. So, in the repository, and I totally shouldn't show you this, but I will, and then reset my keys: I made a file called riot secret, and those are now my old API keys. So that's the file you need. I'll make sure I add a template to the GitHub repository so that's clear.

Perfect, thank you.

Anyone else? Well, thanks for coming, and I hope you get to use this.