I guess you can hear me. Welcome to the testing outside the box track. I don't know where Sarah is, so I'm just going to get started and hope that's OK. You might recognize this image. This is actually one of four versions of Edvard Munch's The Scream. It's known as one of the greatest depictions of existential angst in art, certainly in modern art, from about 100 years ago. And it probably describes pretty well how most of us feel about documentation, because literally everyone hates documentation. All of you are probably here because you hate documentation. You hear from other developers how much they hate their documentation, as well as the documentation of the things they use every day. And the truth is, it's not even just coming from your average everyday developer. It's coming from titans of industry. It's coming from Kent Beck. This is an interview I heard, or rather read, not too long ago, where he described something we see all the time: projects start with ambitious ideals, "we're going to do it right," and then real life hits, business hits, and it just doesn't happen. And he said, let's learn from the experiences we have as we do our jobs, and let's use that to iterate. So maybe people just don't maintain detailed documentation because it isn't actually a good idea. And he has this just delicious way of expressing it, where he says: if it hurts running your head into a brick wall over and over, stop running your head into a brick wall. And I think that's probably the feeling most of us have about documentation: it just hurts so much. It hurts to create, it hurts to maintain. And the truth is, it's not just Kent Beck. It's also all of the other signatories of the Agile Manifesto, who explicitly wrote, as one of its four core values: working software over comprehensive documentation. They set up a dialectical tension between the two. You can have your working software.
You can have your comprehensive documentation. But you can't have both. No way. Well, I'm apparently very, very greedy. I want both: working software and comprehensive documentation. I don't think that's unrealistic, despite the collective wisdom of whoever was at Snowbird in 2001 when they wrote that manifesto. And I think there are three reasons why I'm not crazy, why it's not crazy to say: why don't we have both? And maybe people will celebrate you too, if you go about wanting both the right way. Number one: I don't think the kind of documentation those sources are talking about is the same kind of documentation we're going to talk about today. Today I'm talking mainly about API documentation, specifically for JSON APIs. The ideas are applicable to a lot of other kinds of tools, but we're going to focus mainly on APIs, so just throwing that out there. What we're talking about is usage documentation. That's the thing you give to your users so they're able to use the thing you gave them. If they don't have that, they basically don't have working software, because they can't use it. The kind of documentation that's discouraged by these titans of industry is more about implementation: it's the UML diagrams, it's the big upfront design, it's the things where you're building the system twice, once in theory and once in practice. And they said, let's cut out that waste, let's just build the system once. But the thing you're giving your users is critical, because without it, they don't have anything. But that's just my own thoughts. You don't have to listen to me. You can go to the Agile Doctor. I don't know who this is.
Someone who goes by the moniker Agile Doctor, and registered the domain agile-doctor.com, said it would have been better if the Agile Manifesto had read not "working software over comprehensive documentation" but "working software over comprehensive requirements and design documentation." That's the thing everyone was railing against. So the same way code comments have, for the most part, gone by the wayside in favor of self-documenting code, upfront design has been replaced with just-in-time design. We've gotten rid of that kind of documentation. But that's not what we're going to talk about today. Reason number two: even if you don't buy what I just said, documentation isn't the same kind of burden, or at least doesn't have to be, when you stop thinking of it as a wall standing in your way, an extra box to check every time you launch a new feature or change anything. What if it was the tool? What if it's not a brick wall you run into, but a vehicle that helps you move forward faster and get to a better place? That's the kind of workflow I'm going to describe today. And the last thing is: if you're here at this talk, even if you don't buy anything I've said up until now, you probably need documentation anyway. So hopefully you'll join me for the ride and we'll talk about how you can have a better experience with your docs. Now, if we're going to solve this problem, we have to start by understanding it, and we have to ask: why? Why is it so difficult to produce accurate documentation? I'm going to go through a few different ideas I've heard over the years, things people say to try to explain why documentation isn't as good as it could be on their projects, and we'll look at them one by one and see what we can come up with. The first thing, of course, that we turn to is human error.
Human error, by the way, is never the problem. By which I mean it's always the problem, but it's not a solvable problem, because we will always be humans, and so we will always make errors. If you blame a problem on human error, you can't solve it. We have to blame it on something we actually can solve. We will always be forgetful. And when we take the approach of "let's just get better, we're going to decide to do it," it works about as well as the average New Year's resolution. Raise your hand if you've ever actually gone through and stuck with a New Year's resolution. Okay, like a couple. Raise your hand if you've not gone through with a New Year's resolution. Right, a whole lot more hands. That's about how well this works. Okay, let's find something else, where we can actually start solving the problem. Next idea: our API is changing all the time. We're always adding new things, taking things away, changing what's there. How do we keep up? It's a whole lot of churn. And this really starts to get at some of the issue, but it has this uncomfortable thing for me, where we're again pitting change in your API against documentation: either your API remains the same forever, or your documentation is going to be bad. I'd like to have both, again. We're going to have to figure out a way for documentation not to be something that blocks change, but instead a neutral, or ideally a positive, force toward helping us make the changes we need to make in our APIs. The next one is more of a psychological concern. Updating documentation is very mechanical. It's very straightforward; there's a clear right answer and a clear wrong answer. And that's annoying. We're creative people. We like to think when we do our work, and we sort of turn off our brains when we open up the documentation file to make changes.
Ideally, if we want documentation that's going to work for us, we need to go the opposite way and ask: how can we make documentation an actually creative process? Here's a thing I hear sometimes, not so much from developers but maybe from management, or in talks or blog posts in the community. Rather than blaming human error, we blame human malice: you just don't care about your users, and if you cared about them, obviously you would write fantastic documentation. I just don't think that's true. I think it's really hard to write accurate documentation, and it's not helpful, and not correct, to blame developers for the problem. What's probably at the root of it is complexity. We build extremely complex systems, with a lot of moving pieces. Since the 1950s, the generally accepted number for how many things we can keep in our heads at once has been seven, plus or minus two. Try to find a single endpoint in your API that has fewer than that many moving parts. You probably won't find one. You have all the various little details of the inputs: the request method, the path and everything that could be in the path, the query params, and of course all the different little bits of the response. Of course you're going to make mistakes. We can't hold complex systems in our heads, and we have to just stop trying to. The thing that's not going to work is guilt. I'm kind of tired of hearing sources in the community trying to guilt developers into better documentation. Let's stop talking about the problem and start talking about solutions, and the way we're actually going to solve this, realistically, is by making it easier. We create the tools that allow people to produce better documentation. So I'm going to talk about a system today that will actually help us do that. So what would that system even look like?
The guiding principle has to be: let's not have to hold so much in our heads, because the more we hold in our heads, the worse it's going to end up. So if we take that idea and push it to its limit, what's the minimum amount we might have to remember? Zero. What if we didn't actually have to remember anything when we were writing our documentation? Well, it turns out there's this one weird trick (other developers hate him): you don't need to remember anything if you just write your documentation first. Now, I realize I'm playing a bit of a game here, because you're right: fine, I don't need to remember anything when I'm writing my documentation, but now I have to remember my documentation when I'm writing my code. And that's true. But think about how flipping the script changes the question, because now we're not talking about how to fix documentation anymore; we're talking about how to fix code. And that's important for two reasons. One is that that's how our users think of our software, frankly. They read our documentation; that, apparently, is what the software is. If the software doesn't do what the documentation says, the software is broken. The second reason is that we're software developers. We have tools we've developed over the years to help us write accurate code and identify regressions when they pop up. This community in particular has pushed one such tool ahead over the years, has kind of raised that flag. It's the name of this track. Of course, I'm talking about testing. So if we have a test that fails until the code matches the documentation, we can actually write perfect documentation. What we're going to go through in the rest of this talk is the idea of documentation driven development. This is a term you'll hear out there on the internet, and pretty much everyone is lying to you about it, because when they talk about documentation driven development, what they really mean is documentation first development. And that's good, as we've mentioned.
That's valuable. It helps you think more like your users. But it's not actually having your documentation drive your development, so it doesn't meet the literal definition of the term. And of course, it's also not as useful. So we're going to talk about what happens when you create and when you update your code. We're not going to talk about deleting, because that's a little more intuitive; you can probably figure that out on your own. For a new endpoint, here's the process you follow. The first thing you do is document the endpoint. As soon as you document that endpoint, you need something driving you from there all the way to completion of your code. So a test pops up and says: hey, you no longer have a fully tested documentation suite. Once you have that failing test, you say, okay, great, now I have to write a test that actually tests that particular endpoint. And, as with any kind of [blank]-driven development, you write just enough code to make the test pass. Now, that's just one line, but that line includes your entire current development process. Whether you're doing inside-out, outside-in, whatever approach you use to write your code, it's all in that one line. But everything is driven by that initial failing test, and you know your coding is not complete as long as the test you've written for that endpoint is failing. It's similar for a change to an endpoint: first you update the documentation; immediately, one or more tests fail because of the change, because things are no longer in sync; and you write just enough code to make the tests pass. Okay, so that's the flow we're going to go through. I just want to contrast it with what most of us do, which I would describe as user driven documentation updates. Here's how it works, and you're probably all doing this, so you'll recognize it. For a new endpoint: implement it, document it.
I mean, you'll probably document it (you might forget, but probably), and then a user files a bug report and says: hey, your API is broken. So you say, okay, great, I guess we have to update the docs. It's a similar process for a change to an endpoint. You update the endpoint, but because it's an update, you're not in that same mentality, so you'll probably forget to update the documentation, or you might not even notice that something changed externally. And then, again, a user will say, hey, your API is broken, and you'll go update your docs. We don't like that. It causes a lot of pain. There are a lot of bad feelings throughout that whole process, and ideally we'd like to escape it and come to a place that's a lot healthier, both for us as developers and for the users of our APIs. Before we actually talk about documentation driven development, I should probably say hi and introduce myself, because I haven't done that yet. So hi, everybody. My name is Ariel. I'm amkaplan on Twitter and GitHub; that's pretty much everywhere that matters on the internet these days. amkaplan.ninja, that's my site. I actually have a really long blog post there. We're going to go through high-level details about these tools today, but if you want a whole lot more information, some gotchas, and some pro tips, check out my site. I have a very long blog post on the topic that'll help you out. It's also probably easier to share that with your coworkers than to sit them down for a 40-minute talk, because they probably won't sit through it. I work at Vitals, so this whole row of people here, pretty much the whole row, is my coworkers. We work in the healthcare space, creating transparency around healthcare data, so we empower healthcare consumers, i.e. all of you, to make better decisions about your healthcare through information on the cost and quality of your healthcare providers.
So if that sounds cool, or you liked the content of this talk and the other two talks (one on accessibility by Liz, and Gretchen right here gave a talk about our high school interns program), and you might want to work with us, let me know. I am DM-able. That also applies to you over there, watching the recording later on. I run this thing called the Dev Empathy Book Club, which you may have heard of, or you may have seen the stickers; I've kind of littered the whole place with them. Essentially, the idea is to give people something concrete to do about developing soft skills as they relate to a software development environment, and in general to be better people. We do one book every two months or so. We have panel discussions, we have an open Slack channel, we have dedicated Slack chats once a month. You can check it out at devempathybook.club. I would love to see you all there. Okay, one last thing. I don't like addressing hate on Ruby, generally, and this is probably the last time I'll ever do that in a talk, but also the first time, and I kind of feel compelled to. There's this blog by someone who writes PHP and, I guess, is a murderer? It's called, I want to say, Killer PHP. The post is titled "Why I Don't Believe in Ruby and You Shouldn't Either," and there was a line in it I found particularly delightful, in a sort of ironic way. He says the only thing holding Ruby together was the hipster coder community of 20-something year old nerds who are now 30-something nerds. Well, for the next few months, and this is the last RailsConf where I can say this, I'm still a 20-something nerd. All kidding aside, this is an incredibly diverse community, certainly in comparison to tech as a whole.
I've worked on a team with people who have fewer years of development experience than I have fingers on one hand, and people who have been coding since the age of punch cards, and everyone's writing Ruby. That's kind of representative of the community as a whole, across many axes, and it's really something special to be part of this community. So just ignore the hate, which I failed to do today. Let's get down to business: how do we get started with documentation driven development? And of course, as with every instruction manual, I'm going to start with step two. You probably think of this as step one, but you'll see why I have a different step one later. Step two is to create documentation that computers can read. It should also be documentation that human beings can read; the thing I'm trying to make clear is that it really has to be for both. There are many tools to do this, multiple tools for each of the things I'm going to advocate today, so I'm just giving you enough information about one set of tools to be dangerous, and you can go out, examine the market, and choose for yourself. Swagger, or as it's now been rebranded, OpenAPI (but people still call it Swagger, so I'm just going to call it Swagger), is a specification for writing JSON specifications for JSON APIs. Bottom line, what all that means is that you write a bunch of JSON (you actually generally write it as YAML, which is just easier to edit and read) that describes the structure of your APIs, with a lot of places to insert things that explain to other human beings what an endpoint is doing, and then you get beautiful documentation that looks kind of like this. This is the Swagger Petstore. You can see it at petstore.swagger.io, I believe, and get a sense of what Swagger documentation looks like.
So yeah, I'm going to give you just enough information about what Swagger docs look like to understand the documentation testing part. This is definitely not going to be a full walkthrough; we could be talking until next RailsConf and still not be done. It is a huge specification. So, just enough Swagger to understand the overall idea of how Swagger thinks of your APIs. At the top level, you have all your routes. You can see that /{id}, the ID in curly braces; that's kind of like :id in Rails, it just means a dynamic segment in your URL. So you have all your routes at the top level. Beneath that, you have the various HTTP request methods you might send to a route, so you can think of it as: you have your collection routes and your individual routes, and what do you do with regard to each one. And then there are all the response codes each one could output. That's how things are organized, from the top down. Let's take just one example, and for a little bit of context, the example we're going to use is about packages. I don't mean software packages; I'm thinking of physical packages that you might send in the mail, something like that. Okay, so when you POST a new package, you're trying to create a new package, and we have to tell the user: when you do that, what information do you need to submit? What's the input to the API, so that it knows exactly what to do? Then, if the request succeeds, I get a 201 back, and I have to know what the API is going to tell me when the request succeeds. And finally, if it fails (and there could be a number of failures; in this case we're always going to use endpoints with just one failure, but there could be a whole bunch), what will the API tell me, so I know how to move forward? All right, so here's how that looks in code.
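Since the slides themselves aren't reproduced in this transcript, here's a hedged sketch, in Swagger 2.0 YAML, of roughly what the next few slides describe. The field names and descriptions are taken from the talk, but the exact slide content may have differed:

```yaml
paths:
  /packages:
    post:
      summary: Creates a new package
      parameters:
        - name: package
          in: body
          required: true
          description: Package to insert into the system
          schema:
            $ref: '#/definitions/Package'
      responses:
        '201':
          description: Package successfully created
          schema:
            $ref: '#/definitions/Package'
        '422':
          description: Invalid package
          schema:
            $ref: '#/definitions/Error'
definitions:
  Package:
    type: object
    required: [destination_id, length, width, height]  # note: weight is not required
    properties:
      destination_id:
        type: integer
        format: int64
        description: Canonical ID of the package destination
      length:
        type: number
        format: float
      width:
        type: number
        format: float
      height:
        type: number
        format: float
      weight:
        type: number
        format: float
```

Pasting something along these lines into the Swagger editor is what generates the rendered documentation described in the talk.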
This is what a parameters object looks like; hopefully it's visible on both of the monitors, since it was a little bit cut off before. In this case we have one single parameter, in the body. You can see that it has required: true, because this is a required parameter; you can't submit a package without actually saying what it is. There's a human-readable description, "package to insert into the system," and then schema is where we actually say, in the case of a JSON object, what it's going to contain. But we haven't actually said what it is, because we have this $ref thing here, and what $ref does is basically say: look somewhere else in the documentation. In this case, in the definitions section, we have a package model that says what the package actually looks like. Again, on the bottom you can see it in JSON: it has a destination ID, length, width, height, and weight. So somewhere else in our docs, referenced by that $ref, we have our package model. There we list the required attributes, and you'll note that weight is not required; it's not like you just require all of them, you require only the things that are actually required. Then we have our various properties, and each property has a name, and then: what's the type, what's the format, and what's the description. Type and format are more for computers, but also for you to read and understand. So destination ID, which is the canonical ID of the package destination, has to be an integer in the format of a 64-bit int. Length, width, height, and weight are all, in this case, numbers, and they're all floats. Pretty straightforward; it matches the stuff we have on the left. The last thing we need is to explain the responses: what are the possible outputs of the API? In the 201 status case, the package was successfully created, and again we'll just get a package model back. In the case of an invalid package, it's a 422, and we have an error model defined somewhere else that will tell you what to expect in case of an error. Again, I'm trying to minimize how much code I throw up here. So, again, our three questions: what do I put into the API? That's our parameters. What happens if it goes well? That's our 201 response. And if it goes badly, we have our 422 response defined over there. Now let's scrunch this up a little bit and nest it under the route and the request method. At the top level we have our route, /packages; below that we have post, which is the request method, plus a little bit of general information about it; and then again we have our parameters and our responses, exactly matching the format we described earlier. This is the Swagger Editor. I literally just copied and pasted the stuff I wrote on these slides right into here, and you can see how the right side immediately reflects it. This is editor.swagger.io, if you're interested in trying it out, and it immediately generates the documentation you would see in Swagger UI. In this case we just have one endpoint; if we open up that green line on the right, the POST to /packages, we'll see something like this. Right on top, there's a really cool button to try it out: you just fill in the parameters (it gives you a skeleton to start with) and you can actually play around with the API in your browser, which is really nice for someone developing against an API like this. Then you have your parameters and their descriptions. It fills in a bunch of zeros; you can give Swagger a more realistic example to use if you so choose, and it's probably a good idea. And then on the right side you see the responses: your 201 on top, with a sample of what the response might look like, and on the bottom your 422 with an error string. Of course, there's also another section on the bottom of the documentation, called the models. It's actually also inline; there are ways to look at it while
you're still in the individual requests. So we see our package model with all the information we wrote about it: the destination ID that's a 64-bit integer, all the floating point numbers, et cetera, and of course their text descriptions. One cool thing is that in a lot of these places with free text, you can actually write Markdown, and it'll be rendered, formatted, on the screen. So that's Swagger. Again, just enough to be dangerous, probably not enough to actually start using it immediately, but there's plenty of documentation out there about Swagger (which is a little meta: documentation about documentation). Go check it out. Step three, once we've set up our computer-readable documentation in step two, is to test it, and for that, the tool I've chosen is Apivore. It's a cute name. There's an actual word, apivore: it means a creature that eats bees. Kind of like a herbivore eats plants and a carnivore eats meat, an apivore eats bees. But maybe it's pronounced API-vore, because the idea is it eats APIs: it digests them and understands them. It's pretty neat. Here's how you set it up; it's pretty straightforward. We include the apivore gem: gem 'apivore' in your Gemfile, bundle install, all the usual details. That gives you this Apivore::SwaggerChecker class. You have to tell it how to find your documentation, which means you will need to set up an endpoint in your API that serves the documentation in JSON format. You write one test to assert that all endpoint statuses are tested, which of course they're not, which is kind of the point; that test catches all the things that are not tested yet. And then, for every endpoint and status combination, you write a test. Like any RSpec test, you set up context, and then you tell your SwaggerChecker what request you're going to make, what params to submit, and what status code to expect. We're going to go through all of this in code; I just wanted to give you the overview first. So here's the initial boilerplate. I'm just going to point at a few things. On top, again, we're telling the SwaggerChecker where the documentation is available, so it can read it, parse it, and figure out what the API is supposed to look like. We have that one test, expect(subject).to validate_all_paths, which basically says: okay, run all the other specs, and once all that is done, run this one and assert that I've now tested everything. Which, as you can see from the comment ("tests go here"), has not happened yet, and that's fine. In order for this to work, you have to use order: :defined, which is RSpec's way of saying run these tests in order. You can of course randomize the order of all your other tests, but you have to make sure that that one test, the spec of specs, goes last. If you point this at the API we just created, with that documentation, we'll immediately see we have a task list. That's really useful for backfilling an API, because it'll tell you: oh, you have a POST request to /packages; you have to test both the 201 and the 422 response codes. And you can just work your way down the list, writing tests as you go. So let's actually do that. Let's do the happy path test first, the 201. We set up valid params. Apivore has this _data magic param (there are a few of those: for headers, for query params, different ways of passing different things), so you fill in a reasonable set of parameters, and then you use RSpec's implicit subject. When I say it is_expected.to validate, that's the same as expect(subject).to validate, but you don't have to write any kind of description string, so it lets you write a really, really condensed test. It's going to validate a POST request to the /packages route, expect a 201 back, and here are the params you're going to use to generate
that 201. And we are, of course, implicitly asserting that the stuff we get back matches our documentation. So we have one green, which means we apparently at least implemented it correctly, and we only have one test left to write. So let's do the failure path test. All we're going to do is flip the length to a negative number, and we assert that now, when we make that POST request to /packages, we get a 422 back; again, use these params, and we assume it matches the docs. Apparently we did a good job, because everything went green. When you actually do this, you will not get to green nearly this quickly, because you will probably have a whole bunch of mistakes in your documentation. That's the point of the tests: to catch all those mistakes. Okay, so now let's talk about creating a new endpoint; again, creates and updates are the two major things we're covering. In this case, the new endpoint allows you to update a package that already exists: a PATCH request to /packages/{id}, with the ID as a dynamic segment in curly braces. We have the summary information on top; we have our parameters, which in this case reference a package update model (you'll see why in one sec); and then we have our two response codes, in this case a 200 and a 422. Our package update model, you'll notice, is the same as our package model except for one thing: there's no required line on top, because when you're doing an update, you don't have to update any particular property; you can update whichever ones you want. That kind of makes sense. And you can DRY up a whole lot of this using YAML; YAML has anchors, that thing with the ampersands, so you can write all of this just once. If you're worried you're going to have to write a whole lot of duplicated code, that's actually not true; it's not a big deal. Immediately when we write this, our test suite is going to complain: wait a minute, we have these two PATCH requests, a successful and an unsuccessful one, that have not been tested. So again, the documentation is driving us into the development. Now we first have to write some documentation tests. We set up a package in our database (we create a package with valid params), and now we have that accessible in our tests; we're going to use that package's ID. Anything that's a top-level param in Apivore refers to something that goes right into the path, so when you give it id, it says: oh, there's this {id} in your path, I'll just substitute that in. And of course it then knows to look for /packages/{id} in the documentation to match things up. We've got our 200 on top; we're updating length in both cases, but in one case it's a valid length, 8.3, so we expect a 200, and on the bottom we're using a negative length, which doesn't make any sense for a package, so we expect a 422. And now you actually start your normal development workflow. I can't tell you exactly what that looks like, but you're going to have some kind of failure driving you through it. It could be that there's no route for this; it could be that there's no controller action, depending on what you've implemented so far. You're going to have some kind of error, and rather than teaching you all how to build a Rails app, I've decided to just fast-forward and jump to the green. Eventually, once you finish your whole development workflow, you'll get back to green, and that's how you know you're ready to do something else. How about an update? We have our package model from before; everything is the same, and we're going to add one field: volume, length times width times height. We decided to calculate that on the back end, so there is now a volume being returned from the API, and we're going to add it to the required list up top. So now we're expecting that
Every single endpoint that returns a package is now expected to have this extra volume property. As soon as you do this, you run your test suite and you get a whole bunch of failures that look like this: anything that uses the package model now expects something else. In this case you can see the error says the response did not contain a required property of volume. As you're reading this, you might note that we have timestamps in there, which is something we didn't even document, didn't even think about. That's going to happen a lot when you run these tests: you'll notice, oh, there are parts of this API I didn't think about; should they be there or not?

Okay, so that's pretty much the workflow of documentation-driven development with these tools. There are a couple of caveats about the tools, specifically about Apivore. I've had this slide up for a while, and I don't know how many of you noticed: Apivore hasn't been updated in a little over a year, which, for me as a daily user, is actually pretty frustrating. It's still on Swagger version 2; Swagger is now on version 3. Another thing I want to focus on: its description says you can test your query parameters, so you can test the inputs to your API. Just kidding, you can't. I submitted a GitHub issue a while back asking them to at least take that out of the description if they're not going to implement it; that hasn't happened, so I don't know what the deal is. Bottom line: it's not that well maintained, and it's not a perfect tool, but it's still a massive improvement, and that's what I want to convince you of in the couple of minutes that remain.

Okay, which brings me, finally, to step one: convincing your manager, or whoever decides how you use your time (hopefully you have some autonomy over that, but whoever is making those decisions), or convincing yourself, that you need documentation testing. I'd like to travel back in time and talk about how my team and I got started with Swagger and Apivore. Back in October
of 2015, like I said, we're in the healthcare space, and my team maintains a system that deals with patient ratings and reviews of doctors and facilities: PRS for short, the patient review system. It was rewritten very quickly from PHP and Laravel over to Ruby and Rails. I wasn't part of that; I joined the team in May of 2016, so I kind of inherited a lot of the confusion that was already there. A month later we were getting close to launching with our first client, so we decided to use Swagger: we had to give them some kind of documentation, Swagger looked cool, and it took us about a week to get it set up. In July we decided, yeah, let's try this documentation testing thing. So I started writing these Apivore documentation tests, and instead of one week, it took until August 18th to finish writing them. Remember how I showed you that these were tiny, very simple tests? So what took so long? Just to be clear, this was 29 days and 18 pull requests; it was actually a whole lot of work. But the stat I think is really important is this: we added 1,800 lines of code, which is way more than those tiny tests account for (that should ring some alarm bells in your head), and we deleted 2,700 lines of code through the process of documentation testing. Based on the workflow I showed, that doesn't make a whole lot of sense. What took so long? What was going on that whole time? The answer is we were fixing a lot of broken things. Documentation mistakes are what you'd expect doc testing to turn up, and we found a lot of those, but that actually turned out not to be the biggest use of our time. We had an API that was stuffed with a whole bunch of things it didn't really need. Like I mentioned before on that slide, we had timestamps in there; should we even have those? Let's get rid of them if they're not necessary. There are some models that never change, so why return useless information,
like a timestamp that will never change? We had status codes that were just not the way status codes are supposed to work: a create action should always return a 201, but we were returning 200 in some cases. If you sent a DELETE request to a non-existent resource, it would 404; according to everything I found on the internet, it should be either a 200 or a 410 (depends who you ask), but for sure not a 404. We had routes that were just overcomplicated. This came about because, again, it's a system for rating doctors, so you had doctors, doctor ID, reviews, review ID. Well, what if that review ID is a valid ID, but not for that doctor? What status should that return? And the answer was: why don't we just make the route more shallow, just reviews and the ID, so we don't have to answer that question. That helped us get rid of a certain amount of code. We were a little bit inconsistent: when you write a review, you're submitting a whole bunch of answers to the API, but then we had a whole separate set of routes for editing individual answers. So are answers attributes of a review, or are they their own independent model? We were being inconsistent about it, and we said let's just get rid of all those answer routes; you can only edit an answer as part of a review. We were mixing database concerns into the external domain model: we have this idea that you can flag a review, you can like it, you can dislike it. We happened to implement those under the hood as a single concept in the database, but why should the user have to deal with that? The user thinks of them as three distinct actions, so they should be three distinct sets of endpoints, and we split them out. This is where it starts to get a little bit nasty: you were able to review... nothing. You didn't have to submit the ID of the thing you were reviewing. What do you even do with that? What does that even mean? We started requiring that field. I don't even know how that happened, honestly.
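The status-code cleanup described above boils down to a small convention map. This is just an illustrative plain-Ruby sketch, not the team's actual code, and the 410-versus-200 choice for deleting a missing resource is, as noted, debatable:

```ruby
# Illustrative map of the status-code conventions described in the talk.
EXPECTED_STATUS = {
  create_success:     201,  # a successful create is a 201, never a 200
  update_success:     200,
  validation_failure: 422,  # e.g. a negative length, or missing required answers
  delete_missing:     410,  # or 200, depending who you ask; definitely not 404
  not_found:          404,  # a genuinely unknown resource
}.freeze

# Handy in request specs: look up the code an outcome is supposed to produce.
def expected_status(outcome)
  EXPECTED_STATUS.fetch(outcome)
end

expected_status(:create_success)  # => 201
```

Writing the convention down once, in one place, is what lets documentation tests catch every endpoint that drifts from it.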
We had this little problem where we're a multi-tenant system, and if you requested information, you could get it for all the other clients; basically any review ID, for whichever client, was available to you. That was pretty bad, so we started properly scoping our permissions, because some things were returning a 200 instead of a 404. There was an ActiveModel validation that just wasn't working: you have to answer a certain set of questions just to submit a review, except the validation was failing, so when I tried submitting without those required answers, it was not a 422, it was a 201. That wasn't right, so we fixed it. And finally, there was some system-level data that was accessible to all users but should definitely not be editable by just any user who feels like it. That was happening too, so we got rid of it, and that was a whole other set of endpoints we were able to delete. That's a lot of where that negative code came from.

So if you have to convince your manager, "hey, I want to do documentation testing because it's cool" is not going to work. If you say, "hey, here's a bunch of things I could save you from, like client data exposure and insufficient limits on permissions," that might actually get a response of "that sounds great, let's do it." And I want to emphasize: this wasn't a team of developers who didn't know what we were doing. These were some pretty senior people who just made embarrassing mistakes, because embarrassing mistakes happen. And it was actually really good code: well organized, SOLID and DRY and all the acronyms you could think of, quite pleasant to work with, with really good test coverage. But we didn't always think about it from a design perspective, because we weren't thinking about it from the outside. When we started doing documentation testing, that forced us to start thinking about the API as a documentation user, otherwise known as a user. What's cool now is that, as we
move forward, when we're designing new things, we start by defining the impact on the users. So the takeaway I hope you can all go home with is this: you can build a beautiful castle with marvelous, wonderful things in all the rooms, but it's not worth anything if you don't give people the key, because they're just flying blind. I love this quote by Zach Supalla: from the perspective of a user, if a feature is not documented, then it doesn't exist, and if a feature is documented incorrectly, then it's broken. Not the documentation is broken; the feature itself is broken. The reason this is the case is that users just see the documentation. That is their primary source of truth; they have nothing else to go by. It is their guide to making sense of your API. So rather than trying to fight it, rather than being annoyed by "oh, we have to update the documentation for the users again," let's embrace it. Let's start developing our APIs like we're users. Let's start seeing our documentation the way our users see it: as basically our product, with the entire code base, from top to bottom, just an implementation detail. And maybe, just maybe, we'll build APIs that are easier to work with, better for us both to use and to document, and we won't have all the frustration we've gotten so used to. Thanks.