So let's get started, with a little about why I decided to put this talk together in the first place. During my initial months at Postman, I was working on a module that was supposed to be really, really performant. We were constantly trying out different implementations to extract that last bit of performance out of it. We were trying good strategies, but there was one problem: tests were holding us back. Every time we tried a new strategy, I was writing and rewriting a lot of tests, then modifying them all over again because of a slight change in the code base. The tests couldn't accommodate the change; they were too rigid about the internal implementation. At one point I was spending more time fixing tests than writing actual code, which I thought was a huge red flag, and I was digging through tons of fixtures to debug failing tests.

At that point I thought something had to be done, but then things got a little worse. I realized I was mocking and stubbing and faking and spying, and, well, you get the idea. My tests were using everything available for writing tests out there, and they were just as complex as the code I was writing. I was hoping the flaky tests wouldn't show up again in the pipelines, because some of them I couldn't reproduce on my local system, so I had no idea how to proceed. At one point I was just staring at assertion failures and stack traces aimlessly.

So we decided there had to be a better way. We tried out a lot of different testing strategies, and some of them started to make a lot of sense. As a benchmark: we went from fixing about a hundred tests for a minor version release to fixing fewer than ten tests for a major version release. So we thought, oh, these things actually work, and we decided to compile all the small things you can do without changing a whole lot in your existing suite.

Since my title is so clickbaity, I'd like to explain what not to expect from this talk. It's not going to be a comparison of the different testing libraries, frameworks, and approaches out there. I'll assume you're happy with what you have, and if you're not, there are tons of great libraries, resources, and articles online that outline those differences much better than I can in a 30-minute talk. This is also not me preaching about writing testable JavaScript from the beginning. I won't tell you that your tests are bad because your code is bad, because I can't preach about something I don't do myself.

So what can you expect? Some small actions that improve your current tests without a whole lot of drastic changes to your current frameworks. These are common mistakes and how to avoid them. The way I've structured the talk is as twelve points covering the before and after of common testing scenarios you might come across. One thing to note is that none of the "before" versions I show here are actually bad. They work, and they work very well. It's just that as you scale, some of them might not scale with you. I've also tried to stay as framework-generic as I can, but for the purposes of these demonstrations I'm using Chai and its expect interface.
One more thing to note: you can apply these suggestions in any of the testing frameworks. For the demonstrations I'm using Chai, because I love the beverage. So yeah, let's get started.

Before the first point, there's a zeroth point: the test monolith. The term "monolith" has been thrown around a lot today, so I wanted to ride the wagon. The test monolith is code with high coupling and no isolation. All of the tests are coupled in such a way that you can't isolate them to run them independently. These tests are neither unit nor integration: they make DB calls, they call third-party services, they assert on multiple values, and they don't fall into either category because they give you the worst of both worlds. You don't get the isolation and speed of a unit test, and you don't get the coverage and confidence of an integration test either. I mention this as point zero because it's the thing we're trying to avoid throughout. All twelve points build up to it, so that if you follow them, you'll hopefully be able to avoid the test monolith. Okay, let's get started.

The first point: asserting on deep equality. We'll walk through this code and see what's wrong with it. This is a normal test for user details verification. It says the user returned from a DB call should include all the required keys, which is a valid thing to test. We have a user ID (we won't get into the details of how this user was created), and we simply call a DB service to get the user. Once we have the user, we assert that it deep-equals an exact object: an id of 1 and a name.

This might seem like a perfectly valid test, and it is, but there's a small problem with it. Whenever you're dealing with deep equality and strict checks, there's a very, very high chance you'll be fixing your test later. Those of us who work at startups work on things under rapid development. You might be adding a lot of different keys, and removing a lot of values, from the response objects that you send from one function to another. If you're very strict about the object you're checking in your test code, the test is bound to break. Tomorrow you say, "okay, I want to add a username field to my users," and all of your tests that check the user response this strictly start breaking, because the assertion says the response should be exactly this shape.

A slightly better way is this: the exact same test, but when we get the user by calling dbService.getUserById, rather than asserting on the exact structure of the object, we just say that it should include all the keys we care about. Now, I promised I wouldn't get into the details of one framework, and this is something present in most BDD assertion syntaxes; I've just shown Chai's version. You'll find something like include or contain in your testing framework of choice as well. This simple fix, not asserting on the exact deep object but just checking that it includes the required keys, can make a huge difference when you start iterating rapidly on your projects.
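Here's a rough sketch of that before and after, assuming a Mocha-style runner with Chai; the dbService helper, userId, and the id/name values are stand-ins for the talk's example, not the exact slide:

```js
const { expect } = require('chai');

describe('User details verification', function () {
  it('should include all the required keys in the returned user', async function () {
    const user = await dbService.getUserById(userId);

    // Before, brittle: breaks the moment a new key like `username` is added
    // expect(user).to.deep.equal({ id: 1, name: 'deb' });

    // After, resilient: asserts only on what this test actually cares about
    expect(user).to.include.keys('id', 'name');
    expect(user).to.include({ id: 1, name: 'deb' });
  });
});
```

With include, adding a username field tomorrow leaves this test green; only a change to the keys it actually asserts on will break it.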
Moving on: skipping tests. Skipping tests is something we've all done in the past, and it's actually required in some cases. For example, here I have a list node ordering test that says it should allow moving a list node to the beginning; I'm assuming it's a linked list or something like it. It's a perfectly valid test, but for now we've skipped it. So what's wrong with that? There might be something that genuinely isn't ready for this test to run. The problem is we haven't specified what that thing is, and we haven't specified when the test is ready to be unskipped. I've seen open source code bases with a skipped test from two years ago that none of the later developers had the heart to touch, because they didn't know what it did.

A small but nifty solution is to spell it out. We say that moving a list node to the beginning is currently not supported because of a specific limitation, and we add a TODO that says to unskip this once the head pointer is introduced. The next person who comes along and reads this can see that the head pointer is now ready, unskip the test, and get on with it. This is a small change, but I've seen the alternative accumulate over time: fifteen or twenty skipped tests that nobody dared touch, because nobody knew what they were supposed to do. A small thing that goes a long way when you're working with teams.

Next: too many fixtures. Fixtures are just dummy data that you feed into your tests so you can assert on the responses. We've all seen that large JSON object; we have one as a fixture here. In this one, I have a user u1 whose role is admin, a second user whose role is user, and many other users with slightly different properties that I use in different tests. The first I might use in a test for admin privileges, the second for non-admin privileges, and so on. The problem is that once something breaks, the next person has to go three levels deep to debug it. Imagine a test that reads fixtureData.u1 and suddenly starts to break. The next person opens the fixture file, finds u1, works out which of its properties matter, goes back to the test to see which method might be breaking it, and only then starts debugging. If you have two thousand of these objects for different scenarios, you have some technical debt to pay off.

A better way is to construct an object with the minimum required properties. Here, as I mentioned, I'm checking admin privileges: an admin should have access to remove members from the team. The only difference that matters between the first two fixture users, apart from the name and avatar differences, is their role. So I can construct a bare-minimum object that I pass to the function, and check the same things, rather than reaching into fixtureData. Since the object is specified in the test itself, it's very easy for the next person to debug. Note the phrase "minimum required properties": some of you might have policies in place that require a user object to have certain properties and keys, so a known minimum set with which you can construct objects for your tests is a nice thing to have.
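To make both of those points concrete, here are two small sketches. First, the annotated skip, assuming Mocha's it.skip; the linked-list wording follows the talk's example:

```js
// Before: nobody knows why this is skipped, or when it can be unskipped
it.skip('should allow moving a list node to the beginning', function () {
  // ...
});

// After: the reason and the unskip condition travel with the test
// TODO: unskip this once the head pointer is introduced
it.skip('moving a list node to the beginning is currently not supported (no head pointer yet)', function () {
  // ...
});
```

And second, constructing the minimal object in the test itself instead of reaching into a fixture file. The team.removeMember API and the user shape here are illustrative, not from the talk:

```js
it('should allow an admin to remove members from the team', function () {
  // Minimum required properties only: the role is the thing under test
  const admin = { id: 1, role: 'admin' };

  const result = team.removeMember(admin, memberId);
  expect(result.removed).to.be.true;
});
```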
Moving on: conditional logic. This one is a little bigger, so let's walk through it and see what's actually going wrong. The test checks that on violating a usage threshold, we should add the user to a mailing list. Say a user violates some threshold; we add them to a mailing list to tell them, "hey, upgrade your plan," or something like that. We compute the user's current usage from a random number, and based on the number we get, the usage might be above the threshold or below it. If it's below the threshold, we assert that the mailing list does not have this user; if it's above or equal, we assert that it does. This is a valid thing to check, that a user who violates the threshold gets added to the mailing list, but there's still something wrong with it. As the name of the point gives away, it's the conditional logic. A test should ideally exercise exactly one execution path. If you have a test that checks multiple execution paths, you should break it into separate tests, each forced down one of the paths.

So let's look at the better way. We define a value above the threshold, say 1000, use it as the current usage, and in the test we simply assert that the mailing list has the user. In the next test, we use a value below the threshold and check the alternate execution path. It's a simple change, but it makes sure each test always goes through one given execution path. There's no uncertainty involved.
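As a sketch of that before and after; the threshold constant and the mailingList and user.setUsage APIs are approximations of the talk's example:

```js
// Before: one test, two possible execution paths
it('should add a user to the mailing list on violating thresholds', function () {
  const currentUsage = Math.floor(Math.random() * 2000); // may land on either side
  user.setUsage(currentUsage);

  if (currentUsage < THRESHOLD) {
    expect(mailingList.has(user.id)).to.be.false;
  } else {
    expect(mailingList.has(user.id)).to.be.true;
  }
});

// After: one fixed value per test, one execution path each
it('should add the user to the mailing list when usage is above the threshold', function () {
  const VALUE_ABOVE_THRESHOLD = 1000;
  user.setUsage(VALUE_ABOVE_THRESHOLD);
  expect(mailingList.has(user.id)).to.be.true;
});

it('should not add the user to the mailing list when usage is below the threshold', function () {
  const VALUE_BELOW_THRESHOLD = 1;
  user.setUsage(VALUE_BELOW_THRESHOLD);
  expect(mailingList.has(user.id)).to.be.false;
});
```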
Next: poor spec naming conventions. We all know that naming variables is a hard problem in tech, and the same is true of naming your test specs. What I've seen most people do is use the name of the function in the describe block. If I'm writing a function that adds a user, my describe block says "addUser". But as a rule of thumb for writing better descriptions, the describe and it blocks together should always form a proper sentence. Even if you put the name of your controller in the describe block, the next person who comes along might not have the context; if reading the describe and it blocks makes perfect sense to them, their job gets so much easier. A single test description should include the unit of work, the scenario, and the expected outcome. Let's look at some cases where this is followed and violated to get a better idea.

Here we're using the function name as the describe block description, and the it block says it's a test for #7233. Now, 7233 could be a GitHub issue, could be a JIRA ID, could be God knows what. If this test decides to fail tomorrow, the next person looking at your code has to figure out what #7233 means, look up that particular issue, see what went wrong there, go back to the test, work out what changed between then and now that might have caused the failure, and then fix it. By that time you've lost a thousand users. And an ID like this doesn't make a lot of sense anyway: what if you were using something like JIRA earlier and then transitioned to something like Aha!, so you now have a different issue tracker? How do you figure out which tracker this ID actually belongs to?

Moving on, we have this test, which again uses the function name as the describe block text, and says it should return 200 for proper input and 400 for wrong input. I've actually taken this from a very famous open source library, and both of these tests pass. In the first one, somebody used a value of, say, five; in the next, somebody used, I don't know, eight. As the next person who comes along, it's left to my imagination what to make of those numbers. Does this function work only for odd numbers? Only for multiples of five? Since nobody specified what the proper input or the wrong input is, I don't have the context; it leaves a lot to the imagination of the next person. There's a very famous quote that says to always write your code as if the person who ends up maintaining it is a serial killer who knows where you live. If you follow that principle, this is not what your test specs should look like.

So, a better way: a spec that specifies that dispatching an active bucket should empty the active bucket and add it to the processed list. This follows all the principles we've talked about so far, because it specifies the unit of work, the scenario, and the expected outcome. If the next person comes along and sees it failing, they immediately know something is wrong with the dispatching logic of the active bucket, and they can go ahead and fix it. They're a happy person, and so are you.
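As a sketch of specs that read as sentences; the bucket example follows the talk, and the test bodies are placeholders:

```js
// Before: function name as the description, a ticket ID as the context
describe('dispatchBucket', function () {
  it('Test for #7233', function () { /* ... */ });
});

// After: unit of work + scenario + expected outcome, readable as one sentence
describe('Dispatching an active bucket', function () {
  it('should empty the active bucket and add it to the processed list', function () {
    /* ... */
  });
});
```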
Next: deeper implementation levels. Let's look at this one and see what might be wrong with it. We're testing a function that adds a user to the database, and we say that adding a user should add them to the database (so much for good naming). We specify a user with the bare minimum properties we talked about earlier, and in an async waterfall we add the user first; internally, the user lands in a queue and is then flushed to the database. In the assertion, we say that the internal queue should include that particular user. This looks like a perfectly valid test, right? We add a user, it gets added to an internal queue, then gets flushed to the database, and the user ends up in the DB. So much for writing good tests, but there's something wrong with it: we're asserting at a deeper implementation level than the one we wanted to be at. What does that mean, and why does it matter? It will make sense in a minute.

Here's the better way. We do the same thing, add the user to the DB, but now we stay at the right level by doing a DB verification. Rather than checking whether the user was added to the internal queue properly, we fetch all the users and check that the user we just added is among them. What does this do, and why does it make a difference? Consider the scenario where your internal queue's flushing logic breaks. Any user you add is no longer actually reaching the database, but all of your tests still pass, because you only asserted that the user was added to the internal queue properly. This is a simple, naive example, but the point it highlights is that we should always stay at the same level of implementation as the behavior we're claiming to test. If you're trying to say an action went through, then rather than just checking that the endpoint returns 200, verify that the action is reflected in the DB. That way you're not screwed if there's something wrong, or a discrepancy, between your database and the JSON you return. Is it making sense? Okay, nice. So: we expect that the users from the DB include this particular user.
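A sketch of the two levels; dbService, its internalQueue, and getAllUsers are stand-ins for the talk's example:

```js
// Before: asserts on an internal detail; still passes if queue flushing breaks
it('adding a user should add them to the database', async function () {
  await dbService.addUser(user);
  expect(dbService.internalQueue).to.deep.include(user);
});

// After: stays at the level we care about; verify the outcome in the DB
it('adding a user should add them to the database', async function () {
  await dbService.addUser(user);
  const users = await dbService.getAllUsers();
  expect(users).to.deep.include(user);
});
```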
Next: uncertainties. We have a test for string handling that says it should ensure large strings are handled properly. It's for one of those functions that formats large like-counts, say 10 million likes into "10M". The way this function behaves, if it gets a string it cannot handle, it throws an error. That's the assumption. Here's how we test it: we create a very large string by looping an iterations variable and appending "12345" each time, giving us a huge string x. Then we call formatLikes with it and see whether it throws. If it does, we return the done callback with the error; otherwise we don't. So the test fails if formatLikes throws. Some of you might be wondering why there are no assertions here; that's because of how we've handled the error in the catch block. We return the done callback with the error, which fails the test if the function throws.

Now, there's something wrong with this: if the test fails, there's an uncertainty in my mind. Did my formatLikes function go wrong, or did my string generation go wrong? It might be that the method I use to generate large strings has gone wacky and isn't generating the strings properly anymore. Did that break, or did my actual function break? You'd have to check to know, and that's exactly what you don't want to be doing when things are breaking. A better way is to specify a static string, so that you trust your tests. You know you specified a static value in your test, so if the test is failing, something is wrong with your formatLikes function for sure. The last thing you want when things are failing in production is to be uncertain whether your tests are wrong or your code is. You want to be certain about what to target when things break, and this gives you that certainty. So we just specify a hard-coded string and use that in our function.

Next: separating declarations from usage. This one isn't really about how tests work; it's something you come across when debugging failing tests, and it used to get on my nerves, so I decided to include it. What you'll see here is two variables at the top (bonus points to anyone who knows what they mean; come talk to me afterwards). In this file, we have describe blocks with vars declared up top. The first block uses neither Luffy nor Zorro, the second uses Zorro but not Luffy, and the third uses both Luffy and Zorro, yet we've declared all the variables at the top so they're available in all the describe blocks. The next person who joins your organization might see this and take it as the convention: any variable you want to use in your tests gets declared up top. This leads to a snowball effect. I've seen test files that open with 150 lines of variable declarations before the first describe block starts. The last thing you want when you're debugging a failing test is to scroll all the way up, find the variable in question, and then try to debug it. I've tried split views and all of those things, but when things are failing, you just need your peace of mind. A good way to do this is to use the scope that describe blocks provide: declare variables closest to their usage. Since each describe block provides a scope of its own, a variable that's only used in a particular describe block should be declared at the top of that block. It makes a lot more sense to scope variables to the describe block they're used in, rather than putting them at the top of the file.
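Two quick sketches for those last two points. First, removing the uncertainty in the string-handling test; formatLikes and its throw-on-bad-input behavior are as described in the talk, while the exact input string is illustrative:

```js
// Before: the input itself is generated, so a failure could mean formatLikes
// broke or the generation loop broke; you have to check to know which
let x = '';
for (let i = 0; i < iterations; i++) { x += '12345'; }
expect(() => formatLikes(x)).to.not.throw();

// After: a hard-coded input you can trust; a failure points only at formatLikes
const LARGE_INPUT = '12345'.repeat(100000);
expect(() => formatLikes(LARGE_INPUT)).to.not.throw();
```

And second, scoping declarations to the describe block that uses them:

```js
// Before: every variable hoisted to the top of the file
let luffy, zorro; // ...plus 150 more lines of these

// After: declared closest to usage, inside the block's own scope
describe('first mate duties', function () {
  let zorro; // only the tests in this block can see it
  // ...
});
```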
Moving on: dependence. Let's look at what this test is doing, and then we'll see what's wrong with it. We have a describe block that tests all the scenarios of updating a user's status. In the first test, we say it should allow updating meta details about the user; in the second, that it should allow updating the user's role. We create a user, pass that user's ID to the next block, and then verify in the DB that the update went through, like a good tester. All good, but here's what we did wrong: in the second test, we update the user and verify it in the DB using the same user that was created in the first test, hoping that since they're in the same describe block, they'll always be executed together. That way, when the first test runs, I have the created user, I update its preferences, and I can reuse the same created user in the subsequent test. This leads to dependence between tests. Ideally, all of your tests should be able to run in isolation: when you're fixing just one test, something like an .only clause should work on each and every test, and setups and teardowns are there for your rescue.

So here's that same test again, in a better way. In the "updating user status" block we have a before hook, which runs before the tests in that describe block: the hook runs first, and then the tests. And if you look, I'm using it.only here, and it works, because before this test ran, the user creation happened in the before block. Even if I run this test in isolation, my before block still runs, and I have access to the user I'm trying to update, unlike the first case, where the user was created in the first test and reused in the subsequent one.

This brings us to the next point: extra setup. We just saw how setups and teardowns can help remove dependence, but used the wrong way, they can also hold you up and choke your pipelines. Let's look at this test of user update flows, where we're checking admin privileges. We create an admin user and a non-admin user, and in the after block, we delete the created users. Notice how we're using beforeEach here instead of before: it runs before every test, so that even though we create and delete the users, they're available again in the next test. The first test uses the created admin user, and the second uses the created non-admin user. This seems like a perfectly valid test. The thing wrong with it is that we're creating both the admin and the non-admin user before each test runs. The flow looks like this: we create two users, one of them is used in the first test, then we delete both users and create two users again; one of them is used in the second test, and then we delete both of them again. These are just two tests; imagine five thousand. If you have a lot of code in your setups and teardowns that isn't actually used by every test in that describe block, you're running a lot of work for each of your tests. As you scale, your pipelines run for thirty or forty minutes, so your developers push something, go out, eat something, come back, see that the pipeline has failed, rerun the pipeline, and do that for the rest of the day. Not something you want.

Here's a better way: scope the different checks into two separate describe blocks. Since describe blocks follow scoping, each block runs only its own lifecycle hooks. The first gets the setup for admin users, and all the admin-related tests go into that block; the non-admin tests go to the second block. This way, the test that verifies whether the admin can update the team billing cycle has only the admin user created before it runs. It lets you scope things out so that only the tests that share common prerequisites are grouped together in a describe block.
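A sketch tying those two points together: each describe block carries exactly the setup its tests need, and every test can run in isolation. The createUser, deleteUser, updateUser, and getUser helpers are hypothetical:

```js
describe('Updating user status', function () {
  let userId;

  // Runs even when a single test is executed with .only,
  // so no test depends on another test having run first
  before(async function () {
    userId = await createUser({ role: 'user' });
  });

  after(async function () {
    await deleteUser(userId);
  });

  it.only('should allow updating the role of the user', async function () {
    await updateUser(userId, { role: 'admin' });
    const user = await getUser(userId);
    expect(user.role).to.equal('admin');
  });
});

describe('Admin privileges', function () {
  // Only admin-related tests live here, so only this setup runs for them
  before(async function () {
    /* create the admin user */
  });
  // ...
});
```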
Next: asserting on values without explanations. Let's look at this. We have something called a spy here. A spy is just a construct that lets you watch a function and see everything that happens to it: how many times it's been called, what arguments were passed each time, and things like that. You can read more about spies; I'm using Sinon here. I have a spy on DB update and a spy on DB destroy, I create an admin user, and I delete the user. In the assertions, I say that the spy on DB destroy should have a call count of one, and the spy on DB update should have a call count of two. This test works, everyone is happy, but then one fine day it decides to break. The assertion message you get would simply be that the spy on DB update has a call count of one rather than two. At this point, the next person who sees it has no context about what goes on in the DB update method, or about everything that happens during an admin deletion, so they have no idea what might be going wrong. In the worst-case scenario, they change the expected call count from two to one, make the test pass, and push it, hoping the test was actually wrong earlier and they've now fixed it. Something like this can easily confuse people, because you're asserting on bare values without explaining them, and when it starts to fail you simply get a message like "expected 4 to equal 3," which is very confusing for the next person.

A small thing here goes a long way. We do the same thing, create the admin user and delete the user, but then we specify what each of those calls actually means. In the first case, we specify that DB destroy should be called once, for the user who is being deleted. In the second, we specify that DB update should be called twice: once for the team members being updated, and, since this was the admin user, once because ownership had to be transferred to someone new. Now if this fails, the next person knows that either the team-member update logic or the ownership-transfer logic has gone wrong, and that they have to go check one of those two things. They have a clear path to debugging what they need to, rather than speculating about what could have been.
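A sketch of the difference, using Chai's optional assertion message; the Sinon spy names mirror the talk's example:

```js
// Before: a failure reads as "expected 2 to equal 1", with no context at all
expect(dbDestroySpy.callCount).to.equal(1);
expect(dbUpdateSpy.callCount).to.equal(2);

// After: the assertion explains itself when it fails
expect(dbDestroySpy.callCount,
  'destroy is called once, for the user being deleted').to.equal(1);
expect(dbUpdateSpy.callCount,
  'update is called twice: once to update team members, once to transfer ownership')
  .to.equal(2);
```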
Okay, the last point: too many assertions. Let's see what this huge test is doing. It says it should verify that a user with the admin role can delete the team. We create a user and assert on the returned response, like a good tester. We verify in the DB that the action went through. We then update the user to grant delete access, and verify in the DB again that the access was granted. We then delete the team this user belongs to and, again like a good tester, verify in the DB that the team was deleted. This looks like a great test, but there's something wrong with it: at every step, we have a lot of assertions. We create a user, then verify it went through; we update the user's permissions, then check that went through; and only then do we actually do what we wanted to do.

A good way out is to use something like stubs. A stub is basically a function with pre-programmed behavior: you can specify that this function should always do this. If a stub is wrapped around a function, the original function is not called; the function you specified in its place is called instead. So here, I have separate tests for user creation and user update, which is very important. I can say I'm pretty sure the user is created properly and updated properly, because I have dedicated tests for those, they pass, and I'm confident in my tests. In this particular test, I simply stub things so that the user exists and has the required access: I say that whenever dbService.getUser is called, rather than calling the actual function, just return this user that has all the granted access. Then I get the user, delete the team, and, like a good tester, verify in the DB that the team was deleted. The idea is that you should have some abstraction in your tests: if a test is actually doing multiple things, you can extract some of them out and stub your way to the point you care about.

Which brings me to abstraction in tests. Consider a scenario with an initial state, some setup, a branching state, and then transitions down different paths A, B, and C. Ideally, there should be a separate test asserting that a user can reach the initial state properly. There should be a test that stubs the user reaching the initial state and checks that the setup state is reached properly; another that stubs the setup and checks that the branching state can be reached; and further tests that start from the branching state and exercise each individual path. The point I'm trying to make is: if you're testing a thing once, make sure you test it only once. If you have a test for user creation, then every subsequent test that requires a created user should not be creating one again and again. If you have to reach a particular branching state to exercise multiple paths, use stubs to ensure you're already there, and test only the thing that's specific to that path.

Which brings me to test design. Test design is nothing fancy. I'm not going to talk about TDD, because it doesn't make a lot of sense for some startups where you're rapidly changing things: you write tests for something, make them fail, make them pass, write the code, and tomorrow some other requirement comes along and the entire code base goes away. But something as simple as test design, where you lay down constraints to define boundaries, identify the different actors, scenarios, and permutations from the beginning, write them down as describe blocks, and fill in the tests later, makes a lot of sense. It allows you to think first and test later: you've already prepared a spec of what your tests should look like, and when your code is ready, you simply fill in those specs.
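A sketch of the stubbed version, Sinon-style; dbService.getUser follows the talk, while teamService.deleteTeam, getTeam, and the access shape are approximations:

```js
const sinon = require('sinon');

it('should verify that a user with role admin can delete the team', async function () {
  // User creation and permission granting have dedicated tests elsewhere,
  // so here we stub straight to the state this test needs
  // (restore the stub in an afterEach in a real suite)
  sinon.stub(dbService, 'getUser').resolves({ id: 1, role: 'admin', access: ['team:delete'] });

  const user = await dbService.getUser(1);
  await teamService.deleteTeam(user, teamId);

  const team = await dbService.getTeam(teamId);
  expect(team, 'team should be gone from the DB after deletion').to.not.exist;
});
```

And the test-design idea as a sketch: lay the describe blocks down first, and fill the tests in once the code exists (an it with no callback is a pending test in Mocha):

```js
describe('Team deletion', function () {
  describe('as an admin', function () {
    it('should transfer ownership before deleting');
    it('should delete the team');
  });
  describe('as a non-admin', function () {
    it('should be rejected with a 403');
  });
});
```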
So, you've followed all twelve points and mastered the art of writing mature tests. What does all of this give you? Well, onboarding new people isn't a nightmare. Consider the scenario: you have a huge project, and I decide to join tomorrow. I see this massive code base, I come to your desk, and I ask you to please explain what all of these files do. You don't have the time or patience to do that. But if you have good test suites with proper naming conventions, where the describe and it blocks make proper sense, you can just redirect me to them and say, "read through the describe blocks; they'll tell you what all the different things are supposed to do," since they're actually mini use cases. Onboarding me onto your project is simply redirecting me to a well-written test spec. On the other hand, if you're the one being onboarded and the person before you wrote great tests, then hooray for you, because you just got saved.

And it gives you confidence against regressions, which is the most important part. When you're writing something, you don't want that voice in the back of your mind: "I wrote something similar to this two months ago. Will this break that? Should I recheck my git log? Should I see what I did back then?" If you have good tests, you don't have that voice at the back of your head. And that's what you need to build the next big thing in tech. So go ahead and build that next big thing, and don't let tests hold you back while you do it. Happy testing. Thank you.

Wow, that was a nice talk. Any questions?

Q: Hi. I have a difficult time understanding the difference between mocks and stubs, so could you shed some light on that?

A: Okay. In my experience, and this is just my personal experience, you don't really need mocks at all in the majority of cases. What I've seen is that a stub with pre-programmed behavior suffices most of the time. You use mocks when you actually want to assert on the response the mock gives. For example, if you're making a third-party call to get some data and you need that data to backfill your test, a mock makes sense, because you're saying that rather than actually making that third-party call, this is what the call would end up returning. But if you're using something internal, like the getUser examples I showed, a DB service, or an internal method doing some computation, stubs make a lot more sense, because you can just specify the function that should override the implementation. And since I talked mostly about unit tests, which ideally shouldn't include third-party calls at all, mocks don't make sense in those scenarios. So that's the main difference I draw: for any internal thing you're calling, stubs suffice; for third-party calls whose results you want to assert on, go with a mock.

Q: Hi, everyone; sorry, over here. In your eleventh point, where you explained why asserting bare call counts of one and two wasn't proper, the solution was to write an explanation so the next person can understand. Is there more to it? Otherwise, if I have to rely on a comment, I could rely on the same comment in the code itself instead of putting it in the test block.

A: That's a great question, actually. My counter-argument whenever someone asks that is that I use tests to onboard people, like I mentioned: onboarding isn't a nightmare if you have good test specs.
So if I'm redirecting someone to look at the tests, I want them to look at the actual code as minimally as possible. Ideally, I tell them to collapse all the describe blocks and just read the sentences they make: "dispatching the active bucket should do this in this scenario, should do that in that scenario," and so on. I ideally tell them not to open the it blocks at all. But in the worst case, if they'd like to see what's happening, they expand a block, and if they find an explanation like this there, they don't have to go to the code. Purely from my personal experience, I've seen that specifying these things in the test lets you onboard people better. Because if that person goes into the code file to read the comment, now they're in a new rabbit hole: they start trying to decipher what the code is doing, how things work, how things integrate together, and they actually get a little confused. So in my experience, specifying things at the test level itself makes more sense in these cases.

Q: Hey, I'm here at the back, can you hear me? Okay. Great talk; I like the idea of designing test cases before actually writing them. I have a question around documentation. Let's say I've written test cases for five to ten files, and somebody is joining my team now. I'd want that team member to go through documentation first rather than going into the test cases and looking at the assertions. Comparing it to code: if you add comment blocks on top of your methods...

A: JSDoc?

Q: JSDoc, yeah, that's it. You can use an npm package to generate documentation from those. Surely there's one for test cases as well?

A: So, JSDoc actually makes a lot of sense; we use it heavily for some things. But what I've seen is that when you're working in a fast-paced environment, on a module that might not even be around tomorrow because expectations change in startups, you don't really have the time and patience to write those beautiful JSDoc comments. In those cases, something as simple as a normal comment specifying what you're trying to do, or describe blocks that make proper sentences, suffices. If I'm writing a module that's being open sourced, or one that's going to be well maintained throughout, I'd definitely recommend going with JSDoc, because you can have a simple npm command to update the documentation whenever you make changes. That can be part of your publishing pipeline: whenever you publish, update the documentation from the latest comments. That makes a lot more sense for modules you know are going to stick around. But when you're quickly hacking on something that's rapidly iterating, things like these make more sense.

Q: Okay, so my question is whether we have any tool to generate documentation from test cases, similar to JSDoc.

A: Probably; I haven't come across one. If you do find one, please tweet about it; I'd be happy to look at it. Yeah, thank you.