Hi, and welcome to test-driven development in open source projects. My name is Jonathan Burkhan. I'm an open source contributor for IBM. What does that mean? It means I work full time on open source software projects on behalf of IBM. Day in, day out, I ship code to various open source projects. Currently, I'm a contributor to Kubernetes, specifically Kubernetes operators. Previously, I worked on service catalog and other Kubernetes extension projects. Before I worked on Kubernetes, I worked on another open source platform called Cloud Foundry. I've been in the space for around five or six years now, so hopefully I have something you might call wisdom that I'm here to share with you: why tests are good, why you should write tests for your open source projects, and how they can help you manage the distributed world of open source software development.

Before we go any further, though, I do have a caveat. This talk was originally going to be an interactive, live-coding tutorial. Given the pandemic and the shift to online, it was decided to rewrite it as a more conventional presentation of me talking at you. However, if you would like to follow along, especially if you're viewing this as a recording, all of the code I'm going to be tinkering with is available online on my GitHub. Just follow the link on the screen, check out the code yourself, and try to follow along.

So let's get right into it. Test-driven development: that term, in my experience, has been contentious. When people hear you say it, they're like, "oh, I really like that" or "oh, I really hate that." But it's kind of lost its meaning from overuse, so I want to stop and take a moment to define exactly what I mean when I say test-driven development. To me, test-driven development is using tests to drive out the functionality of your software.
In that respect, the tests themselves are really the primary store of the functionality of your software. If you have some feature, the test you wrote, the one that proves the feature exists and works the way you think it does, is really the primary form of that feature. The source code that implements the feature actually causes it to exist, but it's the test that proves the source code does what you think it does. That means the tests come before the implementation: you need to create a test that describes the functionality you're trying to create and proves it exists. It also means that test writing goes hand in hand with development. It's not something you have a separate QA section for. It's not something that happens after the fact. Writing tests is development itself. It's done by the developers, and it's done before you even write the source code. It happens at the same time, by the same people.

So why would we want to do this? A normal, non-distributed software development team might look something like this: a single unit of people who collaborate with each other to create a product, to work on a project. Hopefully, knowledge of the state of the project is distributed amongst them and shared, because they're all working together. I realize this is an idealistic view; even single teams are often geographically and temporally distributed. But in an ideal world, it might look something like this. In my experience, most open-source project teams look a little more like this: they're geographically distributed, they have members who work for different companies in different time zones, and different people with different expertise are responsible for different parts. So the responsibility for developing the software is distributed, along with that expertise.
Which means, in terms of who wrote what part, the actual software product ends up looking more like this: a whole bunch of different pieces that connect with each other, written by different people at different times, some of whom maybe don't even work on the project anymore. So to loop this back around: if you use test-driven development as a methodology to develop your project, then even though many different hands touch many different pieces of the software, when a new change comes in with the test for the new feature, and all the old tests written for previous features still pass along with it, you know your software continues to work. The tests proved that your software worked at the beginning, so you know it still works now.

To demonstrate what I mean by this methodology, most of this presentation is going to be a live coding exercise. So hopefully this works. Okay, what I have here is a very basic program that acts as a calendar. You can set dates and check whether a date is a holiday, yes or no. I have a very basic scaffolding set up right here, but so far it doesn't actually do anything. I've written a very basic failing test, and I'm going to step through the thought process of using TDD to drive out the functionality of this program. Originally, like I said, this was intended to be an interactive session, so feel free to chime in with suggestions or questions. I have a moderator, Chris, standing by to forward those questions to me.

So, like I said, this is a basic calendar program. You set holidays, then you give it a date and ask: is this day a holiday, yes or no? I've started off with set-weekday-holiday, so I should be able to set days of the week as holidays. We'll start off with Saturday.
I don't come into work on Saturday, so Saturday is a holiday. I've written a basic failing test that instantiates a calendar and sets a date. That's the 6th of June 2020, which I've selected because June 2020 started on a Monday, so the 1st through the 5th are normal days and the 6th and 7th should be holidays. I set Saturday as a holiday, using the basic scaffold method I've created, which doesn't actually do anything yet. Then I assert that is-holiday of d, where d is June 6th, should be true. If it's not true, error and say "Saturday should be a holiday."

So this test on the right: I've written it to drive out the functionality of my program. What functionality does this test prove exists? Currently it fails. So what is the simplest possible change I can make to my program that makes this test pass? Now, you might think: I could make set-weekday-holiday actually do something, store that state somewhere, and really drive out this functionality. But I haven't yet written a test that proves that functionality should exist. The simplest possible implementation that makes this test pass is actually trivial: I can just make it return true, and that'll make the test pass. Obviously that's not a reasonable actual solution, but I haven't written any test that says it shouldn't be the solution. So how can I write another test that drives out how this should actually function?

Let's add another test that's similar to this one, but the opposite case. Instead of test-weekday-holiday, let's say test-weekday-not-holiday, and instead of a Saturday we'll make the date the 1st, which is a Monday. And we can get rid of that line; it's not doing anything for this test. So this should not be true, because Monday should not be a holiday. Now we have two test cases. I've written another test case, which should fail, hopefully, if I've done everything right.
Okay, so we have a problem. Currently we're always returning true, which works for the first test case but not for the second. So again: what is the simplest possible implementation that satisfies these two test cases? Well, I can't just always return true or always return false, because now we have two different cases. So I guess the simplest possible thing is to check if the date is a Saturday; if it is, return true, else return false. Okay, so we've written an implementation that satisfies both those conditions. The tests pass, so that means we're good.

Now again, obviously this isn't a useful implementation; all I've done is hard-code in Saturday. But how do I prove that in a test case? Well, we can add another test case. Let's try Sunday, which would be June 7th, and rename the test case. Okay, so that fails. It passes on the first case but fails on the second, because we haven't made anything that cares about whether it's Sunday or not. Now, we could drive this out a little more, hard-code in "equals Saturday or equals Sunday", but I'm going to skip ahead a little, because I've already created the scaffolding and these methods aren't actually doing anything. We're just going to move those over here and store that state somewhere. So we'll have a list of weekday holidays, which is really just a slice of strings, and rather than hard-coding Saturday in there, we're just going to range over that slice in the struct. Okay, so here's a very basic actual implementation that should make our tests pass.

So that's great: we've driven out this functionality. Let's say that was version one of our program, and now we want to write version two. We want to add a new feature, say the ability to set a specific date as a holiday, like the Fourth of July 2020.
Before we even touch the source code, let's write a test case that drives out that functionality. It's similar to this test, but we're going to test one specific day as a holiday instead of just days of the week. We'll call it test-specific-holiday, and we'll do July 4th. We know we want to be able to set this separately, because it's a different kind of holiday. Now, this isn't going to compile, because that method doesn't exist, so let's fill in a very basic placeholder. It doesn't actually do anything, but it's enough to make the code compile. Go ahead and rerun the tests, and that fails.

Okay, we've already got an implementation working for the previous feature, so let's just copy that. Add some more state to our struct, a field like holiday dates. Oops, if I can use all the right parentheses. And then down here, just make it range over that as well. So hopefully that should pass. So we drove out that feature using a new test, and you could continue to iterate on this process for a while, writing tests to drive out new, novel functionality, and so on and so forth.

So let's do that again, but with a twist this time. Let me just write the test case. Instead of a specific holiday, let's say we want to set a recurring holiday: every New Year's Day should be a holiday. A recurring holiday test would look something like this: New Year's Day in both 2020 and 2021 should be holidays. Obviously that method doesn't exist yet, so we write a placeholder that doesn't do anything but is enough to make the code compile. And we have a failing test. Now, at this point, we could just keep doing what we've been doing: add another field to the struct to keep track of this information, add another range in our is-holiday method to check it.
But let's say we're adding feature three to our program, and I'm a different developer than the one who wrote the first two features. I say: we could do that, but it's getting a bit cumbersome. We're adding all these for loops, and I really don't want to deal with all the previous implementation. So I'm going to put on my refactoring hat and go in and refactor some of this code. This seems like an ideal candidate for an interface.

So before I even touch the novel functionality, I'm going to go back and refactor some of the code that's already been written. Rather than storing every type individually, I'm going to have one single field called holidays, which is a slice of Holiday, an interface I'll create. This interface specifies a single method, we'll call it Equals: it takes in a date and returns a bool. Does this date equal a holiday, according to whatever my idea of a holiday is? And we'll implement this interface for each of the three types we've got going on right here. So the first one, we'll say weekday holiday, which has a string. And then here, instead of keeping it in the specific field, we'll just append it to holidays. That's going to complain, because that type doesn't implement Equals yet, so let's go ahead and do that.

I'm not writing tests for this functionality, I would like to mention, because this is just an internal refactor. These methods aren't publicly exposed, so no one outside of this package would ever use them. I'm exposing the same interface that people have been using previously to interact with this package, so as far as they know, nothing has changed. So let's go ahead and write Equals, which is really just this functionality right here.
We say if the weekday equals... actually, instead of ranging over that, I can just compare directly. Okay, so I've reimplemented weekday holidays using my new interface. We've still got the old functionality for specific holidays, so if I go back here and run the tests, theoretically all of them should pass except the ones for recurring holidays, which I haven't bothered to implement yet. Which is great: we know we're back where we started, despite the fact that I rejiggered stuff around on the inside.

So let's go ahead and do the same for specific holidays, which shouldn't be too difficult, as long as I can remember how to spell everything right. This one just has a date. We use this to implement the interface, and it doesn't implement Equals yet, so let's go ahead and make it do that. Again, this is really just the functionality we had down here, moved into a method. I think I left out a parenthesis somewhere. We can now get rid of this code, because it's covered by the generic loop.

So we're back to where we started. But the previous functionality, which had been implemented a completely different way, has been refactored onto this new interface, and all of the tests I wrote still pass. They still run the same way; they still call the same methods with the same fields and arguments. Everything happening on the back end is now completely different, but I know my functionality didn't change. I know it still works the way it originally worked, because my tests still pass. So were I to submit this change as a pull request, a reviewer who has never seen this totally new back-end implementation before still has proof that it works, because my tests say it does.

That said, let's go ahead and implement the new feature using this new interface that I made. This should be fairly simple, and now it's entirely self-contained.
So I don't have to muck around with is-holiday anymore, or mess with other people's code by adding new fields to the struct. In fact, I can go ahead and get rid of all those fields except the generic holidays slice. So let's do this: recurring holiday, which has a month and a day. It needs an Equals method. And then I just append it to that slice, like so. Now that I've refactored onto this interface, I only need to create the new methods. I don't have to change is-holiday, and I don't have to change the original struct type. And our tests should pass.

So I wrote a new test to drive out the novel functionality, and I didn't have to change the pre-existing tests, despite the fact that I completely reimplemented the back end. All the tests pass, so I know all the functionality still works. Those three different features could have been written by three different people at three different times, people who didn't even know each other or know why or how those tests were implemented. But the fact that they were implemented using TDD proves that the functionality, the thing I actually want, still exists and still works the way I want it to.

Now, this example was obviously a very simplistic implementation, so let's go ahead and take a look at some real-world examples. Like I said, I'm a Kubernetes contributor and developer, so that's what I'm familiar with. If you're not familiar with Kubernetes, it's an open source container orchestration platform: it runs containers for you, in units called pods, so that you can run infrastructure in a containerized manner. The test we're going to look at is a very basic one: can I create a pod and update it? This is a version 1.9 test, so in its original form it's probably around four years old, but it covers a very basic piece of the functionality of Kubernetes.
It still runs in our CI every time anybody submits a pull request, and we know our pods still work the way they always have because this test passes. Right now we're in the middle of ripping out the entire containerization technology that Kubernetes uses to run these pods, but this test still passes, and it will continue to pass. That's why we have faith that it still works.

So: we make a pod. We create a pod spec that has some fields in it. We sync the pod spec, which will eventually cause a pod to be created in the background. We then use the label we put on it to find the pod. We expect that it comes back, or rather, we expect that it doesn't error, and we expect a single pod to come back. We then update a field in it with a new value, and when it comes back, we expect that the pod is okay and has the new value. You can see how this is a similar, very basic kind of test: I do a thing, and I expect the value that comes back to be what I set it to, which is just like the tests we were writing in our calendar application. This is obviously more complicated, because it's running containers and such in the background, but the core idea is the same: I interact with this thing using the interface it defines, and I expect a certain outcome to come back.

Now, like I said, we're currently in the process of rejiggering the containerization technology. What actually happens in the back end can change: how we run that pod, where we run that pod, and so on. But this itself, the idea that when I request a pod be created, it comes back, still works, and I know it will keep working because this test will continue to pass. Then we have a similar test, also from 1.9, so also four years old at this point, where we create a pod and then hook it up to a service, which is how routing works internally in Kubernetes.
So we create a pod, connect it to a service, and then we expect that the environment variables the service exposes inside the pod, the ones we would use to receive an incoming connection, are populated. Again, the implementation of how this happens in the back end has changed since this test was written; I know the entire routing infrastructure of Kubernetes has been rewritten. How this actually happens has entirely changed, but this test has not, because the observed functionality of the feature still works the same. So this test continues to be good. And these tests were written hand in hand by the people who originally implemented this functionality, so that when I submit a pull request that changes how it works, this is proof that my change works, my new feature works, and I didn't break anything that existed previously.

Okay, so let me stop the screen share and go back here. That's my rundown, a very basic tutorial on how you can use TDD to drive functionality for open source projects. Now I'd like to take a moment to open the floor up to questions. I have our helpful moderator, Chris, and I've been assured that I will be able to see or hear your questions.

So far, we've had two questions. One was about sharing the link to the presentation. I've noted that in the slide handout there should be a widget at the bottom to download the slides, but I'm not sure if you have any extra materials beyond that, Jonathan. The widget... maybe it's only available to attendees. I'm sorry, I'm not the most familiar with this technology. I can tell you I will make an effort to make the slides available, either on the schedule or, failing that, I'll just upload them to the GitHub repo. If that doesn't work for getting you the slides, we could do it afterwards; one of the moderators just said that we could do it afterwards. Okay. Yeah.
So I will make all available efforts to make sure that these slides make it out to you some way. Thank you for that. Okay, on to some of the remaining questions.

So: how do you test functions working with databases? That can be a more complicated question. The answer is that you have multiple kinds of tests that test multiple kinds of features. Going back to my holiday calendar example: let's say that rather than a simple struct that exists in memory, we're storing a very large number of holidays in a database on disk somewhere. From a unit test perspective, and the tests I was writing were unit tests, I wouldn't have a single unit of functionality anymore. I would have the middleware layer, where all that calendar logic lives, and then a back end, the database itself. Now that we're implementing a more complicated application, we need more complicated tests to prove that functionality exists, and I would do that in two ways.

The unit tests, the tests I was writing here, would continue to look pretty much the same as they do. The difference is that rather than storing that state locally in a struct, I would have a database that contains that data, and the interface between the two would be faked out. The unit tests, like the tests I wrote, don't actually care how the data is stored; they're checking the application logic of the July 4th holiday, and those would continue to operate the same. The data being tested against in the unit tests wouldn't actually go all the way to a real live database, because that's not what I'm trying to test with a unit test. I'm not trying to test the functionality of the database.
I'm just trying to test that when I set July 4th as a holiday, the logic I wrote in my calendar struct itself works. So I would fake out the interface between the code I wrote here, the calendar package itself, and the database.

Now, I want to make sure that my application as a whole still works, though. How would I do that? That's where integration tests come in. I wrote unit tests that prove my calendar unit works the way I expect it to; that one piece works. But I need to make sure that the entire integrated application as a whole, the unit with the calendar logic plus the underlying database, functions correctly together. So I would write an integration test that actually stands the whole thing up, maybe with some fancy fake-out stuff again, and then asserts from the front end that when I set July 4th as a holiday, that percolates through until it ends up in the database, and when I ask whether July 4th is a holiday, the answer comes back up through the whole integrated application. It's multiple layers of testing.

In Kubernetes, an actual open source project, the testing itself is a project unto itself. Kubernetes has multiple suites of tests: unit tests that test the individual libraries or packages, like the one I just showed you; integration tests that ask, now that I have these two pieces, how do they work together; and end-to-end tests that stand up an entire system, push its buttons, and make sure the functionality of the system as a whole is what we expect. Each of these test suites is further insurance that when we make a change, we didn't break anything. If a test fails, we generally know what broke, where it broke, and where to go fix it. So the more tests, and the more kinds of tests, the more coverage you have. It's just another tool to make sure that you didn't break anything.
Chris, do you just want to curate the questions for me? I'm not really sure; I've got stuff coming in from multiple places. Yeah, sure. So the next question is: how much time do you think TDD normally adds to overall development time?

So, that's not really a straightforward question. You are correct that if I simply wasn't writing tests, it would take me a lot less time to just write the source code and ship it out. I'd like to go back, though, to these org charts I made up. The problem is that if a purple person writes some functionality and ships it, and then some indeterminate time later a green person needs to make a change to that thing, they don't necessarily know how it works, or what "working" even means. If I make a change to it, how do I know the original functionality still works? That distributed responsibility and distributed functionality really needs test cases to even work in the first place. Otherwise, how is a person down the line, who didn't develop the code, going to know it works, or how it's supposed to work? So I would argue that in the long term, especially for open source projects, which are inherently distributed team-wise, this really saves a lot of time.

Yeah. There was a similar question asking whether project management would agree with how much time you'd spend on this, versus just starting to code first and writing the tests later. So I would also argue that, although it looks very cumbersome and curmudgeonly, and I know this is a big hurdle to get over, if you code in this style, it eventually becomes second nature to write the tests, especially beforehand, which I think is also very important. If you write the source code first and then backfill the tests later, you're putting the cart before the horse.
You're pre-confirming your pre-existing notions of how the thing is supposed to work, and you can get blindsided on that later because you didn't think to put in some weird test case. You already wrote the happy path, and maybe some weird sad path is going to come back to blindside you later. But if you write the tests first and use them to drive out your functionality, you know it works, because you wrote the test that describes the functionality and then, and only then, went and implemented it. You can use that step-by-step process to formally describe what the software itself is doing. In my experience, this has saved huge amounts of time and development cost, just because when something breaks, we generally know why it broke, where it broke, and what we can do to fix it. The alternative is running fast, writing a whole bunch of source code, and then six months down the line, or even just the next time a different person has to pick up that piece of code, it breaks and we don't know why it broke or how it broke or who broke it.

Yeah. Makes sense. The next couple of questions are about code coverage. Aside from test-driven development, how much of your time would you focus on work that increases code coverage, and do you think there's a good percentage of code coverage to shoot for?

So, in a perfect world, obviously, code coverage would be 100%. But I don't know how useful code coverage is as a metric in real development, because I know I could very easily write trivial tests that give me 100% code coverage and test nothing. I could also write tests that are very helpful and very useful but don't really increase that number. This is definitely something I've had real experience with in the real world. So I actually don't put too much stock in code coverage, in just some number on my project that says, oh, you know, you're 75% covered.
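To make the "100% coverage, tests nothing" point concrete, here is a small stand-in example (not from the talk): the coverage-only "test" below executes every branch of the function, so a line-coverage tool reports full coverage, yet it asserts nothing and would pass even if the function were completely wrong.

```go
package main

import "fmt"

// IsLeapYear is a small example function used to illustrate the coverage
// point; it is not part of the talk's calendar code.
func IsLeapYear(y int) bool {
	if y%400 == 0 {
		return true
	}
	if y%100 == 0 {
		return false
	}
	return y%4 == 0
}

// coverageOnly drives every line of IsLeapYear without checking any result.
// A coverage tool counts this as 100% covered; it proves nothing.
func coverageOnly() {
	IsLeapYear(2000) // hits the y%400 branch
	IsLeapYear(1900) // hits the y%100 branch
	IsLeapYear(2020) // hits the y%4 branch
}

func main() {
	coverageOnly()
	// A behavioral test asserts on outcomes instead:
	fmt.Println(IsLeapYear(2000), IsLeapYear(1900), IsLeapYear(2020)) // true false true
}
```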
Okay, but what are the tests actually doing? What functionality are they actually driving out of the product? I would focus a lot more on the behavior of the tests rather than some static number saying you have this many lines covered. So I don't actually put much stock in code coverage metrics.

Do you think there's a good percentage or a bad percentage? Like, what if I'm shooting around 35%, would you try to dedicate some time to improving that? I feel that if you've gotten to the point where your number is something very low like that, you really need to take a step back and reassess how you're doing development to begin with. Like I said, I really don't like writing the source code and then going back and backfilling the tests. I feel like that's a very bad habit, and it's going to come back to bite you, because you wrote the source code and you think you covered all the cases in the tests, but you don't really know. How do you know, unless you wrote the tests first to drive out the functionality, and only implemented features whose existence you proved you needed? That reminds me of a joke that's semi-serious, but also actually completely serious: any line of code that you can delete without making a test fail should be deleted, because it's very clearly not required; otherwise there'd be a test for it. Like I said, I'm only half joking with that. It's a pretty succinct description of my philosophy.

Related to that philosophy, we have a question about writing tests after you already have the code, which I imagine is something you'd see a lot in Kubernetes. When you're trying to create the tests afterwards, how do you avoid writing the code twice, as in just confirming what's already in the code?
So I would generally look at the code as little as possible when writing the tests. Especially if I'm writing a unit test, I like to think of the package I'm testing as a black box. I wasn't really very strict about this in the coding exercise I did just now, but the idea is: the things that are public, those are the buttons on this black box that I can push. And it probably calls out to some other library; those are the things the black box goes and does. The interior functioning of the black box, I'm not really supposed to know about or care about.

So really harp on what's publicly exposed and what's not, and try to think only about: okay, set-day-of-the-week-holiday, that's a publicly exposed method. What do I think that method should do? Don't even go look at the implementation. Just think about what the method should do, and go write a test that tests that. Maybe it'll blow up, maybe it won't. But if it does blow up, if it doesn't work the way I expected, doesn't that really say something about the method itself? Maybe it's not even an implementation change that's needed. Maybe it's a management of expectations that needs to change. Maybe the method has a bad name; maybe it should be called something else.

So I would avoid writing the code twice by literally not paying attention to the code. And that goes back to my original philosophy: the tests are the formal description of what the software should be doing, and you shouldn't need to know the implementation of the software to know what it should be doing. You know this package should do a certain thing, and you presumably know what that is, even without looking at the implementation itself. Awesome. Somewhat related:
Somewhat related: what's your normal plan of action when you get into a repository that has a bunch of feature code but might be lacking proper unit tests? So that's actually a situation I'm in right now. I'm working on an operator SDK that was written in that fashion; some portions of it are well unit tested, but other large portions are not. And I mean, you've got to make do with what you have. I'm currently on an extensive rewrite project to implement unit tests for the parts that aren't tested, and like I said, the process I just described is what I'm doing. I'm going in: okay, there's this package, it should do A, B, and C, so I'm going to write a test that proves it does A, B, and C, and maybe that'll break stuff, maybe it won't. And you can use that process to drive out refactoring, especially if a package was written without tests. If they just wrote the code and didn't bother writing tests, it's often impossible to actually write tests for that code simply because of the way it's structured; maybe it has a bunch of private variables, maybe it has a whole bunch of global variables. So that can actually help drive out a refactoring: the functionality of the code doesn't change, but you refactor the package so it's testable. And that's going to make it more reliable and probably easier to understand; instead of a whole bunch of spaghetti code, you've got neatly refactored little packages that connect to each other instead of just being a whole bunch of mush. So yeah, I think that can actually be a very helpful exercise: backfilling tests and learning from that to help refactor the code itself. Awesome. We have just a couple of questions left in the presentation. So, next question: when do you write the functional tests, and when do you write the unit tests? Or are you writing them at the same time?
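Before moving on, here is a hedged sketch of the refactoring pattern described above: package-level globals that make code untestable get pulled into a small type the test can construct and throw away. The `Registry` name and API are hypothetical, not from operator-sdk.

```go
package main

import "fmt"

// Before (untestable sketch): package-level state plus a function that
// reads it directly, so a test can't control, isolate, or reset it:
//
//	var registry = map[string]string{}
//	func Lookup(key string) string { return registry[key] }

// After: the state is owned by a small type that each test can build
// fresh, instead of being reached for through a global.
type Registry struct {
	entries map[string]string
}

func NewRegistry() *Registry {
	return &Registry{entries: make(map[string]string)}
}

func (r *Registry) Set(key, value string) { r.entries[key] = value }

// Lookup returns the stored value, or "" when the key is absent.
func (r *Registry) Lookup(key string) string { return r.entries[key] }

func main() {
	// Each test builds its own isolated Registry; no shared mutable globals.
	r := NewRegistry()
	r.Set("operator", "sdk")
	fmt.Println(r.Lookup("operator")) // sdk
}
```

The behavior is unchanged; only the shape moved, which is exactly what makes the backfilled tests possible to write.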
I'm not quite sure what you mean by functional tests; I'm assuming you mean something like end-to-end tests, where the whole thing functions at once. I generally start off with the unit tests. Say I'm adding a new feature, and I have a front-end part and a back-end part; then a functional test would be making sure the whole thing works all the way through. Generally, like I said, I would start with unit tests: start working on either Part A or Part B, and drive out that functionality by writing the tests first. And even if I write a whole bunch of good tests and use them to drive out the functionality of Part A, the feature doesn't actually do anything yet, which is fine. Just because you aren't exposing novel functionality in the product as a whole doesn't mean you can't write tests that are helpful. And it's this bottom-up approach that I think is really what provides the faith: if you go bottom-up in Part A, okay, I know Part A works because I wrote all these unit tests, even though it doesn't actually do anything in the whole scheme of things yet. Then I go do Part B, same thing, bottom-up, make sure Part B works the way I think Part B is supposed to. And then once I connect Part A and Part B, I write functional, end-to-end, or integration tests and say, okay, now when I press this button on the front of Part A, Part A is going to go do stuff, and then press this button on Part B, and that's going to go do stuff, and then return all the way back, and it works the way I expected it to. Awesome. This might be kind of answered by test-driven development already, but at what point should a developer who is refactoring the original code modify the test suite associated with that code? So this is existing code with test cases for it, changed at the same time? Well, ideally never.
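The bottom-up Part A / Part B flow described above might look like this in Go; all names here are hypothetical. Each part earns its own unit-level checks first, and the integration step only wires the already-trusted pieces together.

```go
package main

import (
	"fmt"
	"strings"
)

// Part A: a unit-testable normalizer. Its unit tests can prove it works
// long before it is wired into anything user-visible.
func Normalize(name string) string {
	return strings.ToLower(strings.TrimSpace(name))
}

// Part B: a unit-testable store, trusted on its own the same way.
type Store struct{ items map[string]bool }

func NewStore() *Store { return &Store{items: make(map[string]bool)} }

func (s *Store) Add(key string)      { s.items[key] = true }
func (s *Store) Has(key string) bool { return s.items[key] }

// The integration step: press the button on the front of Part A, hand
// the result to Part B, and check the whole round trip end to end.
func Register(s *Store, rawName string) {
	s.Add(Normalize(rawName))
}

func main() {
	s := NewStore()
	Register(s, "  Kubernetes  ")
	fmt.Println(s.Has("kubernetes")) // true: the whole path works as expected
}
```

Because `Normalize` and `Store` were each driven out bottom-up, a failure in the end-to-end check points squarely at the wiring in `Register` rather than at either part.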
I mean, if you are truly only refactoring the code, the functionality of the code shouldn't change, so the tests shouldn't have to change. So if you find yourself refactoring the code and thinking, oh, I really need to change this test, that's the point where I would generally say something is wrong here. It might even be that the tests are the problem and weren't actually written correctly, but that means you have a pre-existing problem that you need to think about and consider independently of whatever you were originally doing. Generally it tells me that someone somewhere, maybe me, has screwed up if I think I need to change the tests while I'm just refactoring the code. And I think we are just about out of time, so thank you all for coming. I will make sure you can find the slides online. But yeah, thanks.