All right, good afternoon. Thanks for coming to my workshop today. Hope you'll enjoy it. I hope I can keep you entertained enough to stay out of the post-lunch food coma that I have going on too. So I've noticed in the room, this side's darker, this side's brighter, and so there's more glare on the screen on this side, so it might be a little harder to read the code on the screen, but if you like being in light, that's great. I'm gonna be sitting for a lot of this and you won't see me past the podium, which is either a downside or a bonus. I'm not that pretty to look at, so you might wanna sit over there. So I'll give you a minute to move around if you want. There are instructions for the lab up on the screen. We won't need that for a little while, so you have some time to get that going, and if the Wi-Fi goes down, we'll have to live with that, I guess, sorry. So I'd just like to get a sense. How many of you consider yourselves relatively new Ruby programmers, like maybe less than a year? A few of you, and everyone else fairly experienced. I just kinda wanna know how much explanation we need to do on some of the stuff that we go through. So I'll try to explain things as we go. If you have questions, stop and ask, that'd be great. All right, so how many of you have had to deal with legacy code before? A lot of you. How many of you have written legacy code yourself? That'd be me too. I think we all have at some point. So this workshop is based on a code kata called the Gilded Rose Kata. You can look it up online. It's a really cool kata, I love doing it. It's plain Ruby code. I'm not gonna get into Rails here. You know, it's RailsConf, I know. But it's an exercise that's bite-sized enough that we can run with it during the workshop a little bit. We probably won't get done. This is an hour-and-a-half workshop. The schedule posted around the conference is a little bit unclear, but we actually end at 3:20.
So just before the last talk slot. I'm happy to stay around after and answer questions. There's nobody else in the room after us, so we could stay late if you want. But I'm gonna plan to wrap up at 3:20, just so you can get to that last talk if you were planning to. And if you wanna hang around, I'd be happy to do that, although my colleague Adam Cuppy is speaking right after me, and I kinda wanna see his talk, so, but I can see it some other time if I have to. So anyway, so the Gilded Rose Kata. We're starting a new job today, and we're working for the Gilded Rose Inn, and we're working on the inventory management system at the Inn. If you play World of Warcraft, you'll recognize a lot of the references in this code. You'll probably have fun with it. So in our system, we have a bunch of items that we stock and we sell, and they all have a sell-in value: the number of days that we have to sell the item. All the items have a quality, and that quality degrades over time. So it's how valuable the item is. And our system runs kind of like a batch process at the end of the day, and updates both values for all of the items in the inventory, okay? There is a spec. You do not need to read this wall of text. Just know that there is one. We have no clue if we can trust it. Documentation lies, right? You come onto a system where the spec was written months or years ago, and the code may or may not reflect it. So this might give us a bit of an idea what the system's about and some of the rules in the system, but we can't trust it. We don't know if it's true. We don't know if the code matches it. So we can use it as a rough guide, but you don't want to put too much reliance on it, because somebody's going to pull that out from under you at the wrong time. We have a constraint. There's a troll that works on the Item class, and he does not share ownership of his code. The actual kata description says that if you change this code, he will one-shot you and you're dead.
So we cannot touch the Item class or the items property. So that's a constraint. There's no collective code ownership on this team. This is not a workshop about how to deal with problem people on your team, so we won't cover that today, much as we probably would in a real team, okay? Now, with legacy code, if it's running and working and you don't need to add features, there's no reason to touch it. So we're done, we can go. But we have a new task. There's a new kind of item we're going to support called conjured items, and conjured items degrade in quality twice as fast as normal items. So that's what's special about them, and that's what we have to support in our system. So the way I'm going to structure this workshop, we'll do a little bit of presentation and demo. This is probably the most I'll talk at once in the whole workshop. We're going to do some mob programming, and we're going to do some small-group, pair, or individual work. I'm going to really kind of play it by ear between those two. If the mob programming is working and you guys are liking it, we'll keep doing it. If it's not working and you want to kind of work on your own, that's cool too. If you want to just take this kata, take the code and run with it for a while on your own and ask questions, I'm cool with that too. It's a really fun kata to work on, I love playing with it. So, but we're going to start with mob programming. And so, mob programming, I'm not sure if you've heard of it. A guy by the name of Woody Zuill, I'm not sure I'm pronouncing his name right, has been popularizing it a little bit. And the idea is, so in pair programming, you have a driver role and a navigator role. So the person with the keyboard and the mouse is driving and the other person is navigating strategic direction, that kind of thing. With mob programming, you have one driver and everyone else is either a navigator or a researcher.
So if you need to look up an API call, somebody can have their computer and look it up. But there's one person actually typing the code, and that person is mostly typing what the other people are telling them to type. Oh, let's do this and let's do that. And so I'm going to be the driver for this, just because having to get different people up here working on my setup is probably not gonna work well. And you're all gonna tell me which direction to go. I'm gonna be giving some guidance because I have a few points I'd like to get across in this, but it's gonna be pretty organic and really driven by the ideas that you come up with. I am gonna try to steer things in certain directions. So how do we safely change legacy code? Anytime you have to touch legacy code, that's your question. It's like, I have to change this stuff, but this stuff's running in production. It's meeting somebody's needs. It's maybe making my business money, or it's helping people, whatever your application is doing. For all that legacy code is gross and nasty, it is working, mostly. It is doing something that is valuable for somebody, and you can't afford to break that, okay? So as much as you wanna dis on the code when you look at it, remember that it's actually providing value. There is something there and you have to respect that. And so a lot of people just say, hey, I'm just gonna rewrite this mess. There's a lot of blood and sweat and tears that went into that code and a lot of learned lessons that are hidden away in the mess, and when you do a rewrite, you forget about that stuff. So we need to respect that this code is actually doing something. We need to be safe about changing it. If you work with legacy code, you need this book by Michael Feathers. It's called Working Effectively with Legacy Code. It's a few years old now but it is gold. It is pure gold. There's so many good techniques in there. So in that book, he gives the legacy code algorithm, which we'll be following.
The first thing you do is you identify change points. So you have a feature you need to add. You need to figure out: what can you change in the code? Where does this change need to happen? And then you need to find places where you can test. The way we safely change legacy code is we get tests around it. I should mention Feathers' definition of legacy code is code without tests. The definition I like more is code that I didn't write, but I think his is a little more politically correct, so we'll go with his. So you need to find test points. Then you need to break dependencies, because a lot of times the stuff you need to test is too tightly coupled to everything else in the system and you really can't get at it. Then you write the tests, then you make the changes and refactor the code. So I actually tweeted at Michael the other day, because I was curious how he thought this book applied to something like Ruby, because in Ruby you have things like metaprogramming and monkey patching, and he was writing more about C++ and Java, which don't have those tools. And his response was: you put more effort into characterization testing than dependency breaking; the latter is trivial. So in Ruby, the dependency breaking is less onerous because you have power tools, but you still need to do some of it; the characterization testing that we'll be talking about in a minute is the more important part. So I wanna thank Michael for responding to me, that was cool. I had a slide up earlier; here's the code for the workshop. You can clone that if you want to, if you haven't already. A couple of people were having problems with a couple of things, so I actually did just push up a change. I forgot to include rake in the Gemfile. So it's there if you're having a problem with that. But let's get to looking at the code, and before we do that, OSHA requires me to show you this warning. That's the Occupational Safety and Health Administration, if you're not from the States.
I have to show you this warning because the code is really bad. All right, so the first thing we're gonna do is we're gonna do some mob programming and we're gonna write some characterization tests for the code. So I'm gonna sit down now and we'll get into the code and take a look at it. How's that font size in the back? Is it readable? Yeah? Awesome. Okay. And color on white is good? Bigger? Bigger, yep. So unfortunately RubyMine does not respond to command-plus, so I have to do it the hard way. Can do what? Oh, cool. How's that? I wanna be able to get some code on the screen here, so I'm gonna maximize that a little bit. Is that better? Yeah. Okay. So this is the Item class that we're not allowed to touch. How much of it is showing on your screen? That should be all right. Okay, so we just wanna look at it and see what it's about. You see an item has a name, a sell_in, and a quality, just standard attr_accessors, so this is essentially a struct and not much more than that. So like I said, we're not allowed to touch that because of the troll in the corner that we're not dealing with today, but it's there and that's what it does. Let's look at the tests. That's all the tests we have to start with. This is how the kata comes. So it actually calls a method, but I don't see any checks in there. There's no shoulds, no expects, no nothing. So great, legacy code. And here's the code. So there's this GildedRose class. We have our database hard-coded right in the initialize method, that's awesome. And then there's this update_quality method that I will just scroll through and scroll through. So don't try to study that yet. It's nasty. I call this wedgie code because of the shape of it. All the wedges and triangles, whatever you want to call it. Okay, so that's what we're working on today. So the first thing we need to do according to the legacy code algorithm is we need to identify change points.
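A minimal sketch of what that Item class amounts to. The attribute names follow the kata's description; the real file may differ slightly, but it's essentially this:

```ruby
# Sketch of the untouchable Item class: a name, a sell_in, a quality,
# standard attr_accessors, and nothing else -- essentially a struct.
class Item
  attr_accessor :name, :sell_in, :quality

  def initialize(name, sell_in, quality)
    @name = name
    @sell_in = sell_in
    @quality = quality
  end
end
```

Because it's all plain accessors, anything holding a reference to an item can read and mutate it freely, which is exactly how update_quality works.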
So just kind of skimming through this code, what part does it look like we might have to change to be able to support conjured items? update_quality, that looks like a pretty good change point. Notice in our items collection there is actually a conjured item already. We're gonna guess that that's probably not being handled correctly yet. We don't know that, but probably not. But there is actually a conjured item in the database for us to work with, which is good. But update_quality is our change point. So now we have to identify a test point. So what can we test about this method? There are no input parameters. We scroll down to the bottom. There's not really a return value. So what can we test about this method? What's our test point? The items property, yeah. Pretty much this whole method operates by side effects on the items collection. Okay, there is no getter for the items collection. So how are we gonna get at that? There are at least three things I can think of. So the first obvious one is, well, let's just add an attr_reader, right? Just drop that in up top like that. The problem is we just changed production code and we have no tests for it. Now sometimes that's what you have to do with legacy code, because there just is not another option. But this is Ruby and we have power tools. So let's not do that. I don't wanna touch the production code until we have a safety net to work with. Yeah, instance_variable_get is the second way we can do it. And the third way, so I'm just gonna make a let for the items because that's what we're gonna be working with here. And we'll play with some ideas. So one of the ideas was instance_variable_get, so that we can do, if I could spell, that would be great. So we can do that. And there's one other way we could do this that we won't go with, but we could actually reopen Item and we could monkey patch it only for the tests. We could do that.
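That second option looks roughly like this. The GildedRose and Item here are stand-ins with an invented one-item inventory, just so the sketch runs on its own; the real classes are the kata's:

```ruby
# Stand-in for the kata's Item; the real class is equivalent to a struct.
Item = Struct.new(:name, :sell_in, :quality)

# Stand-in GildedRose with a hard-coded inventory, as in the kata.
class GildedRose
  def initialize
    @items = [Item.new("+5 Dexterity Vest", 10, 20)]
  end
end

rose = GildedRose.new
# Reach the private collection without adding an attr_reader,
# so no production code changes before we have a safety net.
items = rose.instance_variable_get(:@items)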
So Ruby gives us some tools, whereas in other languages, we'd probably be forced to change some production code without the test safety net. So for today, we're gonna go with the instance_variable_get, because I think that's probably the lowest-impact one. But now we have the items. So these are what Michael Feathers calls sense points: things that you can sense about the application that you can actually write some assertions about. So that's great. So what's the first thing we should maybe test on this code? We don't wanna study it; we don't know what it does yet. It's unfamiliar. All we're trying to do is get a safety net. So we wanna go really quick and dirty. The kinds of tests we're gonna write here are not the kinds of tests you've been taught to write your whole career. These are gonna be ugly and messy and you're gonna throw them away. Coraline Ada Ehmke did a great talk at Mountain West Ruby, and part of her talk was about some of the testing you do on legacy code: you can write nasty, ugly tests, and once you get the code refactored so you can write better tests, then you throw the ugly ones away. They're just scaffolding to get you where you need to be. So we're probably gonna do some anti-patterns and some gross stuff with the tests, and that's okay, because it gets us to where we need to be, all right? So what might we wanna test first? That's an idea, okay? Whether or not the items change in quality is good. I wanna go even simpler than that. Why don't we even just test that we can actually get the information out of the items in the first place? So change nothing, run nothing, just make sure that we can assert things about the items as they stand today. Does that sound okay? All right, so a lot of this talk is gonna be, I'm gonna be kind of insisting we go smaller, smaller baby steps, because once you know how to go really small, then you can handle any situation.
Most of the time you won't go that small, but when you need to, you really can. That's a point that Kent Beck makes a lot when he talks about programming: sometimes you really need to take really small baby steps just to get through a tricky section, and then you can take bigger steps later. All right, so all we're gonna do is we're gonna try to write a quick spec about the items. So I'm gonna actually get rid of the test that's there, since it's a complete no-op, assuming I learned how to drive a keyboard today. So I'm going to do, I'll use specify here. So "after no updates", and these are gonna be ugly RSpec tests, and that's okay. I'm using RSpec. I actually have Minitest tests in the repo if you prefer Minitest. I have a specific reason for using RSpec for this that we'll get to in a few minutes, but I switch between the two a lot. I haven't decided which one I like the best yet. So let's just assert something about, like, the first item, and that its quality hasn't changed. So let me split-screen here real quick so that we can see both. I'm using RubyMine because I like the refactoring tools in it; you can use whatever editor you like. All right, so let's just take the first item and make sure that it's, I have to remember the order: sell_in is first, then quality. So let's write a couple of expects. They're gonna be ugly and that's okay, because we're gonna change them in a minute anyway. I got these backwards, didn't I? It's sell_in first and quality second, right? All right, so let's go ahead and run that spec. Okay, so it passes; we can actually sense the items now. Now, you can imagine what these tests are gonna look like if we keep writing them like this, and so instead of continuing down this road and spending our whole workshop on it, I wanna teach you about a different style of testing that works really, really well for this kind of legacy code.
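Stripped of the RSpec ceremony, that first characterization check amounts to this. The item values are the kata's first inventory entry as shown earlier; treat the Struct as a stand-in:

```ruby
# Stand-in for the kata's Item class.
Item = Struct.new(:name, :sell_in, :quality)

# Illustrative inventory entry matching the kata's first item.
items = [Item.new("+5 Dexterity Vest", 10, 20)]

# "After no updates": run nothing, just pin down what's there today.
# Literal magic numbers are ugly on purpose -- these are throwaways.
raise "sell_in drifted" unless items.first.sell_in == 10
raise "quality drifted" unless items.first.quality == 20
```

The point isn't that 10 and 20 are correct; it's that they don't change when we haven't run anything.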
Do we actually care about these numbers in this test? Do they matter at all right now? Do we care that they're right? Not at all; we just care that we don't change them. And so there's a style of testing, it's actually an anti-pattern, but for legacy code it works great. So let me just flip back to Keynote real quick, one second, there we go. All right, so it's called golden master testing, and it's also known as the guru-checks-output anti-pattern. The idea is that you get some output from a system, and you have a human look it over and say, yep, that looks good, and you save that off. The next time you run the test, you get the output again and you compare the two. If they're the same, great, the test passed; if they're different, then the test failed and the human has to look at the output again. Now, the reason this is an anti-pattern is that the time you most need this test to be right is probably when you're making a change that you need to get into production to fix a critical bug, and at that point the humans involved are gonna be the most stressed they're ever gonna be, and so to have them manually check the output at that time is a really bad idea. It's a recipe for disaster. That's why this is an anti-pattern. But in this case, we're working with legacy code: we don't care if the output's right, we just care that we don't change it, and so golden master testing is awesome. We don't even have to look at the output to see if it's right; we just need to save it off as, this is the master, and if I change it, something broke, I need to back up and go again. So we're gonna use this, and Katrina Owen wrote a gem called Approvals for this. I have to give a huge shout-out to Katrina, because I did a run-through of this lab at a community college class last week, and there were just some things with the command-line interface that I thought could be improved.
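The golden-master mechanic itself is tiny. Here's a stdlib-only sketch of the idea; the method names, file naming, and directory handling are invented for illustration, and the Approvals gem does all of this for you with a much nicer interface:

```ruby
require "fileutils"
require "tmpdir"

# Compare current output against a saved "approved" copy.
# Returns true on a match; on a mismatch (or on the first ever run)
# it writes a "received" copy so a human can inspect and approve it.
def golden_master_match?(name, output, dir: Dir.tmpdir)
  approved = File.join(dir, "#{name}.approved.txt")
  received = File.join(dir, "#{name}.received.txt")
  if File.exist?(approved) && File.read(approved) == output
    true
  else
    File.write(received, output)
    false
  end
end

# The guru "approves" by promoting the received copy to approved.
def approve!(name, dir: Dir.tmpdir)
  FileUtils.mv(File.join(dir, "#{name}.received.txt"),
               File.join(dir, "#{name}.approved.txt"))
end
```

First run fails because there is no master yet; a human approves the received copy; from then on the test passes until the output changes.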
So I spent Friday, submitted like four pull requests to the gem, this is like last Friday, and that night she merged all four of them, and I thanked her for it, and she said, that'll come in real handy in my workshop, and she goes, oh, you're using this next week? I'll cut a release for you right now. So Katrina is awesome. Thank you, Katrina, if you're watching this. So we're gonna use her gem because it's really cool. So let's keep mob programming a little bit, and we're gonna bring in Approvals. It's already in the Gemfile, and it actually interfaces really nicely with RSpec, so we're just gonna require it in, okay? And I'm gonna change this "after zero updates" test. Actually, I'll keep it running, but I'm gonna just throw an "old" on the end of its name, because I wanna write some new stuff using Approvals. So we're gonna specify "after zero updates" do, and so Approvals has a method called verify; that's kind of the main method that you use, and it takes a block. All we're gonna do is verify the items, and what it's gonna do is save some representation of the items out to a file on disk, and remember, that is the golden master, and that's all we have to do to use this. So I'm gonna go ahead and run this test, and it failed and, sorry, I wanted to blow that up. I do know how to drive RubyMine, I really do. Sorry, this font is really small. That is not right. I should have been getting a different, okay, these are just warnings, I guess.
So here's the actual error. It's an approval error, because I don't actually have a golden master copy yet, okay, and that's expected the first time. So I'm gonna flip over to, not that, sorry, there we go, to the shell, and what we can do is we can run bundle exec approvals verify, and it's showing me a diff of the current approved copy and the new updated copy, or, as she calls it, the received copy. It's added a bunch of lines, and we can say whether we wanna approve that or not. So we're being the guru who's checking the output right now, and we're gonna say, yep, looks good, yes, go. Yep, would that be better here? Okay, yeah, that's my turn. That's okay, I can flip it real quick. Is there a profile here? Yeah, there we go, light background. Close it and reopen it, okay. There we go, is that better? Yeah, those colors aren't great, are they? All right, we won't be spending much time in the shell. So anyway, okay, oh yeah, we accepted that one, so it's good, and we go back to RubyMine and we rerun our test, and it's still, oh, that's interesting. So I reran the test after we accepted the golden master and we're still getting an approval error; it's not matching. See if I can grow the text on here, cool. Okay, so let's go back and run approvals verify again. So the difference is these object IDs in here. Every time you run this, you're gonna get different object IDs for the items, so we actually need to have our golden master output be a little bit less variable than this. We need to do a little bit more work here, so I'm gonna say no, I don't wanna approve that, and we'll flip back to RubyMine and we'll just tweak our test a little bit. So what I'm gonna do, zoom this particular one out, let me get rid of some stuff here.
Okay, so instead of just getting the items, I'm gonna actually map them and produce a string representation instead, and this will be really simple. We'll make it a little bit human-readable so that when we are actually looking at these, they make some sense. I will push this up to GitHub so you can pull it down when we start working on this on our own, by the way, so you don't have to try to type everything I'm typing. All right, so we're just gonna do a little string interpolation and get a string representation of each item in an array, and let Approvals work with that instead. So run that again, and again we're getting an approval error, and we run our verify. All right, so now this time we've got a little bit cleaner output that's not likely to change every time we run the test. So let's approve that as our new golden master, and we rerun the test. Why are you still red? Oh, undefined method sell_in, what did I do wrong? What's that? Oh, that's right too. Right, we have our new test, we can get rid of the old one, thank you. There we go, green bar, okay. So now we have a test. It's not actually calling the code at all, but at least we know we can get the items out, and we can run the test repeatedly and it's not failing every time we run it, which is good. That's a good first step. Especially with legacy code, sometimes it can take you a week to get to this point, honestly. So that's cool. So let's do an actual update here. This is gonna get a little repetitive, but we'll fix that in a second. So the actual body is gonna be almost exactly the same, but before we do the verify, we need to actually run our method. All right, so we're gonna update_quality and then verify the items after that, and again, that's gonna use the golden master testing. We get our approval error, we go run our verify, and so now it's comparing "after one update".
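The mapping step looks roughly like this. The exact format string is my own; anything stable and human-readable works, as long as it avoids run-to-run noise like object IDs:

```ruby
# Stand-in for the kata's Item class.
Item = Struct.new(:name, :sell_in, :quality)

items = [Item.new("Aged Brie", 2, 0),
         Item.new("Sulfuras, Hand of Ragnaros", 0, 80)]

# Object IDs change every run; a plain string per item doesn't.
snapshot = items.map do |item|
  "#{item.name}: sell_in=#{item.sell_in}, quality=#{item.quality}"
end
```

Handing that array to the verify block gives the golden master something deterministic and readable to diff against.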
Notice it's getting the subdirectories and test names from RSpec, which is really nice. If you run this in Minitest, you have to provide a couple of extra arguments, which is one of the reasons I'm using RSpec for this. And it's showing the new output. Again, we don't care about what's in here. It doesn't matter if it's right. We just need to know we didn't change it. So I'm gonna approve that, and we'll go back and run our RSpec again, and now we've got a green bar again. Okay, so we could keep doing this. How far should we go? How many of these tests do we need to write? That's one option. In this particular code, though, it keeps decreasing the sell_in day no matter how many times you run it. So yeah, it was a good idea. So his comment was that we run it until the output stops changing. So we run it until basically it gets everything all the way down. And that won't actually work in this case, because sell_in just keeps going down and down. It goes negative and keeps going negative. So we can't do that here. That would be a good thing to try, though. I just don't want to take the time in the workshop to do it right now. Until the values become invalid? Yep. Yeah, they don't actually ever become invalid here. So that will never stop. So I want to introduce you to a little tool that you've probably heard of. It's called SimpleCov, and it does code coverage. What it does is, you run your tests and it tells you how many lines of your code have executed, or which ones have been executed and which ones haven't. So you can run this. I actually have it hooked in so it runs as part of the tests all the time. It produces a directory called coverage that has an index.html you can look at. But RubyMine has a nice integration for this. So I'm actually just gonna run this test with coverage, and it pops up a little coverage window here that I'll try to maximize. And all we really care about is our gilded_rose file. So I'm gonna dive into that. So it's saying 67% of lines covered.
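The SimpleCov wiring is only a couple of lines at the top of the spec helper. The file paths here are assumptions about this repo's layout; the one firm rule is that SimpleCov.start must run before the code under test is loaded, or nothing gets measured:

```ruby
# spec/spec_helper.rb -- hedged sketch; paths are illustrative.
# SimpleCov must start before the code under test is required.
require "simplecov"
SimpleCov.start

require_relative "../lib/gilded_rose"  # hypothetical path to the kata code
```

With this in place, every test run regenerates coverage/index.html as a side effect.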
And if we go back to the code, it has actually turned up the contrast on this. I hope you can see it down the left side. You see these green bars all the way down the left? Are those visible at all out there? Okay. If you scroll down a little bit, there are also some red bars there. So anything that's red is a line of code that has not been covered yet. Okay, so when I ran that test, all of these red lines of code were not covered. It doesn't actually mark the ends and stuff like that. But this gives us a sense of how much of our code has been covered. So at this point, we could say our tests probably aren't quite good enough yet for us to feel really comfortable refactoring. So we should probably keep going. And this is another place where Ruby comes in really nice, because these tests are very, very similar, so I'm gonna do them in a loop. So let's start with, let's go twice, because that's what we're doing. So I'm just gonna refactor the tests now; as long as the bar stays green, we're okay. Jillian did a talk yesterday. How many of you saw Jillian's talk on refactoring late yesterday afternoon? Anybody? She did a great job. And one of the points she made in that talk was that you should only be changing your code or your tests at any one time, never both, because your tests are your safety net for the code and vice versa. So we're gonna change our tests a little bit. We're not gonna touch the code while we do that. So we're gonna just iterate two times. And within that, we're gonna generate specify blocks here. Again, these are not tests you necessarily wanna keep, but they're gonna work for now. Okay, so all I'm doing is I'm gonna repeat this loop twice, and i is gonna start at zero and then be one. And then after that many updates, we run the update_quality method that many times, so zero or one times, and then we run verify afterwards. So this should exactly duplicate the tests that we just had. Let's run them. And not quite.
And the reason, if we look at it, it's another approval error. Couldn't find an "after one updates".approved. So we had called our old one "after one update", because we actually used proper English in ours. We're gonna go with the generated name for now, because again, these are throwaway tests. They're just there to get us to where we need to be. So I'm gonna run my approvals verify again, and I could actually, if I wanted to, go into spec/fixtures, just to be really safe. So I've got this "after one update" and "after one updates". I could actually diff those two files just to make sure they really haven't changed other than the name, and there's no difference. So we're pretty sure we can do this. So now I'm gonna run my verify again. Unfortunately, approvals doesn't actually walk back up the directory tree to the root, so you have to kind of get there yourself. That might be my next pull request to Approvals. All right, so I'm gonna go ahead and accept that, and we'll make sure we're green now, and we are. Okay, so we already know that only doing one update only covers our code about 67%. So we could try two and then three and then four. I'm gonna jump up a little bit. We'll go to five and try that, just to move things along a little bit. Again, we've got approval errors. Let's go verify those. So that's four and three and two, and green bar, and let's rerun with coverage again. Look at our code. So there are still some red bars in here. Just a few, but still a few. So let's try 10 times. So this is really kind of mindless. Just try to get enough coverage on the code to be comfortable with it. So we'll go 10 and we'll go approve everything. So there's five updates and seven and six and eight and nine. Now we're green; run with coverage. So we're at 93% down here, it says. And if we look at our code, there's still a couple of lines of code that are not being covered. So I'm just gonna bump it up. I happen to know that 20 works, so we'll just go ahead and do that.
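In plain-Ruby terms, the looped tests boil down to this pattern: one golden-master snapshot per update count. The GildedRose here is a trivial stand-in whose update_quality just decrements both values, so the sketch runs on its own; the real one is the kata's, with all its wedges:

```ruby
Item = Struct.new(:name, :sell_in, :quality)

# Trivial stand-in for the kata's GildedRose, just so the loop runs.
# The real update_quality is the big wedge of conditionals.
class GildedRose
  attr_reader :items

  def initialize
    @items = [Item.new("Elixir of the Mongoose", 5, 7)]
  end

  def update_quality
    @items.each { |i| i.sell_in -= 1; i.quality -= 1 }
  end
end

# One snapshot per update count: "after 0 updates", "after 1 updates", ...
# In the workshop each snapshot goes through verify; here we just collect them.
snapshots = (0...3).map do |n|
  rose = GildedRose.new
  n.times { rose.update_quality }
  rose.items.map { |i| "#{i.name}: sell_in=#{i.sell_in}, quality=#{i.quality}" }
end
```

Bumping the loop bound from 2 to 5 to 10 to 20 is then a one-character change, which is what makes the "mindless" coverage climb cheap.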
And go approve all those. Okay, and one more time with coverage. So people talk about code coverage and whether it's useful or not, and whether you should shoot for 100% coverage or not. I'm not gonna get into that debate a lot. It can be a handy metric to watch trends, but it shouldn't be the end goal. It should be a tool that you use in service of a more important goal. But now we are 100% covered. So how confident do we feel refactoring this code now, with 100% code coverage? 50%? Anybody else? Okay, so if we're confident in our coverage, if I was to comment out, say, oh, I don't know, this line and this line, that should break something, right? We have 100% code coverage, but there are lines of code here that we can comment out and our tests don't fail. I don't know about you, but that scares me. What's really going on in this case, and I won't dive too deep into it, is that there is an implicit else right here that is not shown in the code, and the else clause of this if statement is not actually ever getting hit. So code coverage is helpful, but it's not 100% reliable either. So you have to be careful with something like that. Okay, but for now I'm gonna call this good enough. So the reason why we're not covering all the lines of code is that our tests are actually coupled to the predefined set of items in our initialize method, and those items in their current state do not exercise all of the possible cases in this code. And so this is actually a dependency that we ultimately want to break, but in order to break that dependency, we need to have some test coverage and we need to change the code. So we don't have a way to inject new items into this test very easily. I guess we could add some to the end of the array, but we're not allowed to touch that array because of the troll. So we have to work within the constraints we have, and so we're gonna have to start changing production code without being 100% confident that we're not gonna break anything. But we're pretty sure.
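That blind spot is easy to reproduce: line coverage can read 100% while a branch is never taken, because an implicit else has no line of its own to mark red. A contrived example, not the kata's code:

```ruby
# If every caller passes q < 50, every *line* below executes, so line
# coverage reports 100% -- but the implicit else (q >= 50, do nothing)
# is a branch no test has exercised.
def bump_quality(q)
  if q < 50
    q += 1
  end
  q
end

bump_quality(10)  # covers every line; the q >= 50 path stays untested
```

Branch coverage (which SimpleCov can also report on newer Rubies) would catch this; plain line coverage does not.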
I mean, these are pretty good tests. They're just not perfect, and I wanted to make sure we pointed that out. Okay, so now it's time to start cleaning up the mess. We have at least enough of a safety net that we're comfortable moving forward. So let's talk about how we're gonna do that a little bit. I need to switch back to Keynote for a minute for that. Okay, so we did that. All right, so I talked a little earlier about rewriting. We do not wanna rewrite. A rewrite is almost never a good idea. If you remember the browser wars back in the day, there was this thing called Netscape. They stopped the world to rewrite their browser, got completely destroyed by IE, and didn't really come back until it was in the form of Firefox, and that took years. They did a stop-the-world rewrite on their code because it was just too crufty, they couldn't work in there anymore, and they died, basically. Joel Spolsky writes about rewrites and how they're a horrible idea. Sometimes there's nothing else you can do, but I wanna try to teach some tools for what to do instead when you can't rewrite. Sandy Metz actually did a talk here last year using this same kata, and she didn't refactor the code. She did an incremental rewrite on it, a piece at a time, test driven. It was great. It was an awesome talk; you need to watch it if you haven't seen it. But it was not actually a strict refactoring, so I wanna go a different direction: let's do a straight-up refactoring and see where we can get to. We may not get through it all today, given the time we have available, but we'll just see how far we can get with it. And we also have to remember we're probably not gonna get the code all the way cleaned up, but let's follow the Boy Scout rule and leave the code cleaner than we found it. We could go hack in this new feature right now, and the code would be worse.
And that's not the point. If you have to keep coming back to this code, make it a little bit better every time, and over time it will get better. The parts of your application you touch the most will become the cleanest parts of the application if you follow this rule. And that's huge. You usually can't stop the world for three weeks to clean all the code up. So make the code a little bit better as you go. We're gonna take baby steps. I talked about that already. We're not gonna boil the ocean. We have to remember we're trying to implement the feature here. We're paying down technical debt, so we're taking longer than it would take to just write the feature in a clean system, because of the mess it's in. But we have a feature to do, and we can't lose sight of that. Now, because of the speed we'll go in the workshop, it'll seem like we're losing sight of it a little bit, but when you're working on your own code, keep in mind the end goal and the Boy Scout rule. So make it somewhat better. It might not be perfect yet. Make it better. Kent Beck has a great suggestion for how to make changes to code: first make the change easy, which might be hard to do, and then make the easy change. That's the approach we're gonna take. When I was first researching this, I actually did a talk on this kata at GoGaRuCo last year, and I found this quote while I was preparing for it, and I had to include it: when you start a new programming job, you have to walk right up to the biggest function in the yard and refactor it in front of everyone. It's kind of like prison advice, right? But that's what we're gonna do. update_quality is the biggest function in the yard, so let's go refactor it, okay? So we're gonna keep going with the mob programming a little bit. Again, if you wanna just kind of run off on the side on your own and play with this, feel free.
Before we start refactoring, I'm actually gonna commit the tests in their current state so that you can work from those. So let's go ahead and do that. Let's keep resizing. I'm actually gonna create a new branch first called railsconf2015, okay? And I need to add some files. I always forget the shortcut for that. And the spec fixtures. Okay, those are all added. I'll go ahead and commit and push, assuming Wi-Fi is working. All right, so you should be able to pull down that railsconf2015 branch if you wanna work on this on your own. For now, we'll just keep going with the mob programming a little bit, and then we'll see if you guys wanna do some pair or small-group work or keep going with the mobbing. Okay, so we have our tests. We really don't need to look at them again for now. Like I said, we wanna stay focused. There are some messes up here, but those aren't the core of the problem. Let's focus on update_quality, since that's where we know we need to make the change. Now, if I look at this code: this is the least I'm ever gonna know about this code, and this is not when I wanna start making huge changes, because I don't understand it yet and I don't wanna study it for three days to figure it out. So what I wanna do is make really simple, brain-dead-seeming changes, just to try to get a sense of the code. It's like a potter working with clay and just kind of feeling the clay a little bit. That's what we're doing with our medium here, which is the code. So we're gonna make really simple changes, and we want some sense of whether we're making the world a better place. So I'm gonna flip over to the terminal for a second. I have a rake task called flog, and it runs a tool by Ryan Davis, who's actually speaking right after this workshop on minitest. He wrote flog, and what it does is basically give you a sense of how ugly your code is.
It runs a complexity analysis on it, which is a big phrase, but it basically tells you how gross your code is. Flog is actually, I believe, used by Code Climate, so part of your Code Climate score is what flog says. So we're gonna run this and just see where we're starting. The overall code base has a score of 180.5, and the update_quality method, as we surmised, is by far the worst at 155.1. So that's our starting point, and we're gonna see how we do against that as we go along. So let's flip back to the code. What's something really, really simple we can do to this code that would make it just a little bit better or a little easier to understand? Yep. Okay, so actually make this an idiomatic Ruby each loop instead of this thing that was ported from C#. But first you said do something with items[i]. Okay, so I think I understand what you're saying. So I'll do a couple of mechanical things. Yeah, no, I understood the language, I just wanna make sure I'm going the direction you're saying. So what he's saying is we need to do something with items[i], like maybe extract it to a variable called item that would then become the iterator variable. So I'm gonna do this in baby steps like I talked about. Now, RubyMine has some refactorings, which is why I like using it. So we can actually go Refactor, Extract Variable. I usually use keyboard shortcuts for this, but I wanted to show you where it is. And it says, do you wanna replace all 34 occurrences of this expression? Yes, I think I really do, thank you. So I'm gonna call it item, boom. We're gonna run our tests, and I'm gonna flip over and run flog again. That was a pretty big improvement. That was awesome. So that got our score down quite a bit. We've now left the code better. If we had to hack in our new feature, at least the code's a little bit better.
But we don't have to hack in our new feature yet, because we still have 45 minutes to go, so we can keep going. All right, so now you want us to turn this into an each loop instead, right? Okay, so does everybody feel pretty comfortable with that refactoring at this point? Okay, should be reasonably safe, right? So the idea is we replace the for loop. Actually, before I do that: is that for loop really equivalent to items.each? How hard do you have to study it to figure that out? You kind of have to stare at it a little, right? What's the size-minus-one thing? Oh yeah, it's zero-based indexing. It's probably equivalent to an each, but you have to stare at it. And so one of the things I would like to argue is that idiomatic code, code written the way a native speaker of the language would write it, is much easier to understand for new readers, because you don't have to stare at it and figure out what it's really doing. You just know: oh, I know that idiom, great, okay? So any place where you can make the code more idiomatic really improves the code. One of the things we did with that variable extraction is we got rid of some duplication, but we also got rid of some noise in the code, and that also makes it easier to read. So, what's that? We're writing Ruby, do we care about speed? I'm kidding, just joking. But yeah, it actually is a speed boost as well. So items.each do, and we'll call that item, and we should be able to get rid of those two lines and change nothing else. You broke something. Right, instance variable, thank you. Tests caught me, that's what they're there for, okay? So that's a green bar, that's another good refactoring. When I was teaching this to the community college class last week, they got completely addicted to running flog all the time, it's hilarious. So it actually made the score worse. That surprises me a little bit. I didn't expect that, but it made it worse, and we're gonna make it better in a minute.
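The two refactorings just walked through can be shown side by side. The loop body here is a stand-in for the kata's real logic; the point is that the two forms do the same thing, and one of them you have to stare at:

```ruby
Item = Struct.new(:name, :sell_in, :quality)

before = [Item.new("Elixir of the Mongoose", 3, 7)]
after  = [Item.new("Elixir of the Mongoose", 3, 7)]

# Before: the C#-style index loop, with the repeated items[i] expression
for i in 0..(before.size - 1)
  before[i].sell_in = before[i].sell_in - 1
end

# After extract-variable plus an idiomatic each: same behavior, less noise
after.each do |item|
  item.sell_in -= 1
end
```

Both collections end up in the same state, which is the safety argument for doing the replacement in one baby step.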
All right, what's something else we can do to this code to get some noise out of there or make it a little more understandable, so that we can add our feature? Extract the quality incrementer, okay? Like extract a method for it, okay? So what do you wanna extract here? Just this one line? Okay, we can do that. I wanna point out this pattern and that pattern and that pattern. That three-line fragment is repeated three times, and actually almost four, but not quite. I've been through this code a lot, so I kind of know where some of the noise is. So that one's almost the same, except there's a whole bunch of extra code in there as well, so it's not quite the same, okay? So you wanna extract. We can extract just the incrementer, or we can extract the whole three-line snippet. Which would you rather do? All three lines? Okay, what do you wanna call it? Increment item quality, okay. Increment or increase? Increase, okay. So RubyMine has a refactoring for this as well. It's called Extract Method, and I'm just gonna use Command-Alt-M for that. So it's gonna be increase_item_quality, and it already figured out that I need to pass an item parameter to it. I'm gonna say okay, and it's telling me, in really, really small print there, that it found two other code fragments that can be replaced with a call to this method, so I'm gonna let it do that. It's actually prompting me for them. I could say do all, but I'll look at each one: there's one, and there's another one. So it found the duplication for us and helped us extract it, which is really cool. I'm really not running an ad campaign for RubyMine, but I like refactoring tools. I'm a Smalltalk guy, and I just got addicted to refactoring tools, so I like it. All right, so we're still green bar. Now how's our flog score? That's significantly better. That was a good extraction. So that's cool. update_quality is still the biggest rock, so we should keep looking at it a little bit.
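The extraction lands roughly here. The fragment and its guard match the shape of the kata's repeated snippet, though the surrounding class is omitted:

```ruby
Item = Struct.new(:name, :sell_in, :quality)

# The repeated three-line fragment, now in one place. The `< 50` guard
# turns out to encode a business rule: quality never exceeds 50.
def increase_item_quality(item)
  if item.quality < 50
    item.quality = item.quality + 1
  end
end

capped   = Item.new("Aged Brie", 2, 50)
ordinary = Item.new("Aged Brie", 2, 10)
increase_item_quality(capped)     # stays at the cap
increase_item_quality(ordinary)   # goes up by one
```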
All right, anything else we can do to start to clean this up a little bit? We could do that. Yeah, I'm not sure. It kind of makes that method a little more complicated. It's not a bad idea, though. The suggestion is that we add a block to the method we just extracted to handle this last case, where it does something after increasing quality. When I think about stuff like that, I ask: is it generally useful? Does it conflate the purpose of the method we just extracted? Because that method is really nice and clean and fairly single-purpose. I'd said to ignore the spec early on, but if we go look at it for a second, back in the README here, one of the things it says is that the quality of an item is never more than 50. So if that spec is actually accurate, it looks like we've captured part of a business rule in that method, which is good. The do-something-after part, maybe not so much. So in this case, I would lean towards not doing the block thing yet, but it's not a bad idea. It's a good idea; we just won't go down that route right now. All right, anything else? I'm scrolling through this a little bit. You can look on your own screens. Max quality as a constant? Yeah, that's not a bad idea, okay. So this is a technique I've heard called active reading. As you read the code and you understand something about it, capture that knowledge in the code itself. By giving things good names, like we're doing here, we're capturing our understanding, so the next person to come along, which is probably us in another week, doesn't have to study the code as hard to figure out what we learned. We're basically capturing newly gained knowledge in the code. All right, so where do we have max quality? We could global search-and-replace 50, but we might find a whole bunch of unrelated 50s, so we have to think.
So this one's comparing to quality, so we'll go ahead and use it there, and the only other 50 is down here, I think, okay. Run that. I doubt that helped our flog score. It did a little bit, actually, cool. Didn't expect that. Like I said, the students I taught last week got really addicted to flog. It's a tool like SimpleCov: shooting for a certain number is probably not a good goal, but watching the trend can be useful. You don't wanna make that the metric you're going after, because it's a metric in service of a bigger goal, so stay focused on that bigger goal. But in our case, it's a good gauge of whether we're actually making the code better, at least by one particular measure. Yeah, Olivia? Oh, okay, cool. Yeah, that makes sense. Yep, question here? Okay, yeah, you noticed that, did ya? So notice we've been reading through the code a few times, and we're starting to pick up on some of these subtleties. I said early on, this is the least we'll ever know about this code. We're starting to see some of this ugliness and some of these weird things. Now, is that always zero? Is item.quality minus item.quality always zero? What if something's negative? Unless quality is negative? No, subtracting a negative number adds it, yep. So it seems like that's a safe refactoring. What if quality had a side effect? Then we could be shooting ourselves in the foot. If quality had a side effect, I would have to hurt somebody, because that's not cool. But we looked at Item, and it's just a simple attr_accessor, so we know it doesn't have a side effect. So we're gonna say it's safe to do that refactoring and go ahead and replace that expression with zero. And we have a green bar, so we're reasonably confident. Yeah, Alex? Okay, so you wanna get rid of the parentheses? Oh, you saw that, did ya? Yeah, I think this is leftover from the port from C#, but a lot of times legacy code has been ported from another language, so you're gonna run into this stuff in the wild sometimes.
Get rid of the semicolon, run the tests. Our tests run super fast, so you can run them all the time. It's a great feedback loop. RubyMine actually has a feature where it can auto-run the tests every time you save a file. It doesn't react fast enough for me. I want my little hit of green bar way sooner than that. All right, so we're gonna go ahead and do this replacement. I think they're all safe to do. I probably didn't start high enough in the file to catch them all. Whoops, didn't wanna go quite that far. Skipped that one, replaced that one. And I need to go back to the top and start that again. I know if Ryan were here, he'd say I could do more and replace all the method parameter ones too, but I'm not gonna go quite that far. I'm doing something I shouldn't be doing here. Okay, I gotta fix a couple here, and then I gotta get the closing ones, not that one. Okay, I'm sure you can all tell me which ones I've messed up here, but so can my tests. Yeah, I messed one up. Anybody catch it? Right there. Okay, cool. All right, so somebody said they didn't think that would help flog. It actually does. I'm guessing there's a rule in there about unnecessary punctuation or something like that. We also got rid of a bunch of yellow, which is cool. So RubyMine was also helping us find the ugly code. What's that? I could. That's a Seattle style versus non-Seattle style thing, I guess. All right, let's take a vote. Who likes Seattle style, with no parens anywhere? One. And who likes this style, with parens in it? The mob wins. We're gonna go mob rule here. It could go either way. I'm actually becoming more and more a fan of having as little noise in my code as possible, so I might actually start an experiment with Seattle style and see if I like it. I find some Seattle-style code just a little harder to parse, but I might get used to that. So that's my personal preference.
I try to run little experiments like that a lot so I can keep making myself better. So I'll probably run that experiment pretty soon, at least on my own code. I won't do it to my team yet. That's one thing that katas are really good for, though: you can practice things that you can't really practice in your production code. They're just simple little coding exercises that don't take very long, but they help you really hone your craft, and that's a good thing. All right, what else can we do here? Sorry, what did I hear over here? All right, so this is another case where idiomatic code is more readable. The suggestion is we use minus-equals and plus-equals here. Let me know if I miss any, and there's this one down here that we extracted already. All right, how's that? That's green. Did I miss any? Okay, so that actually helped our flog total quite a bit, by the way. So, which one here? These three that I just changed? Yeah, there's actually only two, because this one is sell_in, but you're right. Okay, so what Alex is suggesting is that we extract out this item quality decrement like we did with the increment. How much of that code do we wanna extract? These two five-line snippets are actually identical. So do we wanna extract the whole five-line snippet? Yes, do all five, okay. Yeah, well, it's a pretty small step just to extract five duplicated lines into a method. So that's two reasons for doing it. But I wanna take one other thing into consideration before we decide, and I will let you decide which way we go. We have this increase_item_quality method down here, and it compares quality and then increments it, and three of the five lines we're talking about here have something very similar, kind of a parallel structure, but they have this other item-name check interspersed with them. So is there something we could maybe do?
What I'm thinking is, if we extract a decrease_item_quality method, parallelism in structure would be a really good thing there. So maybe there's something we could do first. What do you guys wanna do? Yep. Okay, so what he's suggesting is that we swap these two lines right here. Is that a safe refactoring? Is that gonna break our code? Okay, he's saying there's no else, so it shouldn't break our code. Probably. All we're doing is stuff with item, and we already know that Item just has simple accessors, so there are no side effects to worry about here. What's that? If there were side effects, that might be true. In this case, there aren't. So we're reasonably confident this is safe. Everybody okay with trying that? Swapping the two conditionals first and then doing the extraction? We have tests, right? What could possibly go wrong? So I'll swap those two lines and reformat, and I'll do the same thing up here as well. Let's run our tests and... I did it. What did I do? Okay, undo. Yeah, you're right, I did. Okay, so that's the other thing about baby steps: if you mess something up, because you're not paying close enough attention or because you're live coding in front of 100 people, or however many people are here, you can undo, and you don't lose a lot of work when you undo. That's what's really nice about baby steps. If I were doing this for real, I'd probably be doing micro-commits every time. I just don't want to take the whole workshop up with committing. I'll give you a link to a repo later where I actually did that, every micro-commit committed. It's like 76 to 80 commits on this kata, just getting it to relatively clean code. Okay, so I messed up here, right? What that actually tells me is that I took too big a step, because I made two changes before I ran my tests. Okay, so this time I did it right, and that time I did it right. Okay, so now that we've done that, we can do the extraction we were talking about.
And I'd argue that we should call this decrease_item_quality, since it's really parallel to that other method. So let's do that. And there's one other code fragment, which is good. Having RubyMine check that for you is good, because it means you actually made them duplicates and it found them. If you expect it to find one and it doesn't, then you know you might have done something wrong. So that's really handy. Okay, so now these two methods are very parallel in structure, and that's a good thing, because they're very similar concepts. One's increasing, one's decreasing, and that's good. That's making the code more understandable. So check our tests, and let's see where we're at on flog. Down to 53 now, that's pretty good. All right, what's next? The... which phrases? Yeah, all those strings, yeah. We can make those constants, so you want to extract them out. Yeah, so let me flip over to the README for a second. I don't normally do this refactoring this early, but you brought it up, so let's talk about it. It's talking about things like Aged Brie and Sulfuras, a legendary item. So I'm just trying to get business terminology from here. Backstage passes: does it really have to be only one band's backstage passes, or all backstage passes? So we could actually make those checks a little bit more generic. I usually wait until I've gotten rid of the duplication in those names and then do that, but we can do it now. Do you want to do it now, or do you want to wait? It's up to you. So with the mob programming, I'm supposed to be mostly driving, but I'm doing a lot of navigating too. Okay, if we look at update_quality, what responsibilities does it have right now? All of them, yes. Validating quality, yeah. The two main ones I'm thinking of right now are that it's iterating over all the items, and it's doing something to the items. And doing something to the items actually has multiple responsibilities as well.
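After the conditional swap, the duplicated five-line snippet extracts into a method that mirrors increase_item_quality. A sketch of the resulting pair, with the class context omitted and the exact guard nesting approximated from the kata's shape:

```ruby
Item = Struct.new(:name, :sell_in, :quality)

MAX_QUALITY = 50

def increase_item_quality(item)
  item.quality += 1 if item.quality < MAX_QUALITY
end

# The extracted five-line snippet: same shape as the increment, with the
# legendary-item check that was interspersed in the original.
def decrease_item_quality(item)
  if item.quality > 0
    if item.name != "Sulfuras, Hand of Ragnaros"
      item.quality -= 1
    end
  end
end

elixir   = Item.new("Elixir of the Mongoose", 3, 7)
sulfuras = Item.new("Sulfuras, Hand of Ragnaros", 0, 80)
decrease_item_quality(elixir)     # ordinary items degrade
decrease_item_quality(sulfuras)   # legendary items never do
```

The parallelism is the payoff: increase and decrease now read as the two halves of one concept.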
So what if we were to take the body of that loop out and make it its own method? Normally I actually do this refactoring a lot earlier, and there's a point I want to make, so that's why I'm suggesting it now. I think that's far enough. So I can Extract Method, and let's just call it update_item. Singular. Okay, now look at update_quality. That's pretty clean, and it gives us a really nice place to stand while we deal with the rest of the mess. The main method of this class is now extremely readable. At a glance, you can see what it's doing. Yeah, Olivia, you had a question. You wanna single-line it? That's not the Jim Weirich convention. All right, so single line. All right, did that actually work? Yes, it did. What did that do to our flog score? It's lower, and update_item is now our worst offender, as we expected. We're not gonna take the time to do this, but I wanna talk about it for a second. What we just did is give ourselves a test point, where we've decoupled ourselves from the items collection in the class, because we have a method that we've left public for now. In our tests, we can now create an item, pass it into update_item, and be reasonably confident. So we can make an item for all those corner cases that our 100% coverage didn't actually cover. We can now test those, because we can construct an item with the exact properties we need to test all of the cases in our code. So if we wanted to, at this point we could stop the refactoring, go back to the tests, and write better tests, like: this is what a normal item does, and this is what backstage passes do, and this is what Aged Brie does. We could go write all those tests, and they would look a lot more like TDD-style tests. Maybe not totally, but they'd be a lot better than the golden master tests we have now. We don't have enough time in the workshop to actually do that, but I really wanna point that out, because it's huge.
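The extraction leaves update_quality as a one-liner and exposes a seam for item-level tests. A sketch of the resulting shape, with the big conditional replaced by a stand-in body:

```ruby
Item = Struct.new(:name, :sell_in, :quality)

class GildedRose
  attr_reader :items

  def initialize(items = [])
    @items = items
  end

  # The main method is now readable at a glance: iterate, delegate.
  def update_quality
    items.each { |item| update_item(item) }
  end

  # Seam for tests: construct any item you like and update it directly,
  # with no dependency on the hard-coded items array. (Body is a stand-in
  # for the kata's big conditional.)
  def update_item(item)
    item.sell_in -= 1
    item.quality -= 1 if item.quality > 0
  end
end

rose = GildedRose.new
item = Item.new("Elixir of the Mongoose", 5, 7)
rose.update_item(item)   # corner cases become directly testable
```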
This basically decoupled us from the dependency on the items collection, which is one of the steps in the legacy code algorithm, removing dependencies, and that helps us write better, more focused, more targeted tests, and that would be a huge improvement. In real life, that might be the next thing I'd do: go write some better tests, because that would also help me understand the business rules better. I could write tests based on the spec and see if the code passes them. That would give me some information. Or I might keep refactoring, which is what we'll do today. Yeah. Yeah. So you know that because you changed the API. Right. So the question is: when you write tests for updating items, how do you do it? You can test the update_item method very easily right now, but once you make it a private method, that breaks. Right, so normally you don't wanna change your public API when you're doing a refactoring like this, and really all of these methods we've been extracting should be private, and that would argue against using update_item in our tests, because you don't test private methods. So what would I do with that? That's basically your question, right? In this case, I'd probably leave it public temporarily, and I would look for a way to move most of its behavior onto other objects that I test over there. We're not quite to the point where we can do that yet. Ideally we'd extract a bunch of behavior over to Item itself, but we can't touch that class right now. If we get far enough, we'll learn some tricks for that. So I would probably use this in its current state as a stepping stone to where I ultimately wanna be.
I have been known, in ugly code, to just put a comment saying private above methods, just to communicate to the reader: hey, I intend these to be private, but they're public for testing. Or I might say really private, but exposed for testing, or something like that. Again, this is scaffolding to get to a better place. This is not greenfield code written the best way you know how to write it today. This is trying to get it into better shape. What's that? You can do that too, put an underscore in front of it. Absolutely, yeah. So I agree with you. I'm actually pretty big on separating public and private API just for communication purposes. I've been doing this for more than 20 years, and I read Sandi Metz's book a couple of years ago, and she completely rewired my brain. After 20-plus years of experience doing this stuff, you read one book and it rewires your brain? That blows me away. That's how good that book is. So I'm very much in that school now: public methods are what you test, and private ones you don't. But in this case, there's so much stuff in that private method that I would probably leave it public for now and test it. And it's actually possible that this is a good addition to the API; I'd think about that a little bit too. Maybe being able to update a single item is actually a good thing for this API. I mean, this GildedRose thing almost doesn't deserve to be a class on its own; it will probably go away eventually. But yeah, that's an excellent point. Yeah, you can do that too. I consider that cheating, but sometimes you have to cheat. All right, so what else can we do in here? So explain that. This condition at the top could be... oh, I see what you mean. What's that? Yeah, I don't think so, but yeah. That's not a bad idea. I'm kind of trying to leave those conditionals for last.
I want to get rid of all the other noise first and then deal with those conditionals. What's that again? Yeah, actually, that's a good point. I was kind of ignoring this because we're focused on the place we need to change, but this is actually an interesting point. What does that line of code actually do? Exactly, it creates a class instance variable. This is not actually pre-declaring the @items instance variable that we've been working with. This is dead code, and it confuses people; class instance variables are a fairly tricky part of Ruby to start understanding, so this is especially confusing for new people. Let's just get rid of it entirely, that's a good point. It doesn't help us with our update_item method, but it's good to get that noise out of there, so it doesn't distract or confuse people. Something else we could do really easily, now that we've got our test coverage, is put an attr_reader in. We might not want to keep it forever, but for now, add the attr_reader, and then we can actually get rid of... I keep forgetting my keyboard shortcuts. We can use the reader there, oops, there we go. And we can go back to our test and use it there as well, if we want. We don't need the instance_variable_get anymore. So that's pretty simple, and it gets rid of noise again. Getting the visual noise out of your way helps you really focus on the structure of the code. All right, so we've got a little over 15 minutes left to go here, and I actually want to get to implementing our feature, just so we can say we did. Normally I would probably keep refactoring a little more, because it's kind of hard to tell where to add that feature. Just make sure we're green, and I'm gonna go ahead and flog again. So we've gone from 155.1 on our worst method down to 45.1. We've made the code better.
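The dead line and its fix fit in a few lines; the names here are illustrative:

```ruby
class GildedRose
  @items = []   # dead code: a CLASS instance variable on GildedRose itself,
                # completely unrelated to the @items set in initialize

  attr_reader :items   # replaces instance_variable_get(:@items) in the specs

  def initialize
    @items = ["Elixir of the Mongoose"]
  end
end

rose = GildedRose.new
# The class-level @items never affects the instance:
class_level = GildedRose.instance_variable_get(:@items)   # => []
rose.items                                                # => the real list
```

In the specs, `rose.items` is both shorter and more honest than reaching in with `instance_variable_get`.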
Like I said, we could keep going, but I want to actually try to get the feature implemented, and at this point I still don't see how to add it to this big messy conditional. There's still more work that needs to be done there, and if we have time after we do the feature, we'll come back and do some more on this. So for now, let's just do the ugly thing and handle the case of conjured items up at the top. And I want to establish a bit of a new pattern here. For conjured items, we're gonna use a regex match instead and just say, if the name has "conjured" in it at all, then it's a conjured item. And we're gonna return early when we're done and let everything else fall through to the rest of the code. Now, this is making the code worse again, but it's still probably better than where we started, so we've followed the Boy Scout rule. At this point we've basically said: you know what? We had a time box for the refactoring we could do. We got as far as we could; the code is better. It's not great, but it's better. Now we really need to get this feature in, so let's get it in, okay? This is gonna fail, because there is a conjured item in our list of items, and we've got a bunch of approvals failures. We haven't handled conjured items properly yet. So actually, we can write some more focused tests for that. I'm gonna comment out the approval specs temporarily while we write some more targeted tests for conjured items. So let's do a describe "conjured items". Okay, let's flip over to what our spec says, which is that conjured items degrade in quality twice as fast as normal items. Normal items degrade by one every day, so conjured items have to go twice as fast as that. So we're gonna have to do a two-times decrease_item_quality on the item. I could just go item.quality minus-equals two, but there's that protection against going below zero that we don't wanna miss. So we're gonna do this slightly ugly thing.
This is gonna suggest some more refactorings, by the way. But for now, let's just do that. Ah, putting the s on times would be really handy. Actually, I wrote this before I wrote my test; that was bad. So: it decreases quality by two each day. Again, these might not be great specs, but we're going to do this. And in here, we're gonna make a single item and make use of that update item method that we have. So Item.new, conjured something, don't care what it is; as long as it has conjured in the name, it'll be fine. Sell-in can be five, and quality can be 10, doesn't really matter. So now we can actually do a TDD-style test: write the test first and say what we expect to happen, which is that quality should go down by two. We could write this differently, with a change-by matcher; I'm not gonna do that right now, but it would probably be better. So we're gonna go, or subject, update item with the item. And we think quality is gonna go down by two, so it should equal eight. Fixed my whitespace there, all right. And that actually failed: expected eight, got 10. So it didn't go down at all; it started at 10 and stayed at 10. That's because we're not actually doing anything in our code yet. So let's go ahead and implement it now, which I had already kind of done. Okay, that's sort of green, apart from all the commented-out specs; for now I'm just trying to focus down on this one spec. The other thing that should happen is it should decrement the sell-in day by one, right? And I should probably extract a before block here, that goes there, and then here. So: expect item.sell_in to equal four. And that broke, because I expected four and got five. So we are not actually decreasing sell-in; we need to do that. We don't have a method for sell-in yet, so we'll just do the minus-equals-one thing. Okay, that passes.
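The reason for the "ugly" two-calls version rather than item.quality minus equals two is the zero floor. Here's a self-contained sketch; the helper name decrease_item_quality follows the talk, but the Item struct is a stand-in for the real class.

```ruby
# Stand-in item so the sketch runs on its own.
Item = Struct.new(:name, :sell_in, :quality)

# The guard we don't want to lose: quality never drops below zero.
def decrease_item_quality(item)
  item.quality -= 1 if item.quality > 0
end

item = Item.new("Conjured Mana Cake", 5, 1)
# A bare `item.quality -= 2` would leave quality at -1; calling the
# guarded helper twice bottoms out at zero instead.
2.times { decrease_item_quality(item) }
```

Calling the guarded helper twice keeps the below-zero rule in exactly one place, which is also what suggests the later refactorings.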
So there's one other business rule that we really should be covering here, which is that once the sell-by date has passed, quality degrades twice as fast. So we need a test for that. I'm going to add a context: past the sell-in date. I'm gonna change what item is. Actually, let me do it this way instead, because, I forget what order before blocks run in. Outer first, right? Yeah, never mind then; that would be bad. Let's make a new item here. Give it a name, let sell-in be minus one, and quality can be 10 again. This time, the outer before block is still gonna run and update the item. It decreases twice as fast after the sell-in date. So: expect item quality to equal, this time it should go down to six, because it's gonna go twice as fast again. And that's red. We expected six but got eight, because we're only doing it once. So we get to do this again. Okay, and that's green. So now let's go turn all our other specs back on and see what broke, because we expect them to have broken; we've changed our golden-master output again now that conjured items are handled differently. Looks like we failed four specs, so let's go run our verify. All right, so after one update, quality is now four instead of five. That sounds right; it went down twice as fast. So we're gonna approve that one. Then after three updates, instead of being three, it went down to zero. It starts at six, if we go look at our items collection, so after three updates it's gonna drop to zero because it's going twice as fast. So that's approved. After two updates, it's gonna go to two instead of four. That looks right. And after four updates, it's gonna bottom out at zero. It never goes below zero, and that's handled by our decrease item quality method. So that one looks right as well; we're gonna approve it. But again, we had to be the guru checking the output. That's where the anti-pattern part comes in.
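The past-the-date rule stacks on top of the conjured rule, so a conjured item past its sell-in drops by four per day. A sketch under the same assumptions as before (stand-in Item struct, hypothetical update_conjured helper; whether the sell-in check happens before or after the decrement is also an assumption):

```ruby
# Stand-in item so the sketch runs on its own.
Item = Struct.new(:name, :sell_in, :quality)

def decrease_item_quality(item)
  item.quality -= 1 if item.quality > 0   # never below zero
end

def update_conjured(item)
  2.times { decrease_item_quality(item) }                       # conjured: twice normal speed
  2.times { decrease_item_quality(item) } if item.sell_in <= 0  # past the date: doubled again
  item.sell_in -= 1
end

item = Item.new("Conjured Mana Cake", -1, 10)
update_conjured(item)
```

Starting at quality 10 with sell-in minus one, a single update lands on six, which is the expectation the new spec encodes.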
But all those changes made sense, so we're gonna run with them. Okay, and we're still green, and now we are actually handling conjured items. Not what I wanted to run; I wanted to run Flog again. So we went back up on our Flog score, but we are still quite a bit lower than where we started, and our new feature is now implemented. We followed the Boy Scout rule: we left the code cleaner than we found it. Not perfect yet by any stretch, but it's better. So let me just do a little bit of wrap-up. I'll take some questions, and then if you wanna stay for a little while and keep working on this, I'd be happy to do that, or we can go to other talks. I've got one I wouldn't mind seeing, but I don't mind hanging around either if you wanna stay and keep playing with the refactoring. So let me just run through my last few slides, and then we can decide what you wanna do. All right, so we actually did a bunch of stuff. And I have a public service announcement: please write tests for your code. Please don't make me work in your code base if there are no tests. All right, if you want more, here's the original source of the kata. These slides are gonna be up on Speaker Deck shortly. I did a talk on this kata at GoGaRuCo where I kind of walked through the way I refactored this code. There is a repo on my GitHub account where I committed after every little baby-step refactoring that I did, so you can actually see the progression from start to finish and how far I got. If you wanna try the kata on your own and see where you go with it before you look at mine, that'd be great, because I don't want your approach to be colored by mine. I talked about Michael Feathers' book, Working Effectively with Legacy Code. There's also Martin Fowler's Refactoring book, which is a great catalog, and he's also got a website with a lot of these refactorings on it. So if you wanna learn more about refactoring, it's a great place to go.
There are the gems that we worked with today. On approvals, again, I wanna thank Katrina for turning those changes around really quickly. I don't like to bug open source maintainers to give priority to me; they're volunteering. So I didn't ask her to, and yet later that same day I submitted the pull request, she went in, merged it all, cut a release for me and everything. So thank you again to Katrina for that. SimpleCov is pretty cool. And again, on the metrics tools like Flog: don't treat those as your end goal. Treat them as tools to help you get to your end goal. All right, and again, you have the code repo that you saw there. I'll push this code up in a second so you can get the latest, and I'll have the slides up on Speaker Deck shortly. And if you wanna give me some feedback on SpeakerRate, that'd be great. I always wanna try to get better at presenting, so any constructive feedback would be welcome. Thank you very much. So, any questions?