So it's great to look out at you all and see how many people are interested in code deletion. How many people enjoy deleting code? Yay, we all enjoy deleting code. We all have this, right? We all have the ugly garage full of crazy things. Everybody wants to delete code in general, but we don't do it as often as we should. Is that true? Yeah, in most organizations it ends up being the thing you want to do but never really get around to doing.

The thing I find fascinating about that is that we have not developed a systematic process around deleting code in most organizations, and beyond that there's a bigger issue, which is that we don't know how to justify it. This is very much where we were with refactoring years ago. We had this notion of refactoring, there have been books written about it, Martin Fowler is here at this conference today, and there's been a lot of training around refactoring. But it has always been hard to take that practice and justify it within the context of an organization. How do you justify deleting code? It's an odd thing to justify. The model many people use is that the code is just sitting there and it's not going to hurt anybody, but as developers we often know differently. We know that code that isn't being used gives us a very poor idea of what the reality of our system is today.

So why should we delete code? I think the most important thing to recognize about deleting code is that it helps us clarify our understanding of the system, and that's really, really important. I don't know if you've had this experience. I remember early in my career spending a lot of time, I think it was about two days, trying to optimize a particular area of code, and discovering halfway through the process that the code wasn't even being called. It took me a while to figure this out. It was really just luck: I was heads down working on something and suddenly got this stray thought: gee, I wonder if this code is even being called. And because I had that thought, I did a bit of investigation and discovered that it wasn't. I went to someone on the team and said, hey, this code isn't even being used. Why is it here? And he said, well, we thought it would be useful someday. And I said, well, it's kind of getting in the way, because if this feature weren't here, the optimization you're asking me to do would be much easier. Can we get rid of it and add it back later if we discover we actually need it? He said, yeah, we can do that. So I deleted that feature and was able to do the optimization work, and it was a much, much easier task.

So we have this thing about code: when we see it, we assume it's being used. We assume it's there for a purpose. And as a result, our vision of the system and what the system actually is diverge over time.
So without the assurance that the code within your system is really being used, you have a false view of the system, and that impedes our understanding and impedes the work we need to do. So we have to figure out when we can delete code.

First off, I want to discuss a couple of different varieties of useless code. Going to Wikipedia, there's the notion of unreachable code, which is something that VMs and interpreters and compilers will often eradicate for us. The somewhat terrible thing about this is that unreachable code is often removed from our binaries without our knowledge at all, so we have no way of tracing back into the source and seeing that this particular code is not being used. There are exceptions to this, of course. In Java, for instance, there are cases where code won't even compile because of the fact that it's unreachable. But in many cases compilers just optimize unreachable code away and strip it from the binaries, so the removal never improves our understanding of the system.

There's also the notion of dead code, and this is a bit different. We often use "dead code" as a general term for all of this stuff, but the real definition of dead code is code that actually executes but doesn't produce any meaningful result, no result that changes the behavior of the program. This is often harder to find. There is tooling that tries to find these things, but it's harder because you have code that really does execute, yet doesn't vary the results at all, so you're not going to see it through a lot of static analysis.

And then there's low value code: code that actually executes, but doesn't do much that's important to the customer or to the business. This is the one we'd really like to home in on. If we can find low value code within our code base, ask ourselves why it really needs to be here, and perhaps discover that maybe it doesn't, then we can systematically root it out and get rid of it.

So how do we go about this? Well, it's kind of funny, because we're still at a very ad hoc level with this sort of thing in organizations. I was looking at a blog a while back where a guy working on a game described spending about a week just removing all the code he could from it. One thing that helped was that he had intimate knowledge of the runtime of the system and how it worked. He had a game library that was used by other games, but primarily by his particular game, and there was a lot of platform-related code that didn't need to be there. So he was able to say, OK, this is for Windows, we're not using Windows anymore, just go ahead and remove this, remove this, remove this. He was very proud at the end of the process. You can see from the little figure here: 10,000 lines of code removed in about a week of effort, just going through and getting rid of things that didn't need to be there.
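To make that distinction concrete, here's a small illustrative sketch in Java (my example, not from the talk). The commented-out method won't even compile, since Java rejects the unreachable statement after the return; the second method compiles and runs, but the accumulation in it is dead because its result never affects anything observable.

```java
class UselessCodeExamples {
    // Unreachable code: javac rejects this with "unreachable statement",
    // so it can't even sit silently in the code base.
    // int f() {
    //     return 1;
    //     int x = 2; // compile error: unreachable
    // }

    // Dead code: this executes on every call, but `total` never
    // influences the result or any observable behavior of the program.
    static int sum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v; // runs every time...
        }
        return values.length; // ...but the result ignores it
    }
}
```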
But the more realistic situation in a lot of production environments is that you have code and you're not really sure whether it's being used or not. And you don't know how to approach it, because sometimes the code really is necessary and you just can't tell by looking at it. So how do we know?

Well, it's funny. We use the concept of code coverage in the industry to describe this idea of understanding whether code is being used or not, but coverage has been conflated with testing. You'll have code coverage tools, but they show you the coverage of code by particular tests. Do we ever do code coverage in production? Why don't we? Really because it's expensive, right? In general, code is instrumented to find out whether particular branches are executing, and nobody wants to incur that cost across all their production runs just to discover the actual production usage of the code they're dealing with.

There are tools that do this kind of thing, but they're not really tailored to the work I'm describing. There's gprof, the performance tool. You can use it to instrument code, and it will tell you the amount of time spent in each function, things along those lines, but your code runs very slowly as a result. And since it's built for performance analysis, it's not a tool optimized for the detection of dead code, though you can still start to approach things that way.

There are other things as well. Some people have run experiments: there's a group that was doing this in Python. They created a very small snippet of code that takes snapshots of the stack periodically to find out which areas of code were being executed and which weren't. But since it's a sampling process, you never know for certain that an area of code hasn't been touched; it comes down to the frequency at which you snapshot the stack. So people have done that kind of thing.

The other way of detecting dead code is through mutation testing. I'm not sure how many of you have run into this or heard about it at all, but there's a tool that was popular at one point called Jester. The idea behind it was to take your code and do crazy things one at a time, like inverting conditions and changing constants, then run your tests and find out whether a particular path of code was actually consequential to the results produced by a particular function. It seems like a great idea, but again, this is using test harnesses as a proxy for your production environment. Still, you are able to discover code that does execute but is inconsequential with respect to the results the code is meant to produce.

So another approach, one I've used with teams a bit, and one I'm actually developing a little bit of tooling around now, is something I'm calling feature probes.
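That Python experiment was sampling-based. As a rough sketch of the same idea in Java (entirely my own illustration, not the group's code), a daemon thread can periodically snapshot every thread's stack and remember which methods it has ever seen executing; methods that never show up over a long window become candidates for investigation, though sampling can never prove a path is dead.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sampling sketch: periodically snapshot every thread's stack and
// remember which methods we've ever seen executing. Methods that never
// appear over a long window are *candidates* for dead code; sampling
// can miss rare paths, so this is evidence, not proof.
class StackSampler implements Runnable {
    private final Set<String> seen = ConcurrentHashMap.newKeySet();
    private final long intervalMillis;

    StackSampler(long intervalMillis) { this.intervalMillis = intervalMillis; }

    public void run() {
        try {
            while (true) {
                for (Map.Entry<Thread, StackTraceElement[]> e
                        : Thread.getAllStackTraces().entrySet()) {
                    for (StackTraceElement frame : e.getValue()) {
                        seen.add(frame.getClassName() + "#" + frame.getMethodName());
                    }
                }
                Thread.sleep(intervalMillis);
            }
        } catch (InterruptedException ignored) { }
    }

    Set<String> methodsSeen() { return seen; }

    static StackSampler start(long intervalMillis) {
        StackSampler sampler = new StackSampler(intervalMillis);
        Thread t = new Thread(sampler, "stack-sampler");
        t.setDaemon(true); // don't keep the JVM alive just for sampling
        t.start();
        return sampler;
    }
}
```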
And the idea behind this is to take hypotheses about particular areas of code which may be dead and try to gather some information about them. This is kind of rudimentary code I've got right now. I've got a tool called Siv, and the idea is that in Ruby you make calls to the Siv probe function, and it records status about whether the call has actually been made or not. The way it ties together is that you place these probe calls within your code, then a process rips through your repository, finds all the calls, and records the date each probe was inserted into your code base. Then, whenever a probe is hit in production, it records the fact that the call has been made, and after a period of time it reports whether that call has ever been made in that period.

So, for example, if you look at the output, you'll see things like: here are a couple of different probes we placed in the code. The first one: it's been 8.6 days since it was inserted into the code base and it has never been called. The next one: 6.11 days, and it has not been called since it was inserted. It's been dead for 6.11 days, right? Does any of this tell you that you have dead code? Not really. What it does is give you information about the areas of code that you suspect are dead. If you suspect they're dead, then you can start doing some additional investigation, asking yourself: why is this code really here? Is there anything we can do to simplify this particular thing? Do we really need these particular features? Is there a way of scaling some of these things back? We can also start asking the additional question: is this code really producing much value for the organization? If it isn't, then it's probably worth pruning those branches and finding alternative features that can produce more revenue for the organization.

So when do you want to probe for these things? One process you can use is cloning and pruning. I've used this a couple of times: you say, I want to discover whether I can remove some code in a particular area. You copy the service, isolate the endpoints, put probes in while it's running, and discover whether code is being used or not, systematically pruning things. But you keep both copies of the service running at the same time, so whenever there's a difference in results you can always switch back to the copy with the full code.

So the typical strategy you want to use with this is: delete unreachable and dead code when you detect it, and disable low value code at the entry points, okay?
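The Siv tool itself is Ruby; as a sketch of the mechanism, here's a hypothetical Java analogue (the names and in-memory storage are mine; a real tool would persist the data and scan the repository for insertion dates):

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Feature-probe sketch: a registry of named probes. Each probe knows
// when it was inserted and whether it has ever fired in production.
class FeatureProbes {
    private record ProbeState(Instant inserted, Instant lastFired) {}
    private static final Map<String, ProbeState> probes = new ConcurrentHashMap<>();

    // Called once per probe (the repository-scanning process in the real tool).
    static void register(String name, Instant insertedAt) {
        probes.putIfAbsent(name, new ProbeState(insertedAt, null));
    }

    // Dropped into a suspect code path; cheap enough for production.
    static void probe(String name) {
        probes.computeIfPresent(name,
                (k, s) -> new ProbeState(s.inserted(), Instant.now()));
    }

    // Periodic report: how long has each probe been silent?
    static void report() {
        Instant now = Instant.now();
        probes.forEach((name, s) -> {
            if (s.lastFired() == null) {
                long days = Duration.between(s.inserted(), now).toDays();
                System.out.printf("%s: never called in %d days%n", name, days);
            } else {
                System.out.printf("%s: last called at %s%n", name, s.lastFired());
            }
        });
    }
}
```

In a suspect branch you'd drop in a call like `FeatureProbes.probe("legacy-export-path")`, and after a few weeks the report tells you whether your hypothesis about deadness has held up so far.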
What I mean by that is: if you discover low value paths, you want to walk up the call graph to the point where the code is about to be exercised and install some kind of alternative action there, so you can fully deaden that path of code before removing it. You can use profiling or these feature probes to verify over time that you really are not using the path all that often. But you also want to put a human in the loop. You want to set things up so that if the path is taken anyway, there's a way of saying, okay, this is happening: we abort the operation for the user and send an email or some kind of support message to people who can handle the fact that somebody tried to execute the path we're actively trying to prune from the code base.

Now, if you're able to start doing this, if you're able to start removing dead code, you can start moving toward the ability to do systematic rewrites within your code base as well. We spend a lot of time in the industry talking about refactoring: how to take code as it exists and refactor it in order to make things a bit better. But quite often code gets so bad that we have to ask ourselves whether the rehabilitation process is really worth it. So systematic rewriting is another thing we'd like to be able to do.

The thing that's really tough about this is that when you try to take existing code and rewrite it, what's the first thing that gets in your way? The main thing is that you don't understand what it does, right? If you understood what it did, it would be easy to rewrite. And the problem is that it's usually not the code we truly understand that we want to rewrite; it's the code on the cusp of illegibility, the code we don't understand all that well, that we'd like to perform rewrites on. So we have to go through the process of rewriting systematically.

I use a process I call characterization testing to do this. Characterization testing is the process of writing tests that describe the functionality you have within the code, in order to get a decent idea of what's really happening with it. You can then use the tests you've written to understand the code you currently have as a basis for the rewrite you want to perform. The thing about doing this, though, is that it's a style of testing a bit different from what we're used to in, say, test-driven development: here we're writing tests after the fact, once the work is already done.

So the way you approach this is by taking a big piece of ugly code. What does this do? Quick, quick. Format some text? Yeah, format some text. Fair enough, okay. It's good to have these nice intention-revealing names so you really know what's going on.
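As a sketch of that alternative-action-plus-human-in-the-loop idea (the endpoint, notifier, and types here are all my invention), the retired path gets disabled at its entry point, the user's operation is aborted cleanly, and support hears about anyone who still needed it:

```java
// Sketch of disabling a low-value path at its entry point. The feature
// stays in the binary for now, but callers get a clean abort and a
// human gets notified that the path was still wanted.
class LegacyExportEndpoint {
    private final SupportNotifier notifier; // hypothetical alert channel

    LegacyExportEndpoint(SupportNotifier notifier) { this.notifier = notifier; }

    Response handle(Request request) {
        // Alternative action installed above the suspect code: abort and alert.
        notifier.alert("legacy-export invoked by " + request.userId()
                + " -- path scheduled for deletion");
        return Response.unavailable(
                "This feature has been retired. Please contact support.");
    }
}

interface SupportNotifier { void alert(String message); }
record Request(String userId) {}
record Response(int status, String body) {
    static Response unavailable(String body) { return new Response(503, body); }
}
```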
But I'm sure everybody here would agree that this is really atrocious code. It's littered with conditionals. We have all these crazy control paths, and it's quite a bit of mental parsing to figure out exactly what's going on with it. Fortunately, we can start to systematically write tests in order to understand what's going on and develop enough understanding to actually choose to rewrite it.

The process for this is kind of interesting. We could start out this way: I have a hypothesis. Assume that if I take some plain text and pass it as an argument to the format text function, I'm going to get the same text back from it. We can very easily write a test, give it that name, run it, and discover whether it succeeds or fails. Fair enough?

But I think that's actually skipping ahead too many steps. What I really like to do is this: I like to start with a test named x. This may seem kind of odd, but it really is quite powerful. You start with a test and you don't give it a real name, because you don't yet know what the test is about. In a lot of testing we start with our ideas about what the code is supposed to do; this is about taking some code and exercising it well enough to be able to give the test a good name, a name that tells a story about what's actually happening.

Starting out this way, we can try something very elemental. Elemental in this case means something simple, like passing an empty string and seeing what happens. I can make the assumption that maybe I get an empty string back. We run the test and get a red or a green depending on what happens. If it's green, we get to put in a name that describes exactly what's going on. So it's a bit more of an iterative process.

The thing that's really nice about this is that once you've broken dependencies well enough around a particular piece of code to do this kind of characterization testing, the process of characterization testing itself is pretty easy. You're not spending a lot of time trying to figure out in detail what the code does through a purely mental process. You hypothesize, or you just get very curious. You start asking yourself questions like: what happens if I do this? What happens if I do that? And then you discover something and get to say, wow, that's what happens when I do this. Once you've made these discoveries, you document them, and you understand a lot more about what your code actually does.

So, other things. We have that simple test there. What about this one right here, the one at the very bottom? It's kind of funny: we're going to see what happens when we pass a null into it. Good test, bad test? It depends on where you come from and what your default rules of engagement are in the code base.
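Here's a minimal JUnit 4 sketch of that progression, assuming a hypothetical `TextFormatter.formatText` like the one on the slide: start with a test named `x`, make a guess, run it, and only rename the test once you've learned what it's really about.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TextFormatterCharacterizationTest {

    // Step 1: no real name yet; we don't know what this test is about.
    @Test
    public void x() {
        assertEquals("", TextFormatter.formatText(""));
    }

    // Step 2: the run told us something, so now the test can tell a story.
    @Test
    public void returnsEmptyStringWhenGivenEmptyString() {
        assertEquals("", TextFormatter.formatText(""));
    }

    // Step 3: keep probing with elemental inputs and rename as you learn.
    @Test
    public void returnsPlainTextUnchangedWhenThereAreNoDelimiters() {
        assertEquals("plain text", TextFormatter.formatText("plain text"));
    }
}
```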
In a good code base, I really think that chances are if you pass a null into a particular function, all bets should be off, right? Passing null into a function should be something people just typically don't do. But here we're passing a null in and making the guess that we'll get an empty string back. We run it and find out what happens. And of course we discover that it throws an NPE. And we have a way of approaching that within testing: this is Java, so we place an expected attribute on the test annotation to say that when this is executed, we get a NullPointerException. Now I can clean this up by removing the assert entirely: I can just call formatText, passing in a null, and document that it throws an NPE.

Is it worth keeping this test? That's up to you; it depends on what you're trying to document and what you're trying to understand. As I mentioned, in some code bases this is just expected behavior, because we assume nobody is going to pass null into our methods willy-nilly. So yes, you get to make a choice about that.

Okay, here's another one. We're checking that text we pass in without a delimiter comes through okay. If you remember the code I had up briefly at the beginning, it makes a lot of conditional choices based on whether it sees angle brackets in the text. So we verify that, without any delimiters at all, we get the same thing back.

What about this one right here: it removes the delimiter when formatting. Is that a bug or not in our formatting code? And I'll wait for somebody to stand up and scream: we don't know. Because that's the truth, right? We don't know whether this is expected behavior or not. And as developers we have to approach these edge cases all the time. We have the general intention of what the code does, but as you characterization test, you start discovering the little nuances of behavior. And you get to ask yourself: is this something we want, or something we don't want?

Quite often, in the beginning of doing this, you get into this strange space where you're trying to understand what level of quality you want for this particular piece of code. Do I want to safeguard against all these different things? Are there things I'm seeing as errors that are simply not possible, because somebody upstream of me is never actually going to give me data in that format? These are often agonizing decisions. I find in practice that the best thing you can do is simply document the behavior, come back to it later, and make those decisions once you've gathered more information about the component you're trying to replace by deleting and rewriting. So, as I said: is this a bug? Again, this is one of the things we get to ask ourselves.
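In JUnit 4 that looks something like this (again against the hypothetical `TextFormatter`): the expected attribute documents the NPE, and no assert is needed.

```java
import org.junit.Test;

public class TextFormatterNullTest {

    // Characterizes current behavior: nulls blow up. Whether that's a bug
    // or an acceptable rule of engagement is a decision we defer.
    @Test(expected = NullPointerException.class)
    public void throwsNullPointerExceptionWhenGivenNull() {
        TextFormatter.formatText(null); // the annotation documents the outcome
    }
}
```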
So as far as this process of characterizing code so that you can systematically rewrite it goes, I have a couple of different heuristics I use.

First, you start with a simple name like x, which is just a placeholder until you're able to arrive at a name that's going to be useful for you.

The next thing is to use really expressive names. Quite often in this process I create these monstrosities: names seven or eight words long, all pasted together. That's just because I'm trying to render the understanding I've gathered from the test as natural language and place it in the test. And I find that as I do this, I quite often go back and rename the tests from the very beginning. I'll write three tests and say, ah, I have a better story now, a better understanding of what this piece of code is doing, so I go back and rename a couple of the test cases I wrote previously. You'll find that by the time you've added 10 or 15 tests, you rename them less often, because you're consolidating your knowledge of what the code does. But I find this process of revisiting names as you characterization test is pretty valuable.

The next thing is that you have to make the call on bugs. You have to ask yourself: is this really a bug or not? What's really important about this is that you don't want to get into the practice of trying to fix things you think are bugs at the same time you're doing your characterization. That's something you want to defer until a bit later, because with a bit more understanding you'll be able to tell whether a particular thing really is a bug. Sometimes in the beginning you just don't know. Granted, this can be a bit of a depressing process, because you're writing tests for things and thinking, oh my God, I can't believe this. I can't believe this particular thing is possible.

As I mentioned earlier, going upstream is often very important as well. There are things you look at and say, I can't believe I'm going to get this particular result, and then through another bit of investigation you discover that, based on some upstream component, the inputs you're concerned about simply can't happen. Essentially, there are implicit preconditions that make certain inputs impossible, which means certain bugs in the code never actually surface. And again, you have to make a decision about whether you want to really bulletproof a particular piece of code, or live with the fact that you have certain implicit guards in place.

And the final thing is being curious. This is the most enjoyable part of the process. You get to ask yourself questions like: if I pass this, what happens? If I do this, what happens? You really let your curiosity drive you as you go through the process of writing these tests.
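Two more sketches against the hypothetical `TextFormatter`, with behaviors invented purely for illustration: the first shows the kind of long, story-telling name I mean, and the second pins down suspect behavior while recording the doubt in the name instead of fixing it mid-characterization.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class TextFormatterHeuristicsTest {

    // Expressive, story-telling name: several words pasted together,
    // rendering what we learned as natural language.
    @Test
    public void dropsTrailingDelimiterWhenInputEndsWithAngleBracket() {
        assertEquals("text", TextFormatter.formatText("text>"));
    }

    // Make the call on bugs later: lock in the current behavior and
    // record the doubt in the name rather than "fixing" it now.
    @Test
    public void collapsesEmptyDelimiterPair_possibleBug_decideAfterRewrite() {
        assertEquals("ab", TextFormatter.formatText("a<>b"));
    }
}
```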
Now, having done this, having characterized some behavior, you can be in a position to actually start rewriting particular things. These are the criteria I quite often use when trying to rewrite a piece of code that's really ugly like this. It's funny: when you look at a piece of code like the one I just showed you, it looks like a very small piece of code. But because of the conditional complexity within it, rewriting it is actually a bit hazardous without a full set of tests, or at least enough tests to prove to yourself that you really understand it well enough to do the rewrite. So you need to make sure you know all the inputs and the outputs of the code, and that there isn't anything secretly producing side effects, like calls to singletons or external calls and things along those lines.

Another criterion I use for rewriting things is: can I reduce conditionality? Now, this sounds kind of weird, but I'm finding over and over again that my code becomes clearer as I remove conditionality and error checking when I can. You also want the code to be scoped so that you can do this kind of testing. And another thing: it's nice to be able to run things redundantly for a while, to run two things in parallel, the new code you're replacing it with and the old code. You run both of them, and if the results are different, you've got a problem and you revert to the previous one; if they're the same, you can just use either result. I find redundant running very useful for this.

But reducing conditionality, as I mentioned, is a rather big deal. This is the call path for Apache. And this is the call path for Microsoft IIS. Which one do you want to work in? Yeah, yeah. So the thing that's kind of funny is that complexity is something we should always be aiming to reduce in our programming. And I've really gone full bore on this quite a bit recently, at least with some of my own projects. I think that when we have if conditions in our code, we should start to look at these, if we haven't already, as being almost like unstructured programming. Back in the day, when we used go-tos and labels all over our code, things were considered unstructured, and you had Dijkstra and a bunch of other people coming along saying we should be using loops and conditionals to make our code more structured than it was. The problem with if blocks and loops and the like is that it's so easy to drop things inside those braces, to mix responsibilities inside a particular block in order to get something done at roughly the same time as something else. And when you do that, you can end up in this odd situation where your main responsibility is buried rather deeply. Not only that, you also get all these crazy scoping issues: you declare a variable outside the block, use it inside the block. Is it used after the block or not? Essentially, conditions and loops are relatively unstructured compared to a lot of the programming we're moving towards.
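A minimal sketch of that redundant-run idea (my own illustration; it assumes the code under rewrite is side-effect free, since every input is run twice):

```java
import java.util.Objects;
import java.util.function.Function;

// Run old and new implementations redundantly during a rewrite: call
// both, compare results, and fall back to the old result (logging the
// mismatch) whenever they disagree.
class RedundantRunner<I, O> {
    private final Function<I, O> oldImpl;
    private final Function<I, O> newImpl;

    RedundantRunner(Function<I, O> oldImpl, Function<I, O> newImpl) {
        this.oldImpl = oldImpl;
        this.newImpl = newImpl;
    }

    O apply(I input) {
        O oldResult = oldImpl.apply(input);
        O newResult = newImpl.apply(input);
        if (!Objects.equals(oldResult, newResult)) {
            // A difference means the rewrite isn't characterized well enough yet.
            System.err.printf("mismatch for %s: old=%s new=%s%n",
                    input, oldResult, newResult);
            return oldResult; // trust the battle-tested path
        }
        return newResult;
    }
}
```

You'd wire it up with something like `new RedundantRunner<>(legacyFormatter::format, newFormatter::format)` (hypothetical names) and watch the mismatch log shrink to nothing before deleting the old path.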
So, without getting into detail about what it does: this is a piece of code I wrote a while back, and it really doesn't have any conditionals in it. It uses a couple of tricks to select based on indexes, and it uses a bunch of different functions in Ruby that allow you to transform data in a pipeline, the modern collection pipeline style of programming. And I have to ask you: do you know what this does? Does it make a lot of sense as a program? I don't expect it to, okay.

This is the problem. When you start breaking things out into these pipelines, what you find yourself doing in many cases is using names not from your domain, but rather the names of the higher level operations available to you within the more functional pieces of your libraries. You have LINQ, you have Microsoft Rx, you have the Java streams library, all these libraries that give us high level transformative operations, things like maps and transposes and joins. And when we look at code like this, we're left trying to fill in the blanks for ourselves. What are we trying to do here? I can see what we're doing, but what are we trying to do, right? So this is something we have to work out if we're going to adopt a style that moves toward removing conditionality: how do we convey our intentions in the code as well?

But I do feel this is where a lot of programming is going, quietly. A lot of people are using this more transformative style in their code, and I think it's a net win. There are certain errors that simply can't happen because of the way I've structured some of the code I have here; certain edge cases just melt away because of the fact that I'm performing these transformative operations. With less conditionality, we can often produce code that's a bit less error prone, but there is the readability issue that we have to confront as well. So I've been throwing around a name for this. I like to call it edge-free programming. It's not strictly about using collection pipelines like this; it's also about being able to write code which uses fewer conditions and yet allows good things to happen in the presence of errors. I think it's an interesting thing to try to grapple with.

So, to conclude: I've gone through a bunch of different things here, from deleting code, to writing enough tests to be able to delete code and systematically rewrite things, and then ending up with this notion of trying to remove conditionality from our code. But the main thing I want to get across is that in development, and this is really cool to talk about at an agile conference, we quite often get into this situation where we say, ah, my product owner wants this, my customer wants this, and we have these ideas about what needs to be done. But the thing we really haven't done very well in a lot of agile is close the loop between knowledge of the features we have and how they actually impact the code over time.
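The code on the slide is Ruby; here's a rough Java streams analogue of the style, with a made-up domain, showing both the appeal and the naming problem: no if blocks anywhere, but the operation names come from the library rather than the domain, so a named intermediate result helps carry intent.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A conditional-free collection pipeline (hypothetical domain: summing
// order totals per customer). Note the readability trade-off: filter,
// collect, and groupingBy are library words, not domain words.
class OrderReport {
    record Order(String customer, double total, boolean cancelled) {}

    static Map<String, Double> activeTotalsByCustomer(List<Order> orders) {
        // A named intermediate result gives this pipeline stage a domain
        // name the operators themselves lack.
        List<Order> activeOrders = orders.stream()
                .filter(o -> !o.cancelled())
                .collect(Collectors.toList());

        return activeOrders.stream()
                .collect(Collectors.groupingBy(
                        Order::customer,
                        Collectors.summingDouble(Order::total)));
    }
}
```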
Right, so you end up in a state where you say, here are some features, I want to add them to a code base, and we don't really realize what the impact is going to be over time. We end up muddying things and making them worse and worse and worse. So if we're able to take the time to actually understand these impacts, to dig in and discover whether we know enough about how a component works to see whether a feature is actually a good idea, we're in a better state than we would be otherwise, right?

How many people have actually rejected features in conversation with the business? Said, well, I know you kind of want that, but this really isn't a good idea right now. Have you ever had that conversation at all? Yeah, more and more people I see are having that kind of conversation, and I think it's pretty valuable. It's not that you want to push back to the degree that people think you're a barrier to getting things done. But quite often, as developers, we understand the code, or we should understand the code, intimately, and understand what its readiness for change happens to be. Part of being able to do this is understanding the code well, as I said, and you really can't understand the code well unless you have this kind of usage information: unless you know enough about how a particular piece of code is being executed, whether it's being executed at all, and how it fits into the value chain of the features you have within your development.

So I think this is where we need to go. We need to make people understand that the quality of the code base and its readiness for change are vital to a business's understanding of what it can do and when it can do it, and to keeping development sustainable over time within an organization. And I just encourage you to start investigating approaches like this, so you can instrument your code and discover more about what's being used and what isn't.

StatsD: is anybody here familiar with the StatsD protocol at all? Yeah, using that is also very handy. You send out packets when particular areas of code get hit, keep track of areas of code that aren't being hit all that often, do some analysis to figure out why they aren't being hit, understand where that fits in the value proposition of the features those pieces of code enable, and then make decisions as a business about whether you need those things or not.
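As a sketch of that (using the standard StatsD counter format, with host and port as assumptions for illustration), a fire-and-forget UDP counter is cheap enough to drop onto production code paths:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Minimal StatsD counter: fire-and-forget UDP packets in the standard
// "<bucket>:<value>|c" format. Paths whose counters stay flat over a
// long window become candidates for pruning.
class StatsD {
    private static final String HOST = "localhost";
    private static final int PORT = 8125; // conventional StatsD port

    static void increment(String bucket) {
        byte[] payload = (bucket + ":1|c").getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(
                    payload, payload.length, InetAddress.getByName(HOST), PORT));
        } catch (Exception e) {
            // Metrics must never break the application; swallow and move on.
        }
    }
}
```

A call like `StatsD.increment("billing.legacy_path.hit")` at the top of a suspect path feeds the counters you analyze later.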
The less code you have, the more you can add and still keep things in your head, or in the team's collective head, understanding the system well enough to be able to change it properly. So that's what I wanted to talk with you all about. Do you have any questions at all? Okay. Yeah.

Q: It's an interesting example, the pipeline example. Because the striking thing about it is: okay, it's unconditional, but it's also quite nameless, right? There are almost no names to guide you through that pipeline. So that seems like a syntactic gap.

A: It is, and I'm not really sure how to naturally approach this yet. I know Martin talks about it in his refactoring book: he mentions the notion of an explaining variable. You can break the pipeline and say, look, this is a name for the intermediate result. I've experimented with commenting at different stages to approach these things. So I do not think this is a panacea at all. I don't think we really know enough about how far to push this particular style of programming, but I do think there's something there, and I just don't know quite where we're going to end up. So, yeah.

Q: Two quick questions. Regarding probes, you showed an example in Ruby. Can we do this with logging?

A: Yeah, you can. The key thing is knowing when you introduced a particular logging statement and knowing how long it's been there. You're testing for a negative, for the absence of a call, and that's an interesting thing.

Q: There's also the approach of introducing new code behind feature flags. Could we introduce retirement flags as well: flag code we want to retire, and enable or disable it with a feature flag?

A: Yeah, retirement flags. That's valid; it's a useful thing to do. Fair enough.

Q: You talked about characterization testing, writing each characterization test manually and asking yourself, good behavior, bad behavior, and you suggested it's best to collect information and make that decision later. My approach to characterization testing has been almost to turn my brain off, go through every set of possible inputs I possibly can, and just completely unintelligently capture the return value or the exception and lock it in place, so that I have as rigid a test harness as possible. I may be encoding a bunch of really bad behavior in my rewrite, but if I can flag those cases as nonsensical while I'm in the new code base, I can theoretically go back and clean them up later, because I've noted: this doesn't make sense, or this is stupid. Is the workflow I just described appreciably different from how you typically talk about it?

A: I think it might just be a matter of preference. For me, maybe I can't turn my brain off enough to not create a model. I'm always trying to discover a model, and particularly when you look at the sheer combinatoric scope of inputs you can give to a particular piece of code, it helps me to just ask: what happens when I do this, and what happens when I do that? Then I look at the results and say, huh, that's interesting, and use that as a baseline for discovering what I'm going to experiment with next. I don't see any problem with what you're doing. I think it's fine; it might just be a matter of mindset and how people approach things differently. So, yeah. Any other questions, comments? Yeah.

Q: I just wanted to know your opinion. A few libraries support a kind of obsolete flagging on methods in code.
Q (continued): So do you prefer that kind of approach as well, where you flag code as obsolete and then later, maybe in the next version, just delete it?

A: What kind of libraries are you talking about?

Q: I'm talking about Microsoft, from that perspective.

A: Yeah, I know there's stuff they're doing with that. I think it's a nice approach too. I just haven't used it, so I don't really know.

Q: They have something like a directive: when you declare a class or a function module, you can mark it as an obsolete module, and people use it at their own risk if they want to.

A: Fair enough. Yeah, it seems good. I haven't done that, though. Okay. All right. Thank you very much.