A while back I found an online community that had a set of free art tutorials, but it's been so long since I started them that I figure I'd better just go back to the drawing board. I have a running joke with the regulars in the THUNK Discord channel: I keep a tally of how many times I've taken an episode-length script, hit Ctrl+A, and deleted the whole damn thing, suggesting that the more scripts I've flung in the garbage, the closer I might be to actually recording something. Rewriting a few pages of text isn't that big of a deal, but that wicked urge to throw everything out and start over from a blank page is a constant temptation in all sorts of projects, regardless of how much time I've spent on them: CAD models with hundreds of parts, costumes that have taken me days to sew together. Whenever I've been working on something for long enough, the thought inevitably creeps into my head: wouldn't it be easier to just start from scratch? Although it's sometimes helpful to totally reboot a project that has deep structural problems, it's just as common to realize, after I've spent hours or days redoing everything, that I'm more or less back where I started. The old weird edge cases have been replaced by new weird edge cases. The compromises, bugs, and trade-offs have been exchanged for different ones. The fun creative energy of laying out rough architecture and making big, important decisions turns into frustration and disappointment as I work through the details of my shiny new vision and find it not quite as shiny as I imagined. For veteran coder and Stack Overflow co-founder Joel Spolsky, that impulse to discard and rebuild is the source of much frustration. In an essay titled "Things You Should Never Do, Part I," Joel attributes the prolific failure of the Netscape 6 web browser to the company's decision to totally rewrite all their code from scratch, a decision that ultimately cost them years of development and almost all the market share they still had in the year 2000.
The Netscape 5 code was, by any estimation, a total mess. Many coders would have bailed on it much sooner, but Spolsky argues that it only seemed messy because it was battle-tested, scarred from each weird bug and edge case it had been amended to fix. Yes, there were bloated functions full of inelegant spaghetti code, but each addressed some persistent problem, a rare event or weird user request that had taken someone weeks to figure out, and they worked. You could start over from a blank page and maybe make something prettier, but if your brilliant new design didn't address those issues, you'd trade proven functionality for slick-looking, untested code. Code that may very well become just as hairy once all the kinks are worked out. For Spolsky, the motivation to rewrite a program from scratch instead of editing or refactoring the existing code isn't driven by any realistic evaluation of how much work it's going to take or anything like that. It stems from the fact that writing new code is fairly easy compared to reading and understanding old code, especially old code written by someone else that's not particularly readable. Now, if you're not a programmer, Spolsky's critiques might seem like a nerd complaining to other nerds about nerd stuff, but I think he hits on something important. G.K. Chesterton famously wrote that, before demolishing a fence, one ought to consider carefully why someone had gone to the effort of building a fence there in the first place, especially if it appears pointless on first inspection. It seems both Chesterton and Spolsky recognize our sometimes misplaced enthusiasm for creativity and innovation, especially if it means not needing to puzzle out someone else's work, our unfounded optimism that the new thing will inevitably be better, with fewer mistakes, for some reason.
Spolsky scoffs at this attitude, saying it's important to remember that when you start from scratch, there is absolutely no reason to believe you're going to do a better job than you did the first time. You're just going to make most of the old mistakes again and introduce some new problems that weren't in the original version. That might sound a little harsh, but a 2010 study seems to back him up on this. A series of test subjects were instructed to build towers out of dry spaghetti and marshmallows, each staggered by a couple of minutes so people who were getting ready to start could watch while others finished up their builds, maybe taking notes or learning from their mistakes. This cycling was intended to simulate a sort of generational evolution of knowledge: you get to observe what others before you have done, then jump in and either try to iterate on it or fork off and try something new. The experimenters found a clever way to influence the bias for innovation or adaptation by tweaking the test slightly. It turns out, if you tell people you'll measure their tower right when they finish building, they tend to be more creative and wacky. If you tell them you'll measure it in five minutes or so, if it's still standing, they tend to be more conservative, adopting strategies that they see working. Okay, so we have a bunch of tower builders who are innovating and trying wild stuff, and a bunch of tower builders who are nervously copying whatever architectural tradition looks stable, with some minor variation. Who do you think ends up with the tallest towers at the end, on average? Both. They both do. There's no statistical difference in outcomes. Both groups improve slowly over time. This might be surprising, but if you think about it, innovators are more likely to find great new ideas, but they're also more likely to make costly mistakes.
Even if they discover something great, if the next generation of radical innovators is also excited to try new stuff, they aren't likely to copy it. On the other hand, incrementalists always have a solid starting point and learn from previous mistakes, but improvement between generations is slow. The spaghetti towers seem to vindicate Spolsky's contempt for optimism about radical reinvention. There's no reason to expect any novel attempt to be an improvement on what's come before. Best case, you find something better that gets subsequently forgotten. Worst case, you accidentally repeat some dumb mistake and it's a disaster. Still, if it's going to be the same amount of work to figure out the old code and to debug the new code, why not start from scratch? If you're going to net similar results with a similar amount of work, why not do the thing that seems more fun? There are two possible answers to that question. The obvious answer is that a spaghetti tower failure isn't quite as big a deal as a hospital, flight control, security access, power grid, or missile defense system failure. If there's anything important riding on the code being continuously functional, it's always going to be better to go with the tower that's been standing and continues to stand. But there's another study that we're going to look at that directly addresses the mental labor we spend trying to understand old solutions, and it suggests that there's something important at stake when we're considering bailing on that effort and starting over. You may remember cultural evolution theory from episode 197: the idea that the real engine behind all human intelligence is our capacity to absorb culture, the expanding body of knowledge and tradition that evolves to be smarter and more adaptive over generations as bad ideas are culled and good ideas propagate.
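The generational dynamic of the tower experiment can be sketched as a toy simulation. To be clear, this is my own invented model with made-up numbers, not the study's actual data or methodology: "innovators" try a fresh random design each round, with a high chance of collapse; "imitators" copy the tallest surviving design so far and add a minor variation.

```python
import random

def simulate(strategy, generations=200, seed=42):
    """Toy sketch of the spaghetti-tower cycle (all numbers invented).
    Returns the height of the tallest tower that stayed standing."""
    rng = random.Random(seed)
    best_standing = 1.0  # tallest surviving tower seen so far
    for _ in range(generations):
        if strategy == "innovate":
            attempt = rng.uniform(0.0, 100.0)  # wild new design
            collapsed = rng.random() < 0.5     # risky builds often fall over
        else:  # "imitate"
            attempt = best_standing + rng.uniform(-2.0, 4.0)  # minor variation
            collapsed = rng.random() < 0.1     # proven designs rarely fall
        if not collapsed:
            best_standing = max(best_standing, attempt)
    return best_standing

tall_innovate = simulate("innovate")
tall_imitate = simulate("imitate")
```

With parameters like these, both strategies tend to ratchet upward over many generations, by different routes: innovators through lucky outliers, imitators through slow accumulation, loosely mirroring the study's "no statistical difference" result.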
Cultural evolution theory is a decent explanation for all sorts of seemingly unrelated stuff, like the peculiar way the human body is constructed compared to other animals, or the slow increase in average IQ, or the way human babies aren't smarter than chimpanzee babies unless they have a chance to learn from others. But here's the thing: if culture is slowly accumulating more and more intelligence over time, doesn't that mean individual humans are going to need more and more time to absorb it? Researcher Alex Mesoudi noticed several phenomena that seem to suggest a general trend of increasing costs for acculturation. The average age of Nobel laureates is going up, inventors are submitting fewer and fewer patents every year, and the rate of landmark scientific discoveries is decreasing. Mesoudi suggests things are slowing down because people are spending more time downloading the increasingly complex information they need to make useful progress in their field, delaying the date when they've finally learned enough to ask good questions and answer them, which can be a problem. At some point, the time you'd need to understand the accumulated cultural knowledge about some topic becomes equal to a human lifespan, and then there are no more discoveries in that field. Mesoudi built a few different simulations of this idea and found that the maximum accumulated knowledge of a virtual culture depends a lot on the behavior of its members. If each generation spends its time learning tidbits of knowledge at random, or copies the knowledge of its smartest ancestor, the complexity of the culture's knowledge maxes out relatively quickly, even with lots of innovation. Individuals simply won't have the building blocks they need to synthesize new discoveries if they can't stand on the shoulders of giants, or if they get too hung up on emulating one specific giant, with all their unique blind spots.
However, if individuals take the effort to compare ideas, picking and choosing the ones that capture the greatest complexity of understanding, the total information capacity of their culture skyrockets. If you squint at these admittedly abstract simulations out of the corner of your eye, it kind of seems that the energy individuals invest in parsing and understanding the work that's preceded them isn't just good for helping them come up with better ideas. It's absolutely vital for ensuring that their culture continues to get smarter over time. Now, I can already hear a smattering of skeptical typing from people who have grown accustomed to very small p-values, and yes, I admit, these papers are fluffy. But what I find interesting is the story they tell alongside Spolsky's maxim. On the one hand, there's no reason to expect radical innovation to net better results than simply adapting existing methods and improving them incrementally. Revolutionary discoveries end up balanced out by catastrophic failures, and even if you do find a really good solution to the problem, the next generation of radical innovators will just ignore what you've accomplished anyway. On the other hand, if you don't take the time to survey the landscape of past solutions, to understand what's failed and what's worked, you're not going to be able to take advantage of the accumulated cultural knowledge at your disposal, and you're kneecapping your culture's ability to build on the lessons of the past. Those who do not study history are doomed to... just doomed. For me, that makes sense, and it paints a compelling argument for thinking twice before giving in to that little voice in my head that says starting over would be easier. But what about you? Do you think Spolsky's right, that reading, refactoring, and updating existing work is categorically superior to throwing everything out and starting over? What fields and subjects do you see suffering from wheel-reinvention-itis?
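The three learning strategies described above can be sketched in a few lines of code. Again, this is a loose illustration I've invented, not Mesoudi's actual model: each individual inherits a repertoire of trait values, transmission is slightly lossy (you never learn a perfect copy), and innovation occasionally bumps one trait up. All parameters are made up.

```python
import random

def run_culture(strategy, generations=200, pop=30, traits=5, seed=1):
    """Toy cultural-evolution sketch (invented parameters).
    Returns the best total repertoire in the final population,
    a stand-in for the culture's accumulated knowledge."""
    rng = random.Random(seed)
    population = [[1.0] * traits for _ in range(pop)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop):
            if strategy == "random":
                model = rng.choice(population)      # copy anyone at random
                child = [v * rng.uniform(0.8, 1.0) for v in model]
            elif strategy == "copy_best":
                model = max(population, key=sum)    # emulate one "giant"
                child = [v * rng.uniform(0.8, 1.0) for v in model]
            else:  # "compare_and_combine"
                models = rng.sample(population, 5)  # survey several models,
                child = [max(m[i] for m in models) * rng.uniform(0.8, 1.0)
                         for i in range(traits)]    # keep the best of each trait
            i = rng.randrange(traits)               # occasional small innovation
            child[i] += rng.uniform(0.0, 0.3)
            new_pop.append(child)
        population = new_pop
    return max(sum(ind) for ind in population)
```

The interesting knob is the third branch: because comparing several models and keeping the best version of each trait offsets the loss in transmission, the "compare and combine" culture can sustain a higher knowledge ceiling than one where everyone copies a single ancestor.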
Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to, blah blah, subscribe, like, and share, and don't stop THUNKing.