As software developers, we build information systems that connect people and ideas. Often those ideas involve managing complexity, and when we manage complexity it's very helpful to have simplifications, mental shortcuts that let us deal with complex topics more easily. But these simplifications can also manifest as cognitive biases with unintended consequences: we simplify, and end up addressing a problem different from the one we set out to address.

Let's go back to 2002. It was the year of Linux on the desktop. Walmart had boxed sets of Mandrake Linux, and having read about Linux but being limited to dial-up, I was excited to finally try out the operating system. My first lesson was how to partition a hard drive. My second lesson was explaining to my parents and siblings what happened when I partitioned the hard drive of the family PC. Over time I became very excited about open source development, exploring different desktop environments, image tools, and publications. I was becoming an open source zealot. And it wasn't just fanaticism for Linux; it was hatred of all things Microsoft. Except for their mice. They made good mice. The feeling was so intense that I actively avoided the book Code Complete because it was published by Microsoft Press. Only after several years of seeing it, and assuming it couldn't be good because of its publisher, did I finally read it on the recommendation of many people. Years later, when I studied cognitive biases, I recognized this as selection bias: I had limited the inputs I was willing to consider, and was living in self-inflicted ignorance.

Like many developers, I encountered design patterns while learning to program. Patterns make it easy to foster a sense of shared understanding and a common vocabulary for different problems. Before reading about design patterns, I was independently rediscovering MVC in every PHP application.
I was also creating unnamed monstrosities, whereas now I had names for them. A danger of a shared vocabulary, though, is that it isn't always accompanied by shared understanding. As a developer familiar with ORMs, I know exactly what a save does: I build a model and save it, and that triggers a SQL INSERT or UPDATE statement. But not if the ORM is implemented something like this, where suddenly it's dealing with much more than persistence. What should have been an obvious save operation on an ORM object now requires digging through extensive operations across the code base simply to understand how an object might be saved. This sort of setup breaks the contract of a simplification: it looks simple, but it's no longer possible to understand what's actually happening.

A more honest approach might be to use a different pattern. (These names deliberately use pattern names for illustration purposes.) In a case like this, it might not be obvious at first glance what the code does, but digging into the implementation clearly reveals where persistence occurs. This practice of introducing clarity is extremely useful in development, and it can foster shared understanding that is not just about having common concepts but also about having a shared definition of what those concepts mean.

We've all heard about rapid change in the software industry and how things are constantly evolving; sometimes it feels that one has to run just to keep up. Sometimes this quest for novelty is itself a manifestation of a simplification: it replaces the question "why are we struggling to implement this?" or "why are we less competent at this than we should be?" with the question "are we using the newest possible thing?" Like any consideration, there are many nuances to evaluate, and the propensity for choosing something new may well be warranted in many cases.
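Since the slides aren't reproduced here, a minimal sketch of the contrast being described, with all names invented for illustration, might look like this: an Active Record-style `save` that quietly does more than persistence, versus a repository whose `save` does nothing but persist, with the extra work moved into an explicitly named service.

```python
class FakeDb:
    """In-memory stand-in for a real database."""
    def __init__(self):
        self.rows = []

    def insert(self, table, fields):
        self.rows.append((table, fields))


db = FakeDb()
sent_emails = []  # stand-in for an email service


class ActiveUser:
    """Active Record-style: save() quietly does more than persistence."""
    def __init__(self, email):
        self.email = email

    def save(self):
        db.insert("users", {"email": self.email})    # what a reader expects
        sent_emails.append(("welcome", self.email))  # hidden side effect


class User:
    """A plain model with no persistence logic of its own."""
    def __init__(self, email):
        self.email = email


class UserRepository:
    """Repository-style: save() persists, and nothing else."""
    def save(self, user):
        db.insert("users", {"email": user.email})


class RegistrationService:
    """Side effects live behind an explicitly named operation."""
    def __init__(self, repo):
        self.repo = repo

    def register(self, user):
        self.repo.save(user)                         # persistence, visibly
        sent_emails.append(("welcome", user.email))  # email, visibly
```

In the second arrangement, a reader who only cares about persistence can stop at `UserRepository.save`; a reader wondering where the welcome email comes from finds a service whose name says so.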
It's difficult to know with certainty when something new should be explored and when the real problem is that an existing tool simply isn't being used effectively. Of these biases, the appeal to novelty is perhaps the one that arises most easily in software development teams.

Here's another one. As developers we quickly become familiar with different tools, and having experience with particular families of languages, we can reasonably quickly understand what's going on with a new framework or tool, at least enough to work through a tutorial and build a simple application. But the challenges of a real application, one that will be developed for months or even years, are quite different from the obstacles one encounters in a two-hour or four-hour exploratory session. Being mindful of this, it's important to consider how unanticipated obstacles might arise, and how mastery of a small area may not be representative of how easily something can be used in a larger system.

These mental shortcuts are something like an autopilot. They make it easier to get from one place to another, and in many cases they work extremely well. But when they're wrong, they can have deadly consequences. Thinking back to the lessons I learned from a book by Microsoft, one simple way to improve as a developer, both in mastering ideas and in avoiding the dangers of unproductive mental shortcuts, is to embrace the reality of cognitive biases: assume that you are not immune to the effects of mental shortcuts, and work to extend the scope of safe assumptions people can make. If you're reading a code base and have a strong inclination, based on how the code is written, about what something should be doing, and it isn't doing it that way, then every time you read that code in the future you'll have more difficulty than necessary maintaining or improving it.
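A minimal sketch of that mismatch, with names invented for illustration: a method whose name promises a read but performs a write forces every future reader to re-verify their assumptions, whereas a name that announces the mutation keeps the reader's autopilot trustworthy.

```python
class MisleadingCounter:
    """get_value() reads like a pure accessor..."""
    def __init__(self):
        self.value = 0

    def get_value(self):
        self.value += 1   # ...but it silently mutates state
        return self.value


class HonestCounter:
    """next_value() announces the mutation in its name."""
    def __init__(self):
        self.value = 0

    def next_value(self):
        self.value += 1
        return self.value
```

Calling `get_value()` twice on the first class returns two different results, exactly the kind of surprise that erodes trust in the rest of the code base.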
Finally, by extinguishing common causes of unproductive mental shortcuts, we can greatly increase the reliability of our interactions with the ideas in whatever project we're building. I'm as guilty of this as anyone, but it is my goal to eliminate words like "obviously," "basically," "simply," or "just" when describing the functionality of rather sophisticated systems that may not be obvious at all. Assuming that our inclinations are correct, and failing to note the ways in which we can easily lead ourselves astray, reduces the quality of our intellectual work. So, in short: recognize autopilot where it exists, improve it, and avoid crashing on faulty assumptions. Thank you.