If you housed all of your army's chief strategists in just one armored vehicle, would it be a think tank?

Okay, so first, on a channel that's mainly dedicated to the idea that thinking is an enriching, entertaining, and important pastime, let me clarify something: I don't think that thinking on the whole sucks. What I'm actually addressing here is a particular aspect of human reason, which does suck in a very real and measurable way. So just keep that in mind.

Let's start with speech recognition. The first speech recognition device was built at Bell Labs in the 1950s, and it worked by recognizing formants, the characteristic resonant frequencies of speech sounds. When someone says the word "seven," there's a distinctive acoustic pattern to it, which looks kind of like this. So, intuitively, to translate a spoken "seven" into text, you tell your program to look for a waveform that looks kind of like this and output a 7 whenever it sees one.

The initial forays into dictation software were exceptionally inaccurate. Human beings are much better at determining whether a particular word is "seven" or some similar-sounding word, because we understand things like word choice and context. So most programmers figured that to make dictation software better, they should sit down with somebody who understands language really well, stuff like syntax and grammar, and develop a way of recognizing and interpreting that structure. That was the initial approach to speech recognition, and speech recognition software continued to suck for a very long time, until some programmers decided to stop trying to figure out the structure of language and just built a table of probabilities instead. In this sentence, for example, each word has certain statistical relationships to the words before and after it. A speech recognition program evaluating those statistics doesn't need a formal rule about how verbs behave around plurals; it just figures it out from the statistics. (There's a little sketch of this idea below.)

So, how did this new approach work out? One of the most famous quotes from that period came from an IBM project leader named Frederick Jelinek: "Every time I fire a linguist, the performance of the speech recognizer goes up." While the linguists were tweaking complicated verb conjugation algorithms, the statistical programmers just fed transcribed conversations into their databases and got better results. Today, voice recognition software, from Cortana to Dragon to Siri, is primarily based on those naive statistical methods. You can actually see them at work if you watch Google's voice command app closely: the first words it transcribes will change dramatically as it processes later ones.

We've talked before about software being better at figuring stuff out than we are. That's old news; all hail our big data overlords. But what's especially interesting to me in this example is that the best efforts of the greatest minds to construct a supposedly well-reasoned method for computer speech analysis did nothing but hold back its development. We tend to think of reason and thought as some of the most powerful forces known to humankind, and as a designer, an engineer, and a guy who makes a YouTube show called THUNK, trust me, I do too. But I also think there's a definite tendency to value answers we get from careful analysis simply because thought was used to get them in a way we understand, even when blind empirical methods might give us better results.
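To make that "table of probabilities" idea concrete, here's a minimal bigram-model sketch in Python. The tiny corpus, the function names, and the smoothing floor are all made up for illustration; real recognizers pair an acoustic model with vastly larger language models.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows another in transcribed text."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1
    # Convert counts to conditional probabilities P(cur | prev).
    probs = {}
    for prev, following in counts.items():
        total = sum(following.values())
        probs[prev] = {w: n / total for w, n in following.items()}
    return probs

def score(probs, sentence, floor=1e-6):
    """Rough likelihood of a candidate transcription under the bigram model."""
    words = sentence.lower().split()
    p = 1.0
    for prev, cur in zip(words, words[1:]):
        p *= probs.get(prev, {}).get(cur, floor)
    return p

corpus = [
    "I dialed seven seven one",
    "there are seven days in a week",
    "the heavens opened up",
]
model = train_bigrams(corpus)
# Given two acoustically similar candidates, prefer the likelier word sequence:
print(score(model, "there are seven days"))   # higher
print(score(model, "there are heaven days"))  # much lower
```

The point isn't the code; it's that nothing in there knows any grammar. Given two acoustically plausible transcriptions, it simply prefers the word sequence it has seen more often.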
Another example: Deep Blue, the chess computer that beat Garry Kasparov in 1997, wasn't programmed to be unpredictable or creative. It just brute-forced its way down a search tree of possible moves. Kasparov remarked on how often he was surprised by the inventiveness of a search function, something that had no clue what might be considered clever. The first chess programs suffered the same fate as the first speech recognition software: people were hung up on trying to make them work the way humans think, instead of just optimizing a really fast, really naive search function. (There's a bare-bones sketch of that kind of search below.)

This sort of blind empirical approach has also entered science. We all learned the scientific method in school: you form a hypothesis, you design an experiment to test it, and you examine the resulting data for patterns. But a relatively recent trend in the sciences has been non-hypothesis-driven studies, which skip everything except the gather-and-evaluate-data bits, basically shoveling numbers into a computer and telling it to look for correlations without any clue as to what they might be. Genome-wide association studies have gotten a lot of attention in recent years, searching for genetic similarities between people with certain diseases, and they've prompted a ton of very interesting research, discovering potential genetic bases for everything from blindness to obesity to Crohn's disease.

There's also been a fair amount of criticism of the results these methods supposedly discover. You probably know that modern scientific papers generally report the probability that their results are just flukes of measurement, even if that chance is only one in a million. Well, genome-wide association studies, and non-hypothesis-driven research in general, test millions upon millions of correlations, so encountering a one-in-a-million fluke is actually pretty likely. (The quick calculation below shows just how likely.) Nonetheless, non-hypothesis-driven research is still regarded as an important source of scientific information, not least because it operates in a space outside human intuition, which, as we've seen, isn't always the best predictor of scientific laws.

These examples are just the tip of the iceberg. Genetic algorithms, deep learning programs: there's no shortage of processes that benefit from getting reason out of the way and letting brute force or random trial and error crank through possibilities. That's not to say reason isn't still king. Evolution can do a lot of amazing stuff, but it needed to come up with animals that could do math before it got life onto other, inhospitable planets.

Still, you can't help but wonder how many systems we've attempted to formalize and generate flowcharts for that would be better off without them, with blind empiricism running the show instead of reason. Maybe all those interviews and screening processes we do before hiring someone would be grossly outperformed by a dumb resume-sorting algorithm. Maybe a naive OKCupid search program would be better at figuring out whom to set up on a date than the actual users are. Maybe the absolute best designs aren't going to be dreamt up by visionaries, but spit out by search functions. And maybe we need to think a little harder about when we're the linguists who need to be fired.

What sorts of systems do you think would benefit from a random statistical approach?
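For the curious, here's the skeleton of that kind of brute-force game-tree search: plain minimax. Deep Blue's real search was far more sophisticated (custom hardware, pruning, a hand-tuned evaluation function), and the `game` interface here is purely hypothetical; this is just the naive idea in code.

```python
def minimax(state, depth, maximizing, game):
    """Exhaustively search the game tree to a fixed depth.

    `game` is a hypothetical interface with:
      game.moves(state)        -> legal moves from this position
      game.apply(state, move)  -> the resulting position
      game.evaluate(state)     -> a score from the maximizer's perspective
    """
    moves = game.moves(state)
    if depth == 0 or not moves:
        return game.evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for move in moves:
            value, _ = minimax(game.apply(state, move), depth - 1, False, game)
            if value > best:
                best, best_move = value, move
    else:
        best = float("inf")
        for move in moves:
            value, _ = minimax(game.apply(state, move), depth - 1, True, game)
            if value < best:
                best, best_move = value, move
    return best, best_move
```

Nothing in it knows what "clever" means; the surprising moves fall out of exhaustively checking more lines of play than any human ever could.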
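And to put a number on the fluke problem, assume (purely for illustration) a million independent tests, each with a one-in-a-million chance of a false positive:

```python
# Probability of at least one "one in a million" fluke after
# testing n independent correlations: 1 - (1 - p)^n.
p = 1e-6       # per-test false-positive probability
n = 1_000_000  # number of correlations tested
prob_at_least_one = 1 - (1 - p) ** n
print(f"{prob_at_least_one:.1%}")  # ~63.2%, i.e. more likely than not
```

That's part of why fields like genomics apply much stricter significance thresholds when they test huge numbers of correlations at once.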
Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blah blah subscribe, blah share, and don't stop thunking.