This video is about disciplinary action for children, not about foreplay. Don't get cheeky. A recently published meta-analysis in the Journal of Family Psychology examined numerous studies on spanking as a disciplinary measure for children. The paper's authors, Elizabeth Gershoff and Andrew Grogan-Kaylor, brought together data from over 50 years of research, collected from over 160,000 children, to answer one simple question: is spanking children a good idea?

In short, probably not. The study shows that spanking is significantly correlated with several negative outcomes as children mature into adults, such as antisocial behavior and mental health problems, and it doesn't seem to be particularly effective at correcting behavior or encouraging short-term or long-term compliance in children, at least compared to other disciplinary measures. But wherever it's been posted, hundreds and hundreds of well-meaning individuals have left comments recounting some anecdote about their own experiences with spanking and assuring everyone that the study must be wrong. In fact, the authors anticipated this, noting that there's a remarkably strong positive correlation between having been spanked as a child and believing that spanking is an appropriate disciplinary technique.

But the problem here isn't that people still believe in spanking despite the evidence; it's that they believe their personal anecdotes are sufficient evidence to discount the study. Now, if you're a fan of science or THUNK, you probably already know that intuition isn't always the best measure of truth. You might even have heard the phrase "the plural of anecdote isn't data," meant to dampen people's enthusiasm for generalizing their own personal experiences. But anecdotal evidence isn't worthless. In fact, it plays a fundamental role in a lot of rigorous scientific research. It can be really important, and we should use it, but we should know what it's useful for.
This is how information flows in science: you make observations and get an idea for how things might work. You collect data about those things. You analyze that data, and then, if it looks like your idea was right, you give it to other people to check. This structure is great for learning new facts about the universe because it flows from specific to general, from isolated events to grand overarching theories, from "hey, this happened" to "hey, these are the rules for things happening."

Anecdotes are absolutely useful to this process. They belong right here at the beginning, as filtered information that helps us recognize potential patterns we might experiment on. Case studies are a crucial part of several fields, including medicine and psychology, and they're essentially anecdotes, which can provide useful insight into the operation of very complex systems, like the human brain or body. Researchers can't test every single combination of variables, and they can't come up with hypotheses in a vacuum. They have to have some idea of what might be going on, and anecdotal evidence is often enough to get ideas percolating as to what that might be. For example, when Phineas Gage got a tamping iron through his head and kept on living, it became apparent that the parts of his brain that the iron had destroyed must have had something to do with his sudden change in character. That gave researchers a ton of new ways to think about the brain and how they might do science on it.

So anecdotal evidence has its place here at the beginning of the scientific process. The problems only arise when people try to use it further down the chain, where it doesn't belong. First, anecdotes don't belong in the data collection stage, because they're loaded with observer bias. Objectively verifiable numeric measurements are much more useful for figuring out what's actually going on, because human brains aren't great at being neutral observers.
The sort of stuff that we pay attention to, and how we interpret it, is always colored by things beyond our conscious control. For example, if you go through some sort of traumatic event and rate how upset you are immediately afterward, and then again five years later, it makes some sense that you'd be less upset after having had time to process. But you also end up believing that you were less upset immediately after the event, which makes less sense. Scientific data, on the other hand, is sanitized of as much of that sort of bias as researchers can manage. Double-blind studies, where neither the experimenter nor the test subjects know who's getting real experimental medication and who's getting sugar pills, are a fantastic example of this. Even if you're the most diligently impartial scientific observer ever, the expectation is that you'll construct your experiment so that it wouldn't matter even if you were biased. In contrast, our memories of our personal experiences are subject to everything from how a question is phrased to whether or not we've had lunch yet.

Second, anecdotes don't belong in the analysis of data. In proper scientific research, the numbers should really speak for themselves. Statistical analysis is one of the most powerful tools humanity has ever discovered for finding meaning in numbers. CERN records millions upon millions of data points for each of its experiments, and statistics is the only way it can sift through that data and say, with any degree of justification, "there is a 99.999% chance that this is the Higgs boson." Scientists have to justify which data they include or exclude from their analysis according to strict rules: if it seems like a particular data point might just be an error in measurement, they have to use statistics to justify throwing it out, and if the math says to keep it, they're stuck with it. Anecdotal evidence, on the other hand, is qualitative, not quantitative.
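As a concrete illustration of that "strict rules" idea, here's a hypothetical sketch (not from the video; the function name, measurements, and threshold are all invented) of one common statistical justification for discarding a point: a z-score test, where a measurement is only thrown out if it sits implausibly far from the rest of the sample.

```python
import statistics

def is_statistical_outlier(data, point, threshold=3.0):
    """Return True if `point` lies more than `threshold` standard
    deviations from the mean of `data` -- a crude statistical
    justification for excluding it as a probable measurement error."""
    mean = statistics.mean(data)
    stdev = statistics.stdev(data)
    if stdev == 0:
        return False
    return abs(point - mean) / stdev > threshold

# Invented example measurements clustered around 10.0:
measurements = [9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.1, 10.0]

print(is_statistical_outlier(measurements, 10.3))  # False: close to the rest, the math says keep it
print(is_statistical_outlier(measurements, 25.0))  # True: far enough out to justify dropping it
```

The point of a rule like this is that the experimenter doesn't get to eyeball the data and delete what looks inconvenient; the exclusion criterion is fixed before anyone sees which points it condemns.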
It might feel like our personal experiences are representative of larger trends, but without real data to back that generalization up, all we can really say is, "hey, this happened to me once." Some of the things you experience are going to be abnormal. You might be in the 1% of Prius owners who have had problems with dependability, or the 1% of Fiesta owners who haven't. Without real data to compare that experience to, without rules for determining how weird your anecdote is or isn't, all you've really got is one point, and no way to know whether that's how the world usually works or just how it worked that one time for you.

Finally, anecdotes don't belong in the peer review stage, and that's what's happening with this study: people are citing their personal experiences and saying the paper must be wrong. Peer reviewers of scientific papers are notorious for being absolutely merciless with their criticism, for finding every single possible way that a particular result might not represent what the author thinks it does. They frequently question the methodology, the quantity of data, the statistical tests used, the applicability or reproducibility of the results, any number of things. But if they ever go so far as to call out someone's results as wrong, they had better have an at least equally rigorous study with compelling contradictory results. This is what happened with Andrew Wakefield's purported link between vaccines and autism. He fabricated a few data points to make it look like maybe there was some link, and in response, many researchers meticulously collected huge quantities of conflicting evidence to show beyond any reasonable doubt that no such link exists. It wasn't enough to say, "hey, I got vaccinated when I was a kid and I turned out fine." They needed something bulletproof and better, something even more sanitized and rigorous that contradicted his findings. This is also why assertions like "I feel cold, so global warming must not be a thing" are so laughable.
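The "all you've got is one point" problem from earlier can be made concrete with a hypothetical sketch (the population numbers are invented for illustration): a large sample lets you estimate how spread out "usual" is, while a lone anecdote doesn't even give you a notion of spread to compare anything against.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible
true_mean, true_spread = 100.0, 15.0  # invented population parameters

# One anecdote versus a real sample drawn from the same population:
one_anecdote = [random.gauss(true_mean, true_spread)]
real_sample = [random.gauss(true_mean, true_spread) for _ in range(1000)]

# With 1000 points you can estimate the spread, and hence say how
# unusual any single observation is relative to "usual":
sample_sd = statistics.stdev(real_sample)
print(round(sample_sd, 1))  # lands near the true spread of 15

# With one point, there is no "usual" -- the spread is undefined:
try:
    statistics.stdev(one_anecdote)
except statistics.StatisticsError as err:
    print("n = 1:", err)
```

That's the quantitative version of "no way to know if that's how the world usually works or how it just worked that one time for you": with n = 1, the math literally refuses to answer.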
Scientific findings can be right or wrong, but there is always a meticulous process of data collection and analysis behind their claims, a compelling justification for believing them to be true. Anecdotes are useful to that process because they let us recognize potential patterns and feed them into it, to see whether those patterns actually exist or we're just imagining things. But their place is exclusively at the beginning of that process. An anecdote like "I was spanked and I turned out fine" doesn't really prove anything by itself besides "this thing can happen sometimes in the right circumstances," and next to a more rigorous study, it's really not worth mentioning. I mean, it's pretty clear to me that my childhood probably wasn't typical to begin with.

Have you ever fed an anecdote into a more rigorous process of analysis? Please leave a comment below and let me know what you think. Thank you very much for watching. Don't forget to blah blah subscribe, blah share, and don't stop thunking.