Some years ago, the state of Florida passed a law: families with children who apply for public assistance must be tested for drug use. After testing almost 10,000 people, the state found that drug use among those applying for public assistance was half that of drug use among the general population. Since the state paid for all of these tests, the result was a waste of $100,000 in taxpayer money. The year was 1999.

Twelve years later, Rick Scott, Governor of Florida, passed a law: families with children applying for public assistance must be tested for drug use. After testing thousands of people, the state found that drug use among those applying for public assistance was half that of drug use among the general population. Since the state paid for these drug tests, the result was a waste of $100,000 of taxpayer money. Insanity is doing the same thing over and over again, thinking that you'll get a different result.

But suppose we agree that denying public assistance to addicts is a good idea. Should we trust the results of a drug test? Part of the problem is that the accuracy of any medical test is governed by three factors: sensitivity, specificity, and prevalence.

Sensitivity is how often a test detects a condition. For example, to determine if someone is alive, you can take their pulse at the wrist. However, it's not always easy to take a pulse without some training, so for most people, taking a wrist pulse has low sensitivity: living people often have no wrist pulse detectable by an untrained person.

Specificity is how often the test detects the absence of a condition. For example, to determine if someone is alive, you could cut them and see if they bleed. However, since it's possible for a dead person to bleed, this test is not very specific.

Finally, prevalence is how often the condition appears in the population. This is the trickiest factor because it depends on the population that you're testing.
If you work in a mortuary, then you'll probably see more dead people than if you work in an amusement park.

Many drug test kits are available from different vendors. They often tout high levels of accuracy, but it's important to recognize that accuracy by itself is a meaningless number. Accuracy is strongly affected by the prevalence of the condition in the population being tested. To see how this works, consider the following problem.

Suppose you want to sort a bunch of blue and green tiles. You know each tile is either blue or green, but suppose you're slightly colorblind, so the tiles are somewhat hard to tell apart. Say that 90% of the time you can tell that a green tile is in fact green: if you had ten green tiles, you'd classify nine of them as green, but you'd misclassify the tenth and think it was blue. This corresponds to the sensitivity of your color vision. Similarly, suppose that 90% of the time you can tell that a blue tile is blue. This means that out of ten blue tiles, you would correctly identify nine as blue, but the tenth you'd misidentify as green. This is the specificity of your color vision.

What about the prevalence? Again, it depends on the population that we're testing, but suppose that we have 100 tiles and that 10% of the tiles are green. Let's see how those numbers break down. Of the ten green tiles, our sensitivity tells us that 90% of them, nine, will be identified as green, and one will be misidentified as blue. Meanwhile, our specificity of 90% says that of every set of ten blue tiles, nine will be correctly identified as blue, but one will be misidentified as green. And so we can sort our tiles in this fashion: of the 90 blue tiles, we can expect to classify 81 correctly and misidentify nine of them as green.
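The breakdown above can be checked with a short calculation. This sketch is not part of the original example's wording; it just encodes the stated numbers (100 tiles, 10% green, 90% sensitivity and specificity):

```python
# Confusion-matrix counts for the tile-sorting example:
# 100 tiles, 10% of which are green (the "condition"),
# with 90% sensitivity and 90% specificity.

total = 100
prevalence = 0.10
sensitivity = 0.90
specificity = 0.90

green = round(total * prevalence)   # 10 tiles are actually green
blue = total - green                # 90 tiles are actually blue

true_positives = round(green * sensitivity)   # green tiles correctly called green
false_negatives = green - true_positives      # green tiles wrongly called blue
true_negatives = round(blue * specificity)    # blue tiles correctly called blue
false_positives = blue - true_negatives       # blue tiles wrongly called green

print(true_positives, false_negatives, true_negatives, false_positives)
# 9 green tiles end up in the green bag -- alongside 9 misidentified blue tiles.
```

Note that the green bag holds 9 true greens and 9 false greens: equal counts, even though the test is "90% accurate."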
So now if we go through all the tiles, we'll have a bag of tiles that we've labeled green, except if we actually look in the bag, we'll find that half of these tiles are not actually green; they're misidentified blue tiles. And yet if we look at the overall numbers, we see that we've correctly identified nine green tiles as green and 81 of the 90 blue tiles as blue, which means we've correctly identified the color of 90 of our 100 tiles, so we can claim an accuracy of 90%. But this is what 90% accuracy looks like in this case: half of the tiles that we claim are green are not.

So what are the implications for drug testing applicants for public assistance? Studies suggest that about 10% of the general population uses drugs on a regular basis, so they're like the green tiles in our example. Meanwhile, the sensitivity and specificity of commercial drug test kits range from as low as 70% to 99% or higher. So our 90% accurate test has correctly identified many of the green tiles, that is, found many of the substance abusers. At the same time, our test has misidentified a large number of non-substance abusers. As with everything, you get what you pay for: you could pay hundreds of dollars for a GC-MS test run by professionals, or you could pay under a dollar for a urine test kit bought online that can be used by anyone.

If we're going to deny services because someone is a substance abuser, then we might want to make sure that those we identify as substance abusers are in fact substance abusers. Throughout history, religious leaders have told us that we should help those most in need of our help without asking whether they deserve our help. Even if you reject these teachings and believe that we should deny public assistance to substance abusers, drug testing is an inefficient way to find these people. Most substance abuse advocacy groups and counselors agree that the best tool for identifying those with a substance abuse problem is a well-trained counselor.
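The gap between "accuracy" and "how many flagged people are really substance abusers" can be made explicit. As a sketch (the formulas are standard Bayes'-rule identities, not wording from the original), the positive predictive value, i.e., the fraction of positive results that are true positives, follows from sensitivity, specificity, and prevalence:

```python
def accuracy(prevalence, sensitivity, specificity):
    """Fraction of all test results, positive or negative, that are correct."""
    return prevalence * sensitivity + (1 - prevalence) * specificity

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Fraction of positive results that are true positives (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# With 10% prevalence and a "90% accurate" test:
print(accuracy(0.10, 0.90, 0.90))                   # ~0.9
print(positive_predictive_value(0.10, 0.90, 0.90))  # ~0.5: half the positives are wrong
```

Dropping sensitivity and specificity to 70%, the low end quoted above, pushes the positive predictive value down to roughly 20%: about four out of five people flagged would be false positives.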
In other words, you can tell more about whether a person is a substance abuser by talking to them than by looking at their urine.

Side effects of knowledge include greater understanding of the world around you and an increased demand for evidence from elected officials. Difficulty swallowing lies has been reported by some users, as well as reduced tolerance for flawed arguments, personal attacks, anecdotal evidence, and false equivalences. Other users report becoming addicted to learning. A small percentage of users also exhibit behavior changes, such as fact-checking, rejection of internet memes, and political activism. Educational videos slow but do not stop the spread of misinformation, propaganda, and alternative facts. If confronted with a serious case of these things, seek immediate help from non-profit institutions of higher education or trained knowledge professionals such as scientists, researchers, or investigative journalists.