All measurements are subject to error. All measurements are subject to the specificity of the tool I'm using to measure. Maybe I just used a school ruler and guessed that it was 0.375. That was my estimation.

Then there's this group of fallacies that he refers to as fruit packing, right? There are different ways you can package fruit, and this is kind of a grocer's trick: if you have old fruit that you want to sell, how do you do it? One thing you can do is cherry picking. Classic selection bias. You find the studies that confirm what you think is going to be true. You find the subjects who are susceptible to the kinds of intervention effects you want to prove. You put only the bright, shiny cherries out front so that people grab the bunches, put them in their cart, take them home, and only then realize that the other half are rotten. You pick out the cherries that are particularly shiny and juicy and good for what you want them to say to the person, in this case the client buying the cherries. This happens all the time. Medical research is riddled with this kind of thing. Exercise research is riddled with this kind of thing. People are incredibly susceptible to confirmation bias, to wanting to find the information that agrees with their preordained conclusions.

James was telling me about one of his students, a second-year student, I think. They wanted to run a trial, and they were asking him about all the statistical techniques they could apply to it. And he said, but you haven't run the research yet. And they said, yeah, yeah, but we want to know how we're going to manipulate the data once we get it. And he said, no, no, you have to wait for the data, then understand the data, and then decide whether to run certain tests on it. You can't pre-program, oh, I'm going to use this really cool technique. Well, do you need to? I don't know.

So, another one, of course: comparing apples to oranges, otherwise known as regression bias, regression to the mean, right? A lot of times people look at intervention effects without recognizing that the observed effect is actually an outlier, or just sits at the high or low end of the range they're looking at. This happens in education. This is the classic school superintendent's problem. You're in a district, test scores are falling, educational outcomes are falling, parents are getting mad. So they throw the old guy out and the new guy comes in. The new guy looks at the test scores and says, oh man, our kids can't read and write, so I'm going to come up with some hokey new program that I'm going to call whatever, Race to the Top, No Child Left Behind, who knows what it's going to be called, and I'm going to say this is going to improve results. So they contract with the testing company, they write up a new test, they test the students, claiming to be able to compare these scores to the old scores, and guess what? They're higher. But if you test the students year after year after year on the same metric, they're going to come back down to the mean. You may get a year when they go higher, you may get a year when they go lower, but just because you're testing them differently and using some hokey program, unless you're fundamentally changing something, all you're doing is comparing apples to oranges.
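Regression to the mean is easier to see in a simulation than in prose. Here's a minimal sketch of that superintendent story; the numbers (a true average of 70 points with a 5-point yearly wobble) are made up for illustration, and there is no intervention effect anywhere in it:

```python
import numpy as np

rng = np.random.default_rng(0)

# District test scores: a fixed true mean plus yearly noise.
# There is NO intervention effect anywhere in this simulation.
true_mean, noise_sd, years, n_trials = 70.0, 5.0, 20, 10_000

wins = 0
for _ in range(n_trials):
    history = true_mean + noise_sd * rng.standard_normal(years)
    worst = history.min()  # the year the old superintendent gets fired
    # First year of the "new program" is just another random draw.
    next_year = true_mean + noise_sd * rng.standard_normal()
    wins += next_year > worst  # "the new program worked!"

print(f"Scores 'improved' after the worst year in {wins / n_trials:.0%} of districts")
# Prints about 95% (exactly 20/21): an extreme year is almost always
# followed by a more average one, with no program change at all.
```

The apparent improvement comes entirely from the timing: the firing happens after an extreme year, so the next draw is almost guaranteed to be closer to average.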
And this happens all the time, as I said, in all sorts of health-related, medical-related research. It's very, very easy to say, oh, this group, very small, very isolated, very selected, had this result, and therefore it's going to apply to that group. You see this all the time, not least with mice or lab rats being compared to humans, but even with different groups of humans: different ages, different profiles, et cetera.

And then the last one, which he calls apple polishing. This is just distortion bias, right? This is where you shine up the numbers, you put in some fancy charts, you do things to make results look a lot bigger or more significant. In famous cases they simply adjust the axes on a graph. There's a very famous cholesterol intervention with a particular drug that showed dramatic results, until you realize that the scale on the left was going by tenths rather than whole numbers. Wow, those bars went way down every time, but in terms of the actual variation, it was tiny. The underlying data doesn't even change; they've just adjusted the axis to make the bars look way different. Or sometimes, and this is the funniest thing, I see this in journals, they put a very, very small note that they've used a logarithmic scale. You're like, of course that stuff's going to fly up and there are going to be huge differences. It's all in how you're presenting the data.

So the fundamental one, which I hope at least a lot of you know, is that people mistake the problem of accuracy for the problem of precision. And even then, people fundamentally don't understand what precision is. I pulled this off of Wikipedia; it's a pretty good illustration of accuracy and precision. Accuracy simply means: are you actually getting the correct value, if there were some disembodied, perfect-world way of knowing it? Say I gave a bunch of my students in a physics class pots of water and said, okay, go determine the boiling temperature of water, under standard pressure and other standard conditions. I'm pretty sure I know what the reference value is. It does depend; anybody who knows thermal physics knows there are some variations with the shape and composition of the vessel. But if I give them a standard setup in which to boil water, and fairly accurate thermometers, I know what temperature the water is going to boil at. Now, say the teams of kids come up to me and they all tell me that water boils at about 108 degrees Celsius: one says 108.2, then 107.9, 108, 108.1. That's a lot of precision, but it looks like my thermometers might be off by about eight degrees. Versus accuracy, which represents whether you're actually getting the correct answer.

Precision can be an illusion. Precision simply means repeatability of results. But if you've distorted something, if you have measurement error, if you're doing the same characteristic thing wrong over and over again, you're going to get a lot of repeatability.
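To put numbers on the distinction, here's the boiling-water example in a few lines of Python; the readings are the ones from the story, and the 100-degree reference assumes standard pressure:

```python
import statistics

# The teams' readings from the boiling-water example (degrees Celsius).
readings = [108.2, 107.9, 108.0, 108.1]
reference = 100.0  # boiling point of water at standard pressure

mean = statistics.mean(readings)
spread = statistics.stdev(readings)

print(f"precision (spread of repeated readings): {spread:.2f} C")       # ~0.13 C: very precise
print(f"accuracy  (offset from the true value):  {mean - reference:+.2f} C")  # ~+8 C: not accurate
```

Tight spread, big offset: precise, but not accurate.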
It's like the Texas sharpshooter fallacy; I'll put it up later. A guy out in Texas is bored, so he shoots up his barn one afternoon. Then he notices, wow, there's a nice little group right there where the bullets are pretty close together, so he goes out and paints a target with that group as the bullseye. And he says, look how good a shot I am. And all of his friends say, oh yeah, you're a pretty good shot, look at all those bullseyes you hit. Of course, he just shot the gun randomly; some of the shots happened to group together, and he put the target there. Precision can be an illusion, okay?

Now, this has real-life consequences. You say, oh, this seems awfully academic, what are the real-life applications? Well, for people interested in optimal health, in optimal exercise programs, in maximizing these things about themselves, guess what: blood tests, research studies, the things you do to try to find out what's best for you, all have these problems, measurement errors, precision problems.

Think about blood tests. I know a guy, one of these biohacker guys, and I've talked to Jolly about this: there are people who run their blood work like every month, and as soon as they see something wrong, they act. As soon as something's out of range, or out of what they consider to be the range, they act. And they completely ignore the fact that these blood tests have a certain sensitivity. The kinds of changes they're noting are often smaller than the test's own run-to-run variation. They may not understand the kind of variation that comes in with which lab you send it to. There's a great cartoon about this: the patient goes to the doctor and says, doc, my Self-Stocker 5000 app has detected I have pre-diabetes, and the caption reads, one day, the over-monitoring alerts of Dr. Abbott's monitoring monitor became quite hysterical.

The problem is that if you fall prey to the illusion of precision, you get a lot of false positives, because you don't actually understand what those tests are doing, what they can measure, how precisely they can measure it, and whether those measurements are repeatable. The best example: say some of the guys here are concerned about it, and you go test your testosterone. It's notoriously difficult to measure, if you actually know what goes into measuring free and total testosterone in the blood. It varies by time of day, it varies by stress level, and it can vary a lot just by which lab you send the assay to. So you get a lot of false positives, and then you act on them. That's the heuristic: oh, some expert knows how to design this blood test, so I'm going to act. I'm going to do this program, eat this food, do these things, because my blood test said I have this condition. Well, guess what, you might not actually have that condition. You don't have repeatability, and you don't even know what the accuracy is yet.

Another problem: you may find idiopathic conditions. You may find variations in your blood work, or in your response to a training program or a diet program or whatever, that are actually highly unique to you. All of these tests have reference values, and those reference values are literally just the lab's averages over the human population.
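Here's why monthly blood work practically guarantees "abnormal" findings. This sketch assumes the common convention that a reference range covers the central 95 percent of healthy people, and that repeat tests are independent:

```python
# If a reference range covers the central 95% of healthy people (a common
# convention), a perfectly healthy person still lands "out of range" 5% of
# the time on any single independent test.
p_in_range = 0.95

for n in (1, 12, 30):  # one test, a year of monthly tests, a 30-marker panel
    p_at_least_one_flag = 1 - p_in_range ** n
    print(f"{n:2d} tests -> {p_at_least_one_flag:.0%} chance of at least one 'abnormal' result")

#  1 tests ->  5% chance of at least one 'abnormal' result
# 12 tests -> 46% chance of at least one 'abnormal' result
# 30 tests -> 79% chance of at least one 'abnormal' result
```

And that's before you even add in lab-to-lab variation or the time-of-day swings in something like testosterone.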
You may run a little bit hot, you may run a little bit cold on some of these measures, so to speak: not literally temperature, but your blood work may just naturally run higher or lower because of some variation in your genetic makeup, some variation in how you function. So you're going to find some idiopathic condition and try to treat it, when it actually isn't causing you any problem.

And then the worst possible consequence, and I wish McGuff were here to hear this because I know he loves this one: you might engage in what they call iatrogenesis, doctor-caused pathologies. You go to a doctor and say, doc, doc, do something. The doc does something, it's the wrong thing, and it makes you worse. These are the worst kind of intervention effects. The doctor says, oh, you're feeling a little bit this, a little bit that, you've got whatever syndrome, and I'm going to throw a drug at you, throw a treatment at you, and it actually causes you harm, and then you're chasing another problem: a problem created not by any organic condition, but by the doctor or the practitioner themselves.

So this is a real problem. You have to be very careful about using numbers, using statistics. Of all the heuristics, of all the biases that humans have, statistics, risk, and probability are the things people have the hardest time building a proper intuitive sense for, because in a sense our brains are not designed for this in today's world.

So, another illusion: the illusion of authority. This is a pretty easy one. Put a brain scan into a research paper and people are more likely to believe the results. You can have the exact same text, text that says in sophisticated language that the research doesn't actually have any findings; put a brain scan in it and people will believe it. The text basically says, we didn't find anything statistically significant, we didn't find anything, but the headline plus a brain scan, boom, people believe: oh, if I eat more carrots, my brain will be healthier, whatever. Even graphs alone, bar charts alone, in research papers or news articles, will give the illusion that something sciencey is going on. Don't fall prey to this. Just because someone can quantify something doesn't mean they have anything good to say.

I actually pulled this graph, and this is funny. I was just looking for public-domain bar charts, and I happened to find this one on the website of a woman who is apparently a science fair consultant. All she does is hire herself out, probably for hundreds of dollars, to kids working on science fair projects. I thought, man, that's a racket, I need to get into this. The chart says, you can present the data one way or the other way, and I recommend you present it this way because it gives more drama between the two variables. I don't even know what the research is; it's something about the difference between coffee and water.
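That "more drama" advice is the same trick as the cholesterol chart: truncate the axis. A sketch with made-up numbers shows how identical bars can be made to look flat or dramatic:

```python
import matplotlib.pyplot as plt

# Made-up numbers for illustration: two groups that differ by about 2%.
groups, values = ["coffee", "water"], [50.0, 51.0]

fig, (honest, drama) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(groups, values)
honest.set_ylim(0, 60)      # axis starts at zero: the difference looks like what it is
honest.set_title("y-axis from zero")

drama.bar(groups, values)
drama.set_ylim(49.5, 51.5)  # axis truncated: the same 2% gap now fills the chart
drama.set_title("y-axis truncated for 'drama'")

plt.tight_layout()
plt.show()
```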
The worst kind of sciencey stuff that you see today: infographics. God, I hate these things. I mean, they're beautiful, and sometimes they're really, really creative, but I pulled this one just for James, because it happens to be about the number of UK health clubs and gyms. They've got this guy doing a bench press, and all of these weights out there, where the size of the plates or the barbells or dumbbells represents the different variables. I looked at this chart and figured out maybe five or six different conclusions I could reach based on the data presented here, and those conclusions were contradictory. One conclusion: oh, Brits are getting healthier, because obviously more gym memberships are being sold. Another: oh, private memberships are on the way down and more people are going to public gyms, because there's a change in the ethos of British working out, it's becoming a more public thing and people are becoming more concerned about it. Or: oh my God, the absolute number of gyms is dropping, therefore Brits are no longer concerned about their health, no wonder we're all fat. You can come up with whatever conclusion you want, because the data is just presented to look kind of fancy.

So don't fall prey to the illusion of authority. Forget about reading medical journals. Oh, well, maybe I'll just go to the experts; I want the experts to tell me what's best for me. Well, unless you're ready to become fluent in research design, in statistics, in epidemiology, ready to be super critical, consider what John Ioannidis says: over 50 percent of published medical research is basically bunk. It can't be replicated; the statistical methods are so bad that the papers are riddled with errors. And yet what do the people who report this stuff, the media, say about it? They tell you, oh my gosh, Dr. Oz says do this and you'll burn fat, do this and you'll do that.

This is what they call the Gell-Mann amnesia effect. This is actually something Michael Crichton came up with. Murray Gell-Mann was a famous theoretical physicist who won the Nobel Prize in the sixties, and Crichton named the effect after him because, he said, I had a conversation with him about it once, and he's much more famous than I am, so naming it after him gives it a lot more authority, kind of as a joke. The Gell-Mann amnesia effect, as Crichton describes it, is basically the following. You're reading the morning newspaper and you come across an article where the journalist is not just stupid, they've got it backwards: wet streets cause rain. They literally have cause and effect reversed, and you think to yourself, what a fool, this guy is such an idiot. And then you turn to the next page, the same paper, the same kind of journalist, and you trust what they have to say about the state of the Middle East, or about anything else. The Gell-Mann amnesia effect. Think about muscle magazines.
And this is basically what I'm talking about with the two-systems model, the two ways the brain works. He says there are really two systems of thinking, and once you start thinking, they can be incredibly, incredibly distorting. One of these is the illusion of precision. It's the same story with the guy who published the book about what turns good businesses into great businesses: he claimed that he understood everything the CEOs and leadership teams did to make those companies great, published the book, made lots of money, and guess what?