Well, in this lecture I'm going to recap a little of where we've been and where we're coming from, because there are some things I think are worth reinforcing, and I'm going to elaborate on them a bit more. Remember, the first lecture began with a key-bending demonstration — psychically, of course — and we had people write down what they saw, what they witnessed, and then their best guesses as to what happened. I've done this demonstration several other times over a period of at least thirty-some years: for lawyers, for undergraduates at the University of Oregon, and elsewhere. And the results have been quite consistent. Almost no one writes an adequate description, because it's just too much — you don't know in advance what to put in there. And that's the point: if you don't know what you're supposed to be looking for, you're not going to get good information. This is the problem of eyewitness testimony. It's a problem of observation. When we talk about how to think about dubious claims, it's important that you not waste your thinking on useless information — and much of the information supplied to you for evaluating a claim is going to be useless. That's why we focus on the need for replication, but also on the need for observation that is prospective rather than retrospective. In most cases, people see miracles and the like, but they weren't planning to look for them; they had no systematic plan for what they were supposed to be looking for. And by the way, it's only in the last 400 years or so that we've had something called the scientific method. It's a very recent thing, and it's not natural to most people's way of looking at the world.
Indeed, the state of Oregon hired me some years ago to give a workshop to many of the people who run facilities for rehabilitating people from drug addiction and other problems — there are a lot of these facilities throughout the state. The state had passed a law that the techniques used in such facilities should be evidence-based, and this had created a tremendous backlash, if you will, because none of these people had any idea what evidence-based meant. So I was asked to run a workshop to teach them this. And I was evaluated by the people who took the workshop. One person said I was great. Everyone else said I was the worst person you could have, and the reason they pointed to was that I wasn't tolerating "other ways of knowing." I was emphasizing that if you're really going to know something, you've got to get good data — garbage in, garbage out, right? And they felt they knew without having to have that kind of information; there were other ways of knowing, and I wasn't tolerant of them. So the use of good information is not something built into us. In science, you have the luxury of planning your observations ahead of time — planning the conditions of observation, calibrating your instruments, standardizing your procedure, knowing what to look for and what you can ignore, and so on. This is what science is all about, and this is how you get trustworthy data. The more you deviate from that plan, the less trustworthy the data is. And that's the problem of eyewitness testimony. So we focused on that, and we went through some examples to give you a feeling for why we need trustworthy data. And we talked about why smart people can be so stupid — and there I short-circuited some things that I think I should elaborate on a little more.
We talked about intelligence and intelligence testing, and I recommended a book by Keith Stanovich called What Intelligence Tests Miss. He and others have realized that intelligence tests are not very good predictors of a lot of real-world behavior, and there are different reasons why that's so. Intelligence testing, by the way, is a long-established tradition, and the research and science behind it is pretty strong. Most people who study intelligence eventually divide it into two kinds: what they call crystallized intelligence and fluid intelligence. Crystallized intelligence is what's tested when an intelligence test asks you about facts — knowing certain pieces of information about the world. This is something you accumulate; it's not innate, you learn it. The test is checking how much you have learned of what ordinary people ought to know — that was the idea. The other kind, fluid intelligence, is being able to deal with new information, new problems. It's more a matter of how well you can handle logical thinking and tricky kinds of reasoning, and intelligence tests measure that as well. Now, it turns out that being intelligent is a very poor predictor of whether you're going to be sensible — of whether you're going to be taken in or not. It's a bad, bad predictor of that. As we demonstrated in at least one case, and we'll talk about others like it, very intelligent, competent people have acted in very, very stupid ways. Now, I know "stupid" is a pejorative term, but let's face it, stupidity is stupidity. I don't mean to pile on, but I don't think we should encourage stupidity either, so I don't think we have to be nice about it.
The problem, the way I look at it, is this. People have different notions about why intelligence isn't a good predictor of smartness — it does predict a little, but not strongly. It's not a good predictor because it's a measure of what I would call capacity. There are two kinds of psychological tests. There are measures of capacity — what you can attain under ideal conditions — and aptitude tests and intelligence tests belong in that category. But the fact that you have high capacity doesn't mean you're going to use it intelligently, that you're going to apply it rationally. And so a distinction is made between rationality and intelligence. Most people — not all, but most — have sufficient intelligence to handle almost anything if they have the right attitude. What many lack is the disposition. Most of what we call rationality is a cognitive disposition: a desire to get to the truth, a desire to really look at the problem and figure things out and get the right answer. And it turns out that a lot of intelligent people — that's not my phone, is it? No, fine. I've lost my train of thought; this is why we ask people to turn their phones off before we begin these things. Okay, where was I? Yes — even when I first began running my Skeptic's Toolbox, which I started in 1992 in Eugene, Oregon — we run it every summer, and the idea is to teach people how to be good, civil skeptics, among other things — even then, some of the people who came to the workshop had no desire to think what we call rationally. They were more driven by what we call confirmation bias.
They wanted to look for information that would confirm their prior beliefs. When you have that kind of attitude, you're consumed by it; you're not applying your intelligence in a rational way. And lots of people are like that, for various reasons. There are people out on the street corner — we saw some yesterday — asking you to believe, to be saved. Somehow there's a value to them in not having good information, in not taking a scientific approach. Belief on faith is itself a value to them. If you hold such values, all the intelligence in the world isn't going to do you much good, right? So that's some of what we went through. Now I also want to review some things I didn't review as much as I could have. We used a few problems to get across some of the reasons why rational, intelligent people can go astray, and I showed you a taxonomy — this one is by Keith Stanovich; he's written four or five books. Can you make that clearer? I'll go through it a little, since I didn't go through it as much as I should have. Remember one of the problems I gave you: Jack, Ann, and George. Can I have it here? Yes — here it is; I'll read it for you. Jack is looking at Ann, but Ann is looking at George. Jack is married, but George is not. Is a married person looking at an unmarried person? Some of you were here and now know the answer; some of you weren't and may or may not know it. And — this is very important — you're given three alternatives: A, yes; B, no; and C, cannot be determined. Almost everyone here picked C, and I would guess most of you would pick C too.
Only because I know that you have minds that work like mine, which is not too good, okay. Now, this is a difficult problem because of the principle of the cognitive miser — one of the two branches of the taxonomy. If you look at the flow diagram, on the very left there are two categories, here and here. These are the two basic overall categories into which Stanovich puts the different ways we go astray. The cognitive miser refers to the fact that we often go astray because we default to the immediate answer that comes to us through the autonomous mind, as it's known. The autonomous mind is the automatic one — "System 1" in the two-systems account of thinking; it's what Kahneman calls fast thinking. By the way, Kahneman's book Thinking, Fast and Slow has just come out in paperback, and if you don't have it, it's a chance to get it — it's a good book to read. Many cognitive psychologists, like Kahneman, have their own version of a two-tier system. There's this automatic system, and then there's a slower, more cognitively demanding system, which came later in evolution and which we don't share with the rest of the animal world — the first system we do share. The second system is the one that enables us to do logic and to be scientific, but it consumes an awful lot of cognitive capacity, it can usually only work on one thing at a time, and it's very slow. But it can override the automatic system when necessary, and it's most valuable when you're dealing with novel situations. With repetitive situations you can sometimes trust your automatic system — that's what expertise is all about.
So anyway, being a cognitive miser means you tend not to want to spend those resources when you can get away with something simpler. Kahneman talks about this in terms of attribute substitution: instead of solving the real problem, you substitute a simpler problem that yields a plausible answer — one your automatic mind can solve without too much thinking. In this problem, Jack is looking at Ann and Ann is looking at George; Jack, you know, is married, and George, you know, is unmarried, but you don't know whether Ann is married or unmarried. So there's some uncertainty there. At this point your automatic mind tells you it's undetermined, because you know nothing about Ann, and you leave it at that. That's what a cognitive miser does. But if you stopped and really attended to this carefully, you could do what's called fully disjunctive thinking: consider each possibility in turn. What if Ann were married? Then Ann, who is married, is looking at George, who is unmarried — so a married person is looking at an unmarried person. Now go back — this is the disjunctive step, and it takes a lot of work. What if Ann were unmarried? Then Jack, who is married, is looking at Ann, who is unmarried — again, a married person is looking at an unmarried person. Either way, though it isn't always the same pair of people, a married person is looking at an unmarried person, whether Ann is married or not.
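The case analysis above can be checked by brute force: enumerate both possibilities for Ann and test whether, in each case, some married person is looking at an unmarried person. A minimal Python sketch — the function and names are just illustrative, not part of the lecture materials:

```python
# Jack (married) looks at Ann; Ann looks at George (unmarried).
# Ann's status is unknown, so check both cases in turn.
def married_looks_at_unmarried(ann_married):
    married = {"Jack": True, "Ann": ann_married, "George": False}
    looking = [("Jack", "Ann"), ("Ann", "George")]
    return any(married[a] and not married[b] for a, b in looking)

# True in both cases, so the answer is "yes", not "cannot be determined".
print(married_looks_at_unmarried(True))   # Ann married -> True
print(married_looks_at_unmarried(False))  # Ann unmarried -> True
```

Because the answer comes out "yes" under every possible assignment, option C is wrong even though Ann's status is unknown.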
So it really ultimately doesn't make a difference whether Ann is married or not: either way, a married person — not necessarily the same one each time — is looking at an unmarried person. So the answer is A, yes. But you can see how difficult this is for us to think through; we have to use our cognitive resources, and that's costly. The cognitive miser ordinarily defaults to the simplest answer. And by the way, it turns out that if I don't give you the three alternative answers, more people solve it, because the automatic answer isn't sitting there. But once you put the alternatives down, it immediately primes the automatic answer — it saves you from having to spend your precious cognitive resources, so you go for it. And once you go for it, you're hooked. So that's the idea of the cognitive miser, and it's going to explain a lot of the problems we get into. Another example to illustrate the cognitive-miser idea was the bat and the ball. Those of you who weren't here — this is a favorite of Danny Kahneman's; he likes to use it when he talks about fast and slow thinking, though it's an old problem. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? Now, your automatic system is going to feel like saying 10 cents, right? You want to say 10 cents. And that's fine — until you let your other system kick in and think about the answer closely. If the ball cost 10 cents, the bat would have to cost $1.10, and together they'd come to $1.20 — too much. The ball has to cost 5 cents, so the bat costs $1.05 and the total comes to $1.10.
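The bat-and-ball arithmetic is just one linear equation. Here is a quick sketch, assuming the standard figures of $1.10 total and a $1.00 difference:

```python
# ball + (ball + difference) = total  =>  ball = (total - difference) / 2
total, difference = 1.10, 1.00
ball = (total - difference) / 2
bat = ball + difference
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
assert abs((ball + bat) - total) < 1e-9         # sanity check: sums to $1.10
```

The intuitive answer of 10 cents fails the same sanity check: $0.10 + $1.10 = $1.20.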
Okay, thank you — I've just been corrected by my official corrector. That's why we need an audience: you people are the proxy for all the people who are going to watch this online. So one half of the picture is the cognitive miser. The other half is what Stanovich calls mindware problems. Mindware is knowledge — it parallels the crystallized side of intelligence, just as the cognitive-miser side is closer to fluid intelligence. Mindware problems come in two kinds. First there are mindware gaps: you simply lack the information you need. A lot of the problems we have with probability are like this — in dealing with scientific data and the like, we just don't have the training or the background; we don't have that knowledge, we haven't been trained to have it. But then there's another, more insidious kind of mindware problem: contaminated mindware. We do have information, but it's contaminated. We're going to come across that a bit later, when we get to what's called the Matthew effect. The point is that we always come to problems — every problem, we can't do otherwise — with background preconceptions and information, and if that information is bad or wrong, we're going to come out badly. That's why it's important to stuff your mind with good stuff rather than bad stuff: garbage in, garbage out. So I just want to make sure you've got that distinction right, because I skipped over it too quickly before — we went into the problems, but I didn't pay enough attention to the taxonomy. Now I've got a couple of other problems for you. I wrote this one down by hand, which is unusual — this is handmade; I just made it up right now. It's very rare to get anything homemade these days.
Because we have PowerPoint and all this other stuff — so this is very valuable; in fact, we're going to auction it off after the course. Okay, this is the lily pad one. I thought I had it printed out, but that's fine, I wrote it out just now. The lily pad problem: a lily pad grows so that it doubles each day — doubles in area, doubles in size. So there's a pond with a lily pad on it, and each day the pad doubles in size, and it keeps doing that until it completely covers the pond. You get the picture? One of those ponds you don't want to jump into anymore. Say that on the 20th day of its life it completely covers the pond. The question is: on what day of its life was the pond half covered? I'll give you some alternatives: the 5th day, the 10th day, the 15th day, or none of the above. Think about it — I hope you think. How many would go for none of the above? About half of you. Anyone for the 5th day? No? The 10th? Okay, we got one. The 15th? Okay, some for the 15th. And a few of you are too embarrassed to put your hands up — you're not just cognitive misers, you're arm misers, energy misers. The point of this one is that the pond will be half covered on the very next-to-last day, the 19th day. Think about it: if it's half covered on the 19th day, it doubles in size, so the next day, the 20th, it's fully covered. Do you get that? This is a case where it's often useful to work backwards — and your automatic system is not going to do that for you. All right, now let me give you another example I wrote down here.
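The work-backwards strategy can be written out directly. A small sketch under the assumptions just stated — full coverage on day 20, doubling every day:

```python
# The pad doubles daily and fully covers the pond on day 20.
# Work backwards: halve the coverage one day at a time.
full_day = 20
day, coverage = full_day, 1.0   # fraction of pond covered
while coverage > 0.5:
    coverage /= 2
    day -= 1
print(day)       # 19 -- half covered on the next-to-last day
print(coverage)  # 0.5
```

Note that the answer doesn't depend on the pond's size at all, which is part of why the 5th/10th/15th options are such effective lures.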
I'm not sure anyone will want to bid on this one, so I can fold it. Say this is a standard piece of paper — you buy these in reams of 500. I once calculated its thickness; the exact number doesn't matter much, but let's say it's about 0.004 inches. Now, how did I measure that? Anyone have any idea? Folding? Oh, I see — fold it to build up a measurable thickness. That's a good idea; I never thought of that. What I did was take the ream of 500 sheets, measure that — that's easy to measure — and divide by 500. Okay. Now I can fold the sheet, and that doubles the thickness, right? I can fold it again, and you can see how thick it's getting; I can do it again. It turns out, by the way, that you can't fold a piece of paper more than six or seven times — there's been a big fight about that; it's interesting to check out on the web. Some time ago, I remember, a girl in England took this on as a school project: she stretched a huge length of paper out and folded it, and she set the world record for how many times you can fold a piece of paper. It's still not a great number, but six or seven used to be the accepted maximum. Because the thickness doubles with every fold, you physically can't fold it very many times. But imagine — that's another thing we've got, imaginations — imagine that we could keep folding, and we continued until I had folded it 50 times. Doubled it, doubled it again, doubled it again, 50 times over. Sit back, relax.
Now imagine how tall, how thick, this final bundle would be. Is anyone willing to make a guess on the basis of intuition — not science, just intuition? This would be your System 1 talking. After I've folded it 50 times, how thick would it be? Anyone? You're too embarrassed; you don't have the confidence. As thick as the 500-sheet ream we started with? Okay, that's one guess. Ah — this is a JPL engineer; she's going to do it scientifically, with the formula. But most of you don't have any feel for it. Big numbers — and it is a pretty big number. What would your intuition say? Would it be as tall as this hotel? How tall? Don't calculate — intuition has to be fast; we don't want slow thinking here, okay? Okay, he's holding his hands out: "this big." Ready for the answer? The answer is that it would be 79-plus million miles — about three-quarters of the way from here to the sun. You see how counterintuitive that is. It's not part of our natural way of thinking to think in terms of exponential growth; we think in terms of linear growth. One aspect of what's going on here is called the anchoring illusion, the anchoring heuristic.
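The 50-fold figure can be checked with one line of arithmetic. Using the 0.004-inch sheet thickness measured from the ream, the stack comes out near 71 million miles — the same order of magnitude as the 79 million quoted in the lecture (the exact total depends on the sheet thickness you assume):

```python
SHEET_IN = 0.004          # one sheet, inches (ream thickness / 500)
INCHES_PER_MILE = 63360

folds = 50
stack_in = SHEET_IN * 2 ** folds       # thickness doubles with every fold
miles = stack_in / INCHES_PER_MILE
print(f"{miles:,.0f} miles")           # roughly 71 million miles

# The very last fold alone adds half of the total height.
print((SHEET_IN * 2 ** (folds - 1)) / stack_in)  # 0.5
```

Each fold multiplies the height by 2, so after 50 folds the multiplier is 2^50, about 1.1 quadrillion — the step the linear intuition misses entirely.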
What you're doing in your own mind, as I keep folding, is anchoring on the first several folds, which are very, very small — and you don't realize that exponential growth becomes huge. So let me show you something here. This curve is drawn smooth and continuous — really the process is discrete, but the smooth curve is easier to make; it's the equation I used. Through most of the folding, you see, we're very close to the baseline. But right at the end, the curve explodes upward — that's the way exponential growth goes, and it's just like the lily pad. And if we look at it backwards, we can see it even better. I said the total was 79 million miles, right? Over 50% of that distance is added by the very last fold. Take the total as 100%: the last fold covers half of it — a huge amount, all taken up in one fold. The fold before that adds a quarter, and very quickly you get down to very little; by the time you go back to fold 30 or so, the amounts are too small even to show on this graph. That is the signature of exponential growth. It comes up in science a lot — we keep hitting it in many ways — but people have no intuitions about it, and so it's very important. It's another striking example of the limitations of our minds, and this one, by the way, is a mindware gap: we simply weren't trained to think that way. In fact, there's a wonderful recent book — Brockman puts out these books — called This Will Make You Smarter, with contributions from some of the most important people in the world, almost.
Some of the smartest people in the world — I think over 100 of them — are each allowed to write only a page or two at most on what they think is an important concept that could make us smarter. And one of the contributors says that getting a feel for exponential growth, for powers — numbers raised to powers — is one of those concepts, so that you can understand, for example, the Richter scale. If an earthquake goes from six to seven on the Richter scale, it doesn't sound like much, but it's a huge difference, because the scale is logarithmic. Okay, given all that, I now want to get you ready for the next lecture, which will finally introduce a framework for helping you use your resources wisely. It's a framework, not necessarily the framework — almost any framework, I think, could be helpful — but I'm going to teach you one, and we'll use it for the rest of this course to evaluate dubious claims. I'm going to base it on what's called hypothetical thinking. I want some blank paper — yes, here we are. Again, I'm doing this by hand — and I even spelled it correctly, right? Thinking. There are a couple of good books by a psychologist named Evans — Jonathan St. B. T. Evans; I mentioned him in the previous lecture. These British like to string a lot of names together; I guess if I had a name like Evans, I'd want some distinguishing initials in there as well. Anyway, he's a British psychologist who has been studying thinking — rational thinking and logical thinking — for many, many years, and he's done a lot of it.
One of his books is simply called If — "if, then" is the form of hypothetical thinking — and he has another book called Hypothetical Thinking. They deal with the logic of hypothetical thinking, but mostly with the psychology of it as well. So hypothetical thinking is what I'm going to introduce you to, because I'm going to use it as the basis of the framework we'll introduce in the next lecture. It's "if, then" thinking, and almost everything you do in science can be fit into that kind of framework. The part that comes after the "if" is called the antecedent. If the antecedent is true, then something should follow, and that's called the consequent. That's the basic, simple framework; we're going to complicate it, obviously, as you can imagine. But let's talk for a while about just that framework. How many have heard of Venn diagrams? They're attempts to visualize logical statements. In this case, take "if A, then C." How would we diagram that? Everything that is A is in this small circle, and everything that is C is in the big circle — so the diagram says that everything that is A is contained within the set of things that are C. Now let's look at a couple of possibilities. Remember, the statement is hypothetical: "if A, then C" doesn't by itself say that A is so. We can affirm A — say A is true — and ask what that tells us about C; we'll come to that in a moment. But suppose instead we affirm C — we observe that C is true. Does that tell us that A is true? That move is called affirming the consequent, and it's known as an invalid piece of logic. Why? Because something can be a C without being an A.
Can you see that? It could be that the A's make up nearly all of the C's, with only a few C's left over — but there can still be lots of reasons why something could be the consequent without A being its source. So affirming the consequent is invalid. What's that? Affirming the antecedent, you ask? That's this one here: if something is an A, then, since all the A's are inside C, there's got to be a C. That's valid, and logicians have given it a Latin name — modus ponens, in case you're interested in Latin, and I'm sure you all are. There's no nice Latin name for the invalid move, affirming the consequent. Okay, now we can also deny. We start with the general statement, but then we deny the antecedent — we say A is not the case. Does that require us to also deny C? No: you can see that we could get rid of all the A's and there would still be C's. So denying the antecedent is invalid. These aren't syllogisms, by the way; they're called conditional arguments. Finally, we can deny the consequent. What about that — would it be valid? If we deny C, does that let us say A must also be false? Yes: since C contains all the A's, if C isn't there, then A can't possibly be there. That's a valid conditional argument, and it has a name too: modus tollens.
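The four moves just described can be verified mechanically by checking every combination of truth values. A short sketch, treating "if A then C" as material implication:

```python
from itertools import product

def implies(a, c):
    return (not a) or c          # "if A then C" as material implication

def valid(second_premise, conclusion):
    # An argument form is valid iff the conclusion holds in every
    # world where "if A then C" and the second premise both hold.
    return all(conclusion(a, c)
               for a, c in product([True, False], repeat=2)
               if implies(a, c) and second_premise(a, c))

print(valid(lambda a, c: a,     lambda a, c: c))      # modus ponens: True
print(valid(lambda a, c: c,     lambda a, c: a))      # affirming the consequent: False
print(valid(lambda a, c: not a, lambda a, c: not c))  # denying the antecedent: False
print(valid(lambda a, c: not c, lambda a, c: not a))  # modus tollens: True
```

The two invalid forms each fail at the same counterexample world: A false, C true — exactly the "C without A" region of the Venn diagram.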
Again, you learned it here if you didn't know it before, and you've got two Latin phrases you can drop and enjoy. All right. Now, this is going to be the basis of our framework, and in some ways it's the basis of all scientific hypothesis testing. And now I'm going to introduce you to someone you may have heard of, a philosopher of science called Karl Popper. How many have heard of Karl Popper? Okay, a few; others haven't, for some reason. A few years back, Popper was the darling of many scientists, but he was also worshipped by skeptics — if skeptics had idols or gods, Karl Popper was the god. He came up with this notion of falsifiability, and basically he was saying that anything that pretends to be a scientific theory has to be falsifiable. And he gave examples of what he considered pseudoscience. His two big ones were Freudian psychology, because there's no way of falsifying it, and Marxism, no way of falsifying that either — those were his two favorites. I knew Karl Popper, by the way — only a little bit. I was writing my book on water witching with my colleague Vogt, an anthropologist; it was published in 1959, so this was back in the 1950s. We wrote it at an institute for advanced study, an independent think tank behind Stanford University, which every year brings in independent scholars to mix. Popper was there that year, while my colleague Vogt and I were writing our book on water witching. Anyway, Popper was an Aristotelian, to my mind. Popper, along with many other philosophers, held that the only real knowledge has to be deductive — deductive logic is the only kind of logic — whereas with what's called inductive logic, you can never show anything to be true.
Most philosophers and scientists would say the basic approach in science is this: every time we test a hypothesis, we use a conditional statement, if H then P, where H is some hypothesis and P is a predicted outcome. This is the basis of all science, this approach. We do an experiment, and if P comes out right, we feel that confirms our hypothesis, or at least supports it. Popper pointed out that this is the invalid conditional form we just showed you: affirming the consequent. It's invalid because there could be a lot of other reasons why P came out right. So Popper was very fearful that this couldn't be how science works. He was a philosopher of science, and he was also attacking the prevailing philosophy of science of the time, the positivists; that's how he made his big reputation. He said, this can't be the way science works. But then he said, let's use the conditional form that is valid. Keep if H then P, but instead of affirming P, show that P is false; that would show the hypothesis is false. So we can legitimately use the conditional to falsify, because falsification uses deductive logic, which in his mind could yield true knowledge. Confirmation cannot yield true knowledge, because it's inductive, and he said there's no way induction can lead you to knowledge. But all science depends on induction. The whole world is inductive; it's based on probability and chance. Even a god like Popper couldn't change that. Popper convinced some scientists, but most scientists realized we don't have to behave this way. We don't go out and make hypotheses just to show that they're wrong. We dance in the streets when we can verify our hypotheses, even though it's never certain.
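Popper's asymmetry can be shown with a toy example (the swan data here are my own illustration, not from the lecture): no number of confirming observations proves a universal hypothesis, but a single failed prediction refutes it by modus tollens.

```python
# H: "all swans are white"  ->  P: every observed swan is white.
def hypothesis_survives(predict, observations):
    """Return False as soon as any observation contradicts the prediction."""
    return all(predict(item) == outcome for item, outcome in observations)

predict_white = lambda swan: "white"  # the prediction H commits us to

# A thousand white swans corroborate H but do not prove it...
confirmations = [("swan-%d" % i, "white") for i in range(1000)]
still_alive = hypothesis_survives(predict_white, confirmations)

# ...while one black swan deductively refutes it.
survives_black_swan = hypothesis_survives(
    predict_white, confirmations + [("swan-x", "black")]
)
```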
And we're going to get into that next time; I'm going to show you how scientists handle this situation. But still, scientists recognize that everything is probabilistic. We never have certainty; everything is revisable. We hope we get closer and closer, but it's always going to be probabilistic, and if you don't like a probabilistic world, you're going to have to find yourself some other world. Everything is inductive. Unfortunately Popper's no longer with us, but he was a good philosopher; he got knighted for this idea. Simple ideas sometimes become very powerful. But even the falsifiability notion is wrong. Even in the real world of science, it turns out, philosophers have shown that falsifications themselves often turn out to be wrong, so even falsification is not absolutely certain. But in terms of the logical conditionals, it made him feel good. He felt this was a big revolution: he had saved science, and he could tell scientists, this is what you do. Some scientists scratched their heads and said, hey, that's neat; that sounds great. We use valid deductive rules and we get valid deductive knowledge. But Popper was aware that most of science, 99 percent of the time, is getting successes with its predictions, and each success increases scientists' belief in the truth of the hypothesis. In Popper's theory, though, in his book, his whole approach, there cannot be any inductive knowledge. He finally had to give in a little bit and introduced the notion of corroboration: a successful outcome corroborates the hypothesis, though in his scheme corroboration doesn't really add anything. But corroboration was really secretly sliding in the other side, the fact that scientists do feel more confident about their theories when they predict correctly. So even a man like Popper can be wrong, okay? And I'm sure Popper used System Two thinking a lot.
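One way to make that "each success increases belief" intuition concrete is a Bayesian update; this is a hedged sketch with made-up numbers (the probabilities below are illustrative assumptions, not anything from the lecture).

```python
# Bayes' rule: posterior odds grow with each successful prediction,
# even though the hypothesis is never proven with certainty.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of H after observing evidence E."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

belief = 0.5  # start agnostic about H
for _ in range(5):  # five successful predictions in a row
    # Assume a success is 3x likelier if H is true (0.9) than if it isn't (0.3).
    belief = bayes_update(belief, p_e_given_h=0.9, p_e_given_not_h=0.3)
```

After five successes the posterior is high but still strictly below 1, which is exactly the probabilistic, never-certain picture the lecture describes.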
So now you know all about conditional statements. Yes, I am aware that I'm about to finish, and I'm going to wind up now a little bit early. Next time we're going to use this conditional thinking for our framework, because it is the framework that science uses as well. We're going to have to complicate it a little, because a man named Duhem, and then others after him, found that this picture is a little too simplified. Even if you falsify the prediction, and this is one of the reasons why falsification alone is not good enough, you may not be falsifying the hypothesis. The prediction may fail because other conditions weren't fulfilled, what we call initial conditions and auxiliary conditions, and we're going to have to put those into the equation. For example, Newton's laws say that a feather and a heavy cannonball dropped together land at about the same time, but only if you assume they're falling in a vacuum. If it's not a vacuum, the atmosphere retards the feather more than the ball, so in the real world the ball hits the ground first. In the Newtonian system they land together, but only under the assumption of a vacuum. That's just one example, and we'll come to it next time.
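The feather-and-cannonball case can be sketched numerically; this is a toy simulation of the Duhem point (the terminal velocities below are illustrative numbers I chose, not measured data): the prediction "both land together" follows from the law of gravity plus the auxiliary vacuum assumption, so a lagging feather refutes the conjunction, not gravity itself.

```python
G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m, terminal_velocity=None, dt=1e-4):
    """Time to fall height_m, integrated step by step.

    terminal_velocity=None means a vacuum (no drag); otherwise quadratic
    air drag caps the object's speed near that terminal velocity.
    """
    y, v, t = 0.0, 0.0, 0.0
    while y < height_m:
        drag = 0.0 if terminal_velocity is None else G * (v / terminal_velocity) ** 2
        v += (G - drag) * dt
        y += v * dt
        t += dt
    return t

h = 10.0
t_feather_vac = fall_time(h)                           # vacuum: auxiliary assumption holds
t_ball_vac    = fall_time(h)
t_feather_air = fall_time(h, terminal_velocity=1.5)    # feather: low terminal speed (assumed)
t_ball_air    = fall_time(h, terminal_velocity=200.0)  # cannonball: high terminal speed (assumed)
```

In the vacuum runs the two times are identical, matching the Newtonian prediction; with the atmosphere restored, the feather takes several times longer while the ball is barely affected.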