I think we'll start this out with a demo. We might be wrong, and we're funny, but not always a joke. Woo! I got a little tight in that last video, so I decided to ease the pressure a little bit. Doing my own experiments sometimes has problems. Hey, at least I don't have any ethical things to worry about as long as I'm studying myself. Anyway, all right. If you haven't noticed, we haven't touched ethics at all in our videos. We'll let you figure out why. All right. Now I've distracted myself with how unethical I might be. All right. Baseline logic. Here we go. The logic of single-subject research revolves around the concept of baselines: baseline logic, baseline analysis, baseline comparisons, and all the different types of research designs that involve baselines. Why all of them? Because all of them do. Basically, all it means is that we have different conditions, we're going to compare those conditions, and we're going to get a bunch of measurements inside of each condition first. Then we're going to make comparisons. Why? Because if we do it right, three things come about as the result of using baseline logic as our core piece for making experimental comparisons and drawing experimental conclusions in our field. Then we're going to put this all together in a bunch of fancy ways, and those become your experimental designs. But back to those three things: prediction, verification, and replication. PVR. Not PBR. Sorry, flashbacks to my old days in high school drinking beers, and it was PBR. We thought it was great at the time. I'm so old I drank beer in high school. All right.
I'm certainly sorry for all you people that I went to high school with that I'm airing this all out right now, but it's okay. All right. So prediction, verification, and replication, and that's the whole point here, right? So we've got a couple of things we need to worry about when we start figuring out the prediction piece. All right. So we're going to capture a baseline. Actually, let's try it over here. There's our baseline. So where are you at right now? That's baseline. How many cigarettes did you smoke today? 43. How many did you smoke yesterday? 22. The day before that? 37. All right. So up and down and up and down. There's a little bit of variability in that, right? A little bit of variability, not too much. So if we think about baselines and prediction, we need to figure out something called a steady state. The entire logic of our field is based on the methodology and procedures we've been talking about in this series of videos, so it's important to understand all the pieces. One of those pieces is steady state logic. If we establish a steady state of responding, and there's some interpretation as to what "steady" means, and we'll get there, then we can make a prediction about what would happen next. Whoa! There's that prediction piece, right? So steady state logic says: if the behavior is relatively stable, relatively being important, then we can make a guess about what will happen in the future. That's the prediction piece. And then of course, with the experimentation, we're going to verify whether that actually is what happened or not. And how do we do that? That's where condition switching and reversals and things like that come in. And then we're going to replicate the whole darn thing with more condition switching and reversals, and so on and so forth. All right, so let's get back to steady state logic.
Steady state does not mean behavior has to be exactly the same every single day. How many times did you swear today, Ryan? 14. How many times did you swear yesterday? 12. The day before that? 13. The day before that? 15. The day before that? Yeah, I had a bad day. That was 22. Then back to 8, and so on and so forth. And you get this pattern, right? So baselines are not perfectly stable; they bounce. If the points are relatively tight with each other, and there are some numbers you can use to figure that out, but don't worry about it, just do an ocular analysis: does it look like you can predict where the next data point is going to be? If so, then you're probably stable. So we have the steady state baseline, nice and stable. Or we can have an ascending baseline: it's got an upward trend, still bouncing a little bit, but it's going up. Or you can have a descending baseline: it comes down with a little bit of variability. Or you can have an extremely variable baseline. I didn't smoke any cigarettes today. I smoked three packs. I didn't smoke any. I smoked two packs. I smoked three cigarettes. I smoked 44. That's some variability, right? So those are the different types of baselines, basically, that you can have. It all leads back to the concept of predictability. Can you predict where things are going to go next? Because if you can predict what the next data point is going to be, then you can test to see whether that's what you actually get under a new condition. So we get a baseline, we establish the level at which somebody is responding, and then we implement an independent variable to try to change it. If the behavior stays at the baseline level, where we would have predicted it to be had no independent variable been present, then the independent variable didn't have an effect. Pretty freaking obvious, right? There's a little more nuance to that, but you get the idea.
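A rough numeric stand-in for that ocular analysis might look like the sketch below. Everything here is illustrative: the plus-or-minus 20% band and the half-split drift check are arbitrary choices made up for this example, not published stability criteria from the field.

```python
def describe_baseline(points, band=0.20):
    """Crudely label a series of baseline measurements as stable,
    ascending, descending, or variable. The band is arbitrary."""
    mean = sum(points) / len(points)
    # Stable: every point sits within +/- band of the mean.
    if all(abs(p - mean) <= band * mean for p in points):
        return "stable"
    # Trend: compare the average of the first and last halves.
    half = len(points) // 2
    drift = sum(points[-half:]) / half - sum(points[:half]) / half
    if abs(drift) <= band * mean:
        return "variable"  # bouncy, but no clear direction
    return "ascending" if drift > 0 else "descending"

print(describe_baseline([12, 13, 14, 13, 12]))  # stable
print(describe_baseline([5, 8, 11, 14, 17]))    # ascending
print(describe_baseline([0, 44, 3, 0, 40, 3]))  # variable
```

Real visual analysis weighs level, trend, and variability together; the point of the sketch is just that all three baseline patterns boil down to predictability of the next data point.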
How many data points do you need for a baseline to predict something? I don't know. Everybody's going to tell you at least three, and I'll tell you honestly: is three enough to predict a stable baseline? I don't know. Maybe. Maybe not. Probably not. The cameraman's back there going, no way. Brad's like, oh no, I'm a BCBA, I know better. Right? Hear it from a BCBA. Is three enough, Mr. BCBA? See? There you go. So it's a minimum, right, folks? The more you get, the better you'll be able to predict. Then we go on and we do our verification procedures. In other words, we switch a condition. We change something up. We implement our independent variable someplace. So now we have kind of a design, right? We've got our baseline here. Now we're going to phase change, because phase changes have that sound when they happen. Right? So there's your phase change sound. We will record that and play it every time we have a phase change. You probably think it's recorded, but I've done it so many times, it's just the same damn sound. All right. So there we go. Phase change. And now we test what our prediction was. Did the behavior change? So from an ascending baseline, we start an intervention, and now we get a descending line. Well, we predicted that our intervention would reduce the behavior, or reduce the acquisition of that response, whatever it was. And lo and behold, it did. How beautiful is that? But we're not done, because confounding variables could have come in and also caused that change. It might not be our experimental variable. So what do we do? We have to come back and finish it off: we go back to baseline for the replication piece. So we replicate our baseline condition. The behavior goes up. Yep. Look at that. It comes back to baseline levels. So we have the baseline going up, and then we've got it going down during the intervention.
The behavior went down during the intervention. Now, when we remove that intervention, it starts picking right back up again. ABA design. So your first design is an ABA design. Don't forget, though, we're not going to be focusing on ABA designs. The replication piece is important for a bunch of reasons, and I'll come back to it in other videos. But I want you to remember one thing: replication is about believability. You replicate your findings to make them believable, because we have some problems in our logic. One of those problems is that we use the logic of affirming the consequent to draw these conclusions, and that logic, at its core, by itself, is faulty. It's called the fallacy of affirming the consequent. If A, then B; we observe B, so A must have happened, right? That's the problem. If we observe B, do we know that A caused it? No, lots of things could cause B instead of A. So our job as experimentalists, as scientists using this logic, is to replicate things and make that logical error okay: by doing more replications, and by designing your experiments in ways that allow you to rule out those extraneous variables, to rule out the confounds. One of the ways we do that is by noticing that when your phase change happens, whoosh! When that phase change happens, behavior changes right along with it. It's not perfect, but it's the tool we use. Then we go back to our replications. That speaks to believability. We all know that it's a fallacy to affirm the consequent. Everybody knows that in our field. But you have to work around it, because that is the core logic that we use. And again, we'll come back to replications in another video. There you go.
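The A-B-A comparison walked through above can be sketched the same way. This is a hypothetical illustration with an arbitrary 20% change threshold; it is not how published single-subject analyses are scored, just the shape of the prediction-verification-replication logic.

```python
def mean(xs):
    return sum(xs) / len(xs)

def aba_effect(baseline, intervention, reversal, threshold=0.20):
    """True if behavior dropped when the intervention started
    (verification of the prediction) AND climbed back toward baseline
    when the intervention was removed (replication)."""
    a1, b, a2 = mean(baseline), mean(intervention), mean(reversal)
    dropped = b < a1 * (1 - threshold)    # B phase: prediction held
    recovered = a2 > b * (1 + threshold)  # second A phase: behavior returned
    return dropped and recovered

# Ascending baseline, descending intervention, recovery on reversal.
print(aba_effect([40, 43, 45], [30, 22, 15], [35, 41, 44]))  # True
```

Note what the second A phase buys you: if the behavior had stayed down after the reversal, a confound would be just as plausible an explanation as the intervention.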
Believability! That's what it's all about! I know you like these videos, and you should be sharing them and subscribing to our channel. We need more people. You know what you really should do? You should grab a fistful of dollars and throw them at our channel once in a while, because this open source stuff is for the birds!