I think we'll start this out with the demo. It's a bit of a blustery, chilly day today, but what better day to introduce you to the concept of schedules of reinforcement. So the first thing we think about when we think about schedules of reinforcement is that we need to know what they are. The basic idea is that we can deliver reinforcers for all sorts of behaviors, and we can deliver them at all sorts of different times. But the best idea is to actually put them on a schedule. In fact, it's really not my idea to put them on a schedule. Skinner was playing around with this a long, long time ago, and he wrote a great book called Schedules of Reinforcement. I suggest you read it, because we're going to boil it down into just a few minutes here. Anyway, as we get into schedules of reinforcement, I just want you to think about the different types of schedules that we have. The most basic schedule of reinforcement is one that pairs naturally with early training, which is a continuous reinforcement schedule. By early training, I mean when you're teaching a response, the first thing you want to do is reinforce heavily early on. So we put that response on a continuous reinforcement schedule, meaning we're going to reinforce every single time the response happens. It's kind of like taking these steps: each time I put my foot down, it's getting reinforced. That's a continuous reinforcement schedule. If some of these steps were false steps and they fell out from underneath me, that wouldn't be continuous. It would be horrifically scary, and I'm already scared enough as it is. So continuous reinforcement is great for early training when you're teaching a new response, but it doesn't do so well for response maintenance. So when we think about...
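The rule described above, reinforce every single occurrence of the response, can be sketched in a few lines. This is my own toy illustration, not anything from the lecture; the function name and setup are hypothetical:

```python
# Toy sketch of a continuous reinforcement (CRF) schedule:
# every response produces a reinforcer, no exceptions.

def crf_schedule(responses):
    """Return a reinforcement decision (True = deliver) for each response."""
    return [True for _ in responses]

# Like the steps in the demo: every footfall is reinforced.
steps = ["step 1", "step 2", "step 3"]
print(crf_schedule(steps))  # [True, True, True]
```

The point of the sketch is just that under CRF the decision rule ignores everything about the response history: the answer is always "reinforce."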
It's not very large up here, and I'm not small. So when we think about continuous reinforcement, we have an issue with the effect that it has. Sure, it's going to hook a behavior. But what it's not going to do, folks, is maintain one very well. If I want to maintain responses, I probably don't want to put them on a continuous reinforcement schedule. I would probably want to put them on an intermittent schedule. Ooh, chilly willies. So intermittent schedules are pretty cool. They're when you don't reinforce every single response that happens. This started out in the lab as an accident: Skinner was running out of pellets and needed to make his supply last over the weekend in order to keep reinforcing his rats, so he just decided to feed them every other time they engaged in the behavior. And what we saw, using a cumulative recorder, was that the rate of responding changed. That change in the rate of responding was the serendipitous finding, right? He didn't anticipate that. So the rate of responding changed and, boom, all of a sudden we have our new subfield with regard to reinforcement, which is schedules of reinforcement. Whoa! Ooh, this is wobbly. That's kind of fun. I don't know why I'm on all this playground equipment; it's just kind of fun. So with these intermittent schedules of reinforcement, what we found is that there are four basic types. We're going to go into those, and the effects they have, in another video. But those four basic patterns of reinforcer delivery had completely different effects on behavior. And those different effects have all sorts of implications for how well the behavior maintains, how well the behavior generalizes, and how resistant to extinction the behavior is.
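Skinner's "feed them every other time" accident described above is what the field now calls a fixed-ratio 2 schedule. Here's a minimal sketch of that idea, again my own hypothetical illustration rather than anything from the lecture:

```python
# Toy sketch of an intermittent, fixed-ratio schedule:
# reinforce only every `ratio`-th response (FR-2 = every other response,
# which is the "running out of pellets" accident from the lecture).

def fr_schedule(n_responses, ratio=2):
    """Reinforce when the running response count is a multiple of `ratio`."""
    return [(count % ratio == 0) for count in range(1, n_responses + 1)]

print(fr_schedule(6))  # [False, True, False, True, False, True]
```

Comparing this with the CRF rule makes the difference concrete: under CRF the decision is always "reinforce," while here some responses go unreinforced by design, which is exactly what makes it harder for the learner to discriminate when reinforcement has stopped.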
So, one of the things we think about with continuous reinforcement is that the moment you stop reinforcing somebody who is on a continuous reinforcement schedule, that behavior is going to go away rather quickly. They can tell right away: "Nope, no more reinforcement." The behavior goes away because it's no longer being reinforced, basically. Under intermittent schedules, that discrimination, the ability to tell when the behavior is no longer being reinforced, decreases. You can't discriminate very well when you're on an intermittent schedule. There's more to it, and we'll get into that. I'm just kind of fumbling through it a little bit here because I'm on playground equipment and whatever. But anyway, when we think about real-world applications, there's not too much we really do with continuous schedules other than early training scenarios, where you're first teaching somebody something: maybe a new sport, maybe a new job, maybe how to deliver lectures in the park. Hopefully there are lots of immediate continuous reinforcers for that, but on the other hand, who knows; it doesn't really matter, because the behavior of delivering lectures about self-management and about behavior analysis in general has been intermittently reinforced heavily throughout the years. For example, I never know when a good lecture is going to happen. I don't know if the one I'm delivering right now is going to come out great or come out as a big old steaming pile of dog poo. I have no idea. So it's basically intermittently reinforced, and you all tend to do that down there with one of those buttons: sometimes people press the like button, which maybe is reinforcing, maybe it's not. I don't know; not everybody presses it. So I'm going to deliver these things anyway, whether or not you like them, which basically means the behavior is on a really heavily intermittent schedule of reinforcement, and it's just going to continue.
In other words, the behavior is going to maintain. So maintenance of behavior is what you're after when you're thinking about intermittent schedules of reinforcement. We'll come back and talk about the four basics, you know: the fixed ratios, the variable intervals, the variable ratios, and the fixed intervals. We'll talk about those in another video. See you. Take care.