So, what we're looking at here is something we call schedules of reinforcement. The idea is that reinforcement can be scheduled at a particular rate; in other words, there's a rule for how frequently a behavior will be reinforced, if it's reinforced at all. That includes things like how many times you have to perform the behavior, or how long you have to wait before the behavior will earn a reinforcer again. Now, we can think about a continuous schedule of reinforcement, right? On a continuous schedule, if you do something, you get reinforced for it immediately. Think about a light switch: flip the switch, the lights come on. That's a continuous schedule of reinforcement. Every time you flip the switch, the lights come on, or go off, depending on which direction you're flipping it. A soda machine is the same sort of thing: you put your money in, you get your soda out. Now, those schedules don't actually maintain extremely high levels of responding. It seems like they would on the surface, but they're actually very weak schedules, and we'll talk about why they're weak in a little bit. The schedules that maintain the highest rates of responding are intermittent. In other words, if you had to flip that light switch several times before the light came on, that would actually maintain more behavior. Think about how many times you'd actually flip the switch, and that's the kind of behavior these schedules maintain, right? Casinos and places like that are all about these high rates of responding. How many times will you press that button on the slot machine? If every press reinforced you a little bit, then the moment it stopped reinforcing, you'd walk away, right? But on an intermittent schedule of reinforcement, you never know when your payout is coming.
So there are a lot of different types of these intermittent schedules, so let's go ahead and look at them. Ratio schedules are really straightforward, and we've got two different types. Number one, you have a fixed ratio schedule. A fixed ratio depends on the number of responses performed, right? On a fixed ratio 4 (FR 4), for example, you would have to make four responses before you received a reinforcer for those responses. Then you'd make four more and get a reinforcer, four more and get a reinforcer, and so on. Then you have a variable ratio schedule: on a variable ratio 4 (VR 4), you'd get reinforced, on average, every four responses. That VR schedule is going to maintain more behavior because you never know when the reinforcers are coming. Now, an FR schedule produces a very interesting stepped pattern. The moment you start responding, you begin what's called a ratio run, and you work through until you finish the activity. So on an FR 4, you're basically going to take a break, do nothing for a while, and then start responding: make your response four times, whatever that response is, get reinforced for it, and then take a break again. That break after reinforcement is called the post-reinforcement pause. Sometimes we call it a procrastination pause, because whenever you put somebody on a fixed ratio schedule of reinforcement, they actually wait before they start responding. In other words, there's a bit of procrastination. The take-home point is this: if you're working on a paper or something like that, papers are a type of fixed ratio. Think about it in these terms: you have to write a five-page paper.
That's basically five pages of responding you need to do in order to receive your reinforcer, which is the grade. But what you see with fixed ratio schedules, like writing a paper, is that people put it off and put it off and put it off. Then the moment you start it, guess what? You tend to finish it, or at least the major section you're working on. That's one of the applications we know about with fixed ratios: once the organism, or the person, starts that ratio run, the moment they start responding, they tend to finish. So if you're finding that you're procrastinating a lot, one of the keys is just to give yourself any opportunity to start, because you'll likely finish. That's the fixed ratio schedule. Variable ratio schedules, because you never know when the reinforcer is coming or how many behaviors you have to do, produce a strong, steady rate of responding, as we say. All right? The organism, the person, will be responding quickly and continuously, taking very few breaks, if any. A great example is the casino. Just go watch people playing a slot machine: they sit there and press the button, press the button, press the button. They don't take many breaks; they just keep pressing the button. That's the idea: they're on some type of variable ratio schedule. We also have interval schedules. An interval schedule is a rule that depends on one behavior, all right? But it depends on one behavior after a given amount of time. You get reinforced for the first response after a given amount of time has passed.
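This isn't from the lecture, but the two ratio schedules above can be sketched as simple counters. This is a minimal Python sketch under one assumption I'm adding: for the variable ratio, the required count is drawn uniformly from 1 to 2n-1 so that it averages n (real VR schedules just need the average to be n; the exact distribution varies).

```python
import random

def fixed_ratio(n):
    """FR n: reinforce every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # reinforcer delivered
        return False
    return respond

def variable_ratio(n):
    """VR n: reinforce after a random number of responses averaging n.
    Assumption: requirement drawn uniformly from 1..(2n - 1), mean n."""
    def new_target():
        return random.randint(1, 2 * n - 1)
    count, target = 0, new_target()
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, new_target()
            return True   # payout: you never knew it was coming
        return False
    return respond

fr4 = fixed_ratio(4)
print([fr4() for _ in range(8)])   # reinforced on responses 4 and 8
```

On the FR 4, reinforcement lands exactly on every fourth response; on the VR 4, only the long-run average is every fourth response, which is why there's no "safe" moment to take a break.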
So a fixed interval schedule, say a fixed interval 10 (FI 10), means the first response after 10 minutes receives reinforcement. Again, it only takes one response, but you've got to wait that 10 minutes. What you start to see with a fixed interval schedule is what's called a scallop: behavior tends to speed up the closer you get to that reinforcement interval. So at about nine minutes, the organism starts responding really fast, whatever that response is. The kiddo will start looking up at the clock. In fact, you all do this in class: you start looking up at the clock right before class gets out, people start packing up their stuff, those types of things. They do all of that right before class gets out, not after, like they should. All right. Anyway, the point is that your behavior in the classroom is on a fixed interval schedule. One response at the end of that interval gets you reinforced: getting up and packing up at the end of class is what's reinforced, and it isn't reinforced until the end of the day's lecture. Next is the limited hold, which is pretty straightforward and ties in with interval schedules: after a given interval, a response will produce a reinforcer, but the reinforcer is only available for a certain window. Right? This is like the breakfast buffet. The McDonald's breakfast is a classic example of a limited hold. You can make the response, but you've only got a certain amount of time to make it, say 6 a.m. to 10:30 a.m., or whatever it is. So the idea is that there's a hold on that availability. Finally, duration schedules: performing a given behavior for a duration of time, for a period of time. Right.
So I think I gave you a bit of an example earlier about my studying behavior being reinforced by access to working on the truck. I was using the Premack principle, but I was really running a duration schedule: I would have to study for one hour, or two hours, or three hours in order to earn my reinforcer. That duration is the requirement. It works really well, but you have to have a continuous behavior that you're working with. It can't be a discrete sort of behavior.
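Again, this isn't from the lecture, but the fixed interval and limited hold rules above can be sketched the same way. This is a minimal Python sketch; the `breakfast` example and its 6:00-to-10:30 numbers are just an illustration of the McDonald's case, with time measured in hours.

```python
def fixed_interval(interval, limited_hold=None):
    """FI schedule: the first response after `interval` time units
    is reinforced. With a limited hold, the reinforcer is only
    available for `limited_hold` time units after the interval elapses."""
    def respond(elapsed):
        if elapsed < interval:
            return False   # too early: the interval hasn't passed yet
        if limited_hold is not None and elapsed > interval + limited_hold:
            return False   # too late: the hold window has closed
        return True        # one response here is all it takes
    return respond

fi10 = fixed_interval(10)                          # FI 10: class example
breakfast = fixed_interval(6.0, limited_hold=4.5)  # available 6:00-10:30

print(fi10(9))         # responding at minute 9: not reinforced yet
print(fi10(10))        # first response after the interval: reinforced
print(breakfast(7.0))  # 7:00 a.m.: inside the hold window
print(breakfast(11.0)) # 11:00 a.m.: the hold has expired
```

Note the asymmetry this captures: on a plain FI, early responses are wasted but late ones still pay off, which is why behavior scallops up toward the deadline; the limited hold adds a cutoff on the late side too.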