All right everybody, welcome back. It's Veronica Howard. Let's talk about time-based schedules. Now, time-based schedules are pretty confusing for folks, and they're often misrepresented in introductory textbooks, so take a little extra time with this one. I promise this is a place where a lot of people struggle, and I don't want that to be you. When we're talking about time-based schedules, let's begin with our interval schedules. In a fixed interval schedule, the reinforcer is delivered for the first response that occurs after a period of time has elapsed, and that period of time is the same every time. So it's a two-parter: a period of time has to elapse, or pass, and then you have to emit a response, and that first response is going to contact the reinforcer. A variable interval schedule is similar. You've still got a period of time that has to elapse, but what makes a variable interval schedule different from a fixed interval schedule is that the amount of time can be longer sometimes and shorter sometimes, but on average it's around whatever your VI number is. So forgive me if I unintentionally say variable ratio; in all of these scenarios, I'm talking about interval-based schedules. A fixed interval three seconds, for instance, means that three seconds have to pass, and then if a response occurs, it contacts reinforcement. A variable interval three seconds means that sometimes it's going to be five seconds and sometimes only one second, but a period of time has to pass. During that period of time you cannot earn the reinforcer, but once that period of time passes, if you emit the response, you will contact the reinforcer. What's special about interval schedules is that there's always the possibility that some responses will not contact reinforcement. Remember: if you respond within the interval, no reinforcement.
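[Editor's note: the two-part rule above, time must elapse and then the first response pays off, can be sketched in a few lines of code. This is an illustration added for clarity, not part of the lecture; the function name and the timestamps are made up.]

```python
def fixed_interval(response_times, interval):
    """Fixed-interval (FI) schedule: the first response emitted after the
    interval has elapsed earns the reinforcer; responses emitted during
    the interval earn nothing. Times are in seconds."""
    reinforced = []
    interval_start = 0.0
    for t in sorted(response_times):
        if t - interval_start >= interval:
            reinforced.append(t)   # first response after the interval elapses
            interval_start = t     # the next interval starts now
    return reinforced

# FI 3 s: the responses at 1 s and 2 s fall inside the interval, so no
# reinforcement; the response at 4 s is the first one after 3 s have elapsed,
# so it is reinforced, and the clock restarts from there.
print(fixed_interval([1, 2, 4, 5, 8], interval=3))  # [4, 8]
```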
Because there are some responses that will not contact reinforcement, all interval schedules are intermittent schedules. Some responses will not contact reinforcement, so bear that in mind. Let's talk about what these look like. Remember that when we're looking at patterns of behavior, we're typically looking at something called a cumulative graph. What I'm showing you on the screen here is what the two different types of interval-based schedules look like. A fixed interval schedule produces a very consistent scallop pattern of responding. As the interval begins, right after the reinforcer was delivered, we see very little responding, almost like a post-reinforcement pause. But as the opportunity for reinforcement gets closer, the learner responds more and more quickly until the reinforcer is delivered, and then we have a break. So we have this scalloping pattern until the reinforcer is delivered. That's what a fixed interval schedule looks like. A variable interval schedule, just like the variable ratio schedule that was very consistent and smooth, will again produce a super smooth pattern of responding, very consistent, very robust. And because a variable interval schedule is so unpredictable, so removed from each specific response that the learner emits, it is the most resistant schedule of reinforcement to extinction. This schedule is going to keep the learner emitting the response even when reinforcement is absent for a very, very long time. So if you want a behavior to maintain for a long time, use that variable interval schedule. There are some general benefits overall to using interval-based schedules. Remember, they're intermittent, and using an intermittent schedule of reinforcement generally makes a schedule more resistant to extinction.
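[Editor's note: the "sometimes longer, sometimes shorter, but the VI value on average" idea can be shown with a quick simulation. This sketch assumes intervals drawn uniformly around the mean, which is just one convenient choice; real VI procedures often use other distributions, and the names here are made up.]

```python
import random

def variable_interval_timers(mean_interval, n, rng=random.Random(0)):
    """Draw n intervals for a VI schedule: each individual interval is
    unpredictable, but together they average out to the VI value."""
    return [rng.uniform(0, 2 * mean_interval) for _ in range(n)]

intervals = variable_interval_timers(3.0, 10_000)    # VI 3 s
print(min(intervals) < 1, max(intervals) > 5)        # single intervals vary widely
print(round(sum(intervals) / len(intervals), 1))     # but the mean sits near 3.0
```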
We also see that the reinforcer is less likely to lose its effectiveness through satiation. Because interval schedules are by nature intermittent, we're less likely to satiate the client on the reinforcer. But interval-based schedules, again because they are intermittent schedules, are really, really bad for shaping or teaching new behavior, because some responses won't contact reinforcement. Even if a response is the correct target response, if it occurs within that period of time, that blackout window of the interval, it won't contact reinforcement. So we could miss or lose some target behaviors that we actually want to reinforce if we're using an interval-based schedule. Instead, if you're teaching a new behavior, use a continuous schedule of reinforcement. And just like with our ratio-based schedules, if you lean out your schedule, that is, increase the requirement for reinforcement, too quickly, we can see ratio strain. If we were to go, for instance, from a fixed interval three seconds, where after three seconds pass and we emit the response we earn the reinforcer, all the way out to a fixed interval three hours, then the learner is probably going to contact extinction. We're going to see that the response dies away very quickly. So when you're leaning out your schedule, go slowly, work over time, be patient. Interval-based schedules are also pretty confusing because people have no frame of reference for them. They're very difficult to understand, and we often don't think about them in our day-to-day life because we lump them into this idea of deadlines. We like to think of them as deadlines, but they're not. Let me tell you the difference between an interval-based schedule and a deadline. I think even Miller uses this example: Congress has to pass all of its bills before going home for summer recess, and there's this idea that the frequency of bill-passing produces a kind of scalloped pattern.
So if the pattern of passing bills is scalloped like this, is this interval-based responding? Well, no, because remember what I said: Congress has to pass those bills before they go home. They have a deadline. They have to do all of their responding before the deadline. And we said that an interval-based schedule means that a period of time has to pass, and then when there is a response, it earns reinforcement. So this is the opposite: you have to do all of your responding before time ends, or you won't earn the reinforcer. This is something else. This is not an interval schedule. This is a deadline, also known as a limited hold. Another example: placing bets at a racetrack. If you were to call your bookie and say, "I want to put $10 on Gold Thunder," you cannot place that bet after the race begins. You cannot place a bet on a sporting event after the event begins, which means you have to place your bets before an arbitrary point in time passes. You have to do your responding before. This produces a scalloped pattern, just like submitting homework right before the deadline: all of your responding piles up right before the deadline passes. That is not an interval-based schedule. An interval schedule says that during this period of time, responses will not contact reinforcement, but the first one emitted afterward will. That's interval. This has a deadline, and if it has a deadline, it is something called a limited hold schedule. A limited hold schedule is one where the reinforcer is only available for a period of time before the deadline occurs. After the deadline, no response will be reinforced. So our Congress example is a limited hold schedule, not an interval schedule. Another example would be studying for an exam. You can study up until the exam, but continuing to study after the exam won't pay off, because the opportunity has passed. You can't place your bets after the horses are released, because the opportunity has passed.
These are limited hold schedules because they have that deadline. An interval-based schedule is the opposite: the reinforcer is available only after the interval has passed, and any response that occurs before the interval has passed will not be reinforced, whereas in a limited hold schedule you have to respond before the deadline to earn the reinforcer. So remember, a limited hold schedule is one that contains a deadline. Let me give you an example of something that truly is an interval-based schedule, and I hope this will help. Imagine that you cannot complete your quizzes for class until after I post them, after I make them available. In this case, if you log on before the quiz is available, you can't take it; you can't contact reinforcement. And those are based roughly on a weekly schedule, right? I say that I'm going to try to post them by a certain time, but if you log on too quickly, you're not going to be able to see them. It's only after that period of time has passed that you can contact them, because in our case I don't have a firm time when they become available. This is probably a variable interval schedule: sometimes you log on a little earlier and see them, sometimes a little later, but you can't predict how quickly they're going to be available. Responding too quickly, logging on too early before the quiz is available, means you can't contact it; you can't earn the reinforcer. It's only after the period of time has elapsed that you can contact the reinforcer. That is an interval-based schedule. As opposed to "you have to respond before the deadline passes to earn the reinforcer," which is limited hold, in an interval-based schedule you have to wait for time to pass for the opportunity for reinforcement to become available.
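[Editor's note: the quiz example and the deadline examples are mirror images, and the contrast is easiest to see side by side. This is an added illustrative sketch with made-up names and times.]

```python
def limited_hold_reinforced(response_times, deadline):
    """Limited hold: only responses emitted BEFORE the deadline can be
    reinforced; once the deadline passes, responding earns nothing."""
    return [t for t in sorted(response_times) if t < deadline]

def interval_reinforced(response_times, interval):
    """Interval schedule: only the FIRST response AFTER the interval has
    elapsed is reinforced; earlier responses earn nothing."""
    after = [t for t in sorted(response_times) if t >= interval]
    return after[:1]

responses = [1, 2, 4, 5]
print(limited_hold_reinforced(responses, deadline=3))  # [1, 2]: only responding before the deadline counts
print(interval_reinforced(responses, interval=3))      # [4]: only the first response after time passes counts
```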
These are pretty challenging, but I want you to be able to tell the difference between them, because it's important. Functionally, what they look like is very similar; it's just a question of where you see more responding. In any kind of deadline schedule, you're going to see a very steep scallop right at the end. We all do it, and what I want to leave you with is that that stuff is natural. We often get down on ourselves for procrastinating, but I want you to know that it's a very natural, very human response. It's the result of a limited hold schedule. Everybody does it. If you understand that deadlines do that, though, if you understand that that is the pattern of responding that this particular schedule produces, then you can change the environment to decrease the probability that it will occur. When we come back, we'll talk about how you can tell the difference: are you looking at a ratio schedule or an interval-based schedule? Thank you so much for joining me. I'll see you next time.