All right everybody, welcome back. I'm Veronica Howard. Let's talk about ratio schedules of reinforcement. When we talk about ratio schedules, we're typically talking about one of two categories. Either a fixed ratio schedule, which is one where the reinforcer is delivered after a fixed number of responses, and that number of responses is the same every time, or a variable ratio schedule, where the reinforcer is delivered but the number of responses it takes to earn the reinforcer can be different from trial to trial. Sometimes it might be two and other times it might be eight, but on average the number will be around whatever your VR schedule is. Now note that a fixed ratio schedule can also describe continuous reinforcement: fixed ratio one is the same as a generic continuous schedule. Both of them refer to the idea that every response is going to contact reinforcement. Ratio schedules tend to be written as VR or FR followed by a number, and that number tells you how many responses you have to emit to contact the reinforcer. An FR one means that you have to emit one response to contact the reinforcer, and every single response will earn the reinforcer. An FR seven means you have to emit seven responses to contact the reinforcer. A VR seven means that on average you'll be emitting seven responses; sometimes it'll be fewer, sometimes it'll be more, but on average it's about seven. As I mentioned before, each schedule produces a very specific, characteristic pattern of responding, so I also want you to understand how to read what's called a cumulative graph. A cumulative graph is one where the line never goes down, and the pattern you see in the line shows you the rate of responding.
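If it helps to see the bookkeeping concretely, here is a minimal sketch of the two schedule types in Python. This is my own illustration, not part of the lecture material; the names fixed_ratio and variable_ratio are made up, and the variable ratio draws each requirement uniformly so that the requirements average out to n.

```python
import random

def fixed_ratio(n):
    """FR n: deliver the reinforcer after every n-th response."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True   # this response contacts reinforcement
        return False
    return respond

def variable_ratio(n):
    """VR n: the requirement varies per trial but averages n responses."""
    state = {"count": 0, "target": random.randint(1, 2 * n - 1)}
    def respond():
        state["count"] += 1
        if state["count"] >= state["target"]:
            state["count"] = 0
            state["target"] = random.randint(1, 2 * n - 1)  # mean is n
            return True
        return False
    return respond

# FR1 is continuous reinforcement: every response earns the reinforcer.
fr1 = fixed_ratio(1)
print([fr1() for _ in range(3)])                 # [True, True, True]

# FR7: only every seventh response is reinforced.
fr7 = fixed_ratio(7)
print([fr7() for _ in range(14)].count(True))    # 2
```

Over many responses, variable_ratio(7) delivers about one reinforcer per seven responses, just like the FR7, but at unpredictable points.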
The closer the line gets to perfectly horizontal, the slower the behavior is happening, because the pen is always moving along the horizontal axis with time, and every time there is a response there's a little tick up in the pen. So the closer the line is to horizontal, the fewer responses are occurring, and the closer it gets to completely vertical, the faster the responses are occurring. In this case what I'm showing you is what happens in a fixed ratio pattern. First we have a high rate of responding. Remember, the closer you get to perfectly up and down, the closer to 90 degrees, the faster the behavior is happening, and in a fixed ratio pattern we often see a very fast rate of behavior, and it tends to be faster when we have larger ratio requirements. So it's going to be steeper if you have an FR50 than if you have an FR5. We also see that once the reinforcer is delivered, as is indicated here with this little tick mark, we get something called the post-reinforcement pause. This is where the learner sort of takes a break and maybe consumes their reinforcer, and remember, those little tick marks show you when the reinforcement was delivered. Okay, so what we see in the ratio patterns of responding is that a fixed ratio pattern produces what you see on the right. It produces something called a stair-step pattern, where it goes up and over, and up and over, and it looks very consistent every time. There are always going to be these little breaks after the reinforcer is delivered, so you're going to get a very particular pattern of responding. What you see on the left is what happens when you use a variable ratio schedule.
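To make the stair-step idea concrete, here is a tiny sketch of how a cumulative record is built. It's just an illustration under my own assumptions (cumulative_record is an invented name, not standard terminology): each time step contributes a 1 if a response occurred and a 0 if not, and the running total only ever rises or stays flat.

```python
def cumulative_record(events):
    """Turn a per-time-step response log (1 = response, 0 = none)
    into a cumulative record; the total never decreases."""
    total = 0
    record = []
    for responded in events:
        total += responded
        record.append(total)
    return record

# A fast run of responses, a post-reinforcement pause, then another run:
events = [1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1]
print(cumulative_record(events))
# [1, 2, 3, 4, 5, 5, 5, 5, 6, 7, 8, 9, 10]
```

The steep runs are the fast responding and the flat stretch in the middle is the pause; plotted against time, this trace is one step of the stair-step pattern.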
It's very smooth, the responding is very consistent, and you don't get as many breaks, because remember, when it's an unpredictable schedule of reinforcement, the learner could potentially contact the reinforcer on their very next response. So you see the learner responding, responding, responding, just as quickly as possible to earn as much of the reinforcer as possible. The difference here would be like the difference between a slot machine, where people pull on that slot machine for hours because the payoff is always based on the number of times you pull, but the number of pulls it takes is different every time, versus something like, give me 10 responses and then you earn your reinforcer. You'll get your 10 responses, then the learner will take a break. So when and under what circumstances would we use this? Remember, there are benefits to different schedules, and you use them for different reasons. With ratio schedules, remember that an FR1, a continuous schedule of reinforcement, is really good for teaching a new behavior. It leaves very little ambiguity or uncertainty about what the target behavior actually is and what earns the reinforcer. We also see that a leaner schedule of reinforcement, if you go from an FR1 out to something like an FR3 or an FR5 or an FR10, can help combat satiation, and it means the behavior is more resistant to extinction. But there are some limitations to ratio schedules. Remember that ratio schedules may not work well when you're teaching a new behavior if you use a very lean one. The exception to that is an FR1: you want to make sure you're using an FR1 to teach new behavior. If you use an FR10 to teach new behavior, you're going to get a lot of extinction-induced variability. The learner doesn't really know what to do.
They'll just give you a lot of weird behavior. We also see that when you progress really quickly, when you go from that FR1 or that nearly continuous schedule of reinforcement and you lean out very quickly, you can produce a phenomenon known as ratio strain. Ratio strain occurs when the requirement for reinforcement is increased too quickly, where the learner suddenly has to emit a lot more behavior to earn the reinforcer. You want to be careful when you're leaning out your schedule that you don't go too quickly. Let's see if we can identify some of these different schedules. Tell me whether this scenario is a fixed or a variable ratio schedule. You're a server, you're a waiter, tips are your reinforcer, and not every diner tips. What schedule, fixed or variable ratio, are you working under? This is probably a variable ratio schedule, because we were very clear that not every diner tips. If you were a server and every diner tipped, you would be earning the reinforcer every time, which means you're on a fixed ratio one schedule. In this case, we're probably looking at a variable ratio schedule because not every diner tips and you cannot predict which diner is not going to tip. That means sometimes you're going to get the reinforcer after every two diners and sometimes maybe after every four, and depending on where you're serving, that can change that schedule of reinforcement. If you're working at a college coffee shop, you might get more tips than if you're working in Branson, Missouri with tourists and older folks. In this case, we're probably looking at a variable ratio schedule because it's less predictable. What about this scenario? Your grandma's searching garage sales for real treasures. Most of what she finds is junk, but every third sale has a great item. What schedule of reinforcement is this? The clue here is every third. In this case, attending the garage sales and searching around, that's the behavior.
Every third instance of the behavior, every third garage sale, has something that your grandmother buys, so it's very consistent. Because it's every third every time, we're looking at a fixed ratio three schedule of reinforcement. Let's do one that's a little bit more difficult. What about these two kids? Occasional praise from mom might keep us on track when we're helping out in the garden. Jimmy is praised for about every five weeds. Carolyn is praised for exactly every five weeds. What schedule is Jimmy operating under? What schedule is Carolyn operating under? Jimmy earns praise around every five, which means we're probably looking at a variable ratio five. Sometimes he's going to earn the reinforcer after the third weed, sometimes after the eighth, but on average it's about five. Carolyn, however, gets it every five, which means she's on an FR, or fixed ratio, five. So Jimmy's on a VR five and Carolyn is on an FR five. I hope this helps give you some grounding, some clarification, about what a ratio-based schedule is. We've talked a little bit about how to read a cumulative graph, and we've talked about some of the advantages and disadvantages. Remember that a more continuous, more robust schedule of reinforcement, one where you earn a lot of reinforcers, is great for teaching a new behavior, but pretty bad at maintaining that behavior in the long run due to satiation. However, you want to be careful when you're leaning out or thinning out that schedule to make the learner give you a little bit more, because if you go too quickly, you're going to get something called ratio strain. I'll see you guys next time.