All right, let's start with interval schedules, folks. These have to do with time plus one response. Remember, we're talking about intermittent reinforcement, and with intermittent reinforcement we're thinking about the delivery of reinforcers for some responses, not for every response. But now we're going to do this with a time component in it. So we have variable interval schedules and we have fixed interval schedules, just like the others were fixed and variable. We'll start with the fixed interval schedule. On a fixed interval schedule, the first response after a given amount of time has passed is reinforced. Okay? Exactly where that response falls doesn't really matter. The point being: if we're on, let's say, a fixed interval five minutes, an FI 5, the first response after five minutes is reinforced. On an FI 2, fixed interval two minutes, the first response after two minutes is reinforced. You get how this works, right? What this does is set up a timing scenario with the organism. The fixed schedule produces an interesting pattern of responding, and that pattern is a scalloped effect. There's a pause, just like the other fixed schedule, and then responding starts to accelerate. You get more and more responding as you get closer to the completion of the interval. The moment the interval is complete, bang, reinforcer, right? Then behavior flattens out: you get a post-reinforcement pause, and the whole thing starts over again.
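The FI rule described above, "the first response after the interval elapses earns the reinforcer, then the clock restarts," can be sketched as a tiny simulation. This is a hypothetical helper for illustration, not anything from the lecture; response times here are just numbers on a clock (minutes):

```python
def fixed_interval(responses, interval):
    """Simulate a fixed-interval (FI) schedule.

    responses: sorted list of response times (minutes).
    interval:  the FI value -- after each reinforcer, a new interval
               starts; the FIRST response once it elapses is reinforced.
    Returns the times at which reinforcers are delivered.
    """
    reinforcers = []
    next_available = interval  # reinforcer "sets up" at this time
    for t in responses:
        if t >= next_available:
            reinforcers.append(t)          # first response past the interval
            next_available = t + interval  # clock restarts at reinforcement
    return reinforcers

# FI 5: the first response after each 5-minute interval is reinforced.
times = [1, 3, 4.5, 5.2, 6, 9, 10.4, 12, 15.8]
print(fixed_interval(times, 5))  # -> [5.2, 10.4, 15.8]
```

Note that the responses at 1, 3, and 4.5 minutes earn nothing, which is exactly why responding tends to pause right after reinforcement and accelerate toward the end of the interval.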
So you get this interesting scalloped effect, which reminds me of one of my favorite cartoons: behavior analysts at the beach, looking at the waves and calling them FIs, and everybody laughing. It's funnier when you see it than when you tell it; it's a visual thing. Maybe we can put something up in the background. Anyway, fixed interval schedules are not super resistant to extinction, but they're not bad either. The other thing we have is the variable interval schedule of reinforcement. Variable intervals are what you might imagine: on average, after a certain amount of time has passed, the first response is reinforced. So on a VI 10, the first response is reinforced after 10 minutes on average, which means the first response might be reinforced after one minute, the next might be reinforced after 20 minutes, and so on, but it averages out to 10. That maintains a strong, steady rate of responding, basically because you never know when the reinforcer is coming. Although we shouldn't say "know"; that's being mentalistic, and around here that's almost a crime. Never mind. Anyway, VIs and FIs: those are the other basic schedules we talk about. Then we start to put all these things together, folks, and when we put them together, we can really do all sorts of stuff: concurrent schedules, mixed schedules, multiple schedules, all these different things. So there's a whole other video on the combinations of these simple schedules. But the simple schedules are really what you need to know to start to understand behavior. There's been a theme lately of being on slides, so here's another slide.
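The VI rule works the same way as the FI rule, except each interval is drawn at random around the mean. A minimal sketch, assuming (as a common textbook simplification) that intervals are drawn uniformly between 0 and twice the mean so they average out to the schedule value; this helper is hypothetical, not from the lecture:

```python
import random

def variable_interval(responses, mean_interval, seed=0):
    """Simulate a variable-interval (VI) schedule.

    responses:     sorted list of response times (minutes).
    mean_interval: the VI value -- intervals vary but AVERAGE this long.
    The first response after the current interval elapses is reinforced,
    then a new randomly drawn interval begins.
    """
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    reinforcers = []
    next_available = rng.uniform(0, 2 * mean_interval)
    for t in responses:
        if t >= next_available:
            reinforcers.append(t)
            # new interval, drawn at random but averaging mean_interval
            next_available = t + rng.uniform(0, 2 * mean_interval)
    return reinforcers

# VI 10: a steady responder (one response per minute for 100 minutes)
out = variable_interval(list(range(1, 101)), 10)
print(out)
```

Because the organism can't predict when the next interval ends, every moment is a plausible time for the reinforcer to be available, which is what produces the strong, steady responding the lecture describes.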
Oh, that hurt. Got it. All right. So I don't know if there's an interval or a ratio operating here, because we've been on two slides today. I think I've been reinforced both times because I didn't fall on my bum, so I don't know. Wow, that was actually rather fast. I learned a lesson today about sliding on metal in wool: it makes you go, whew, rather fast. That kind of hurt my wrist. Anyway: variable interval, fixed interval, interval schedules of reinforcement, intermittent schedules. That's all you need to know for now. There's more, but you can look it up. Read the books like I did. See ya.