and children. Young, Krantz, McClannahan, and Poulson, 1994. Again, an article from... not my birth year, the year I graduated high school. Anyway, so winter '94. I always start the articles the opposite way. This is a good article. This article sucks. It doesn't suck; it's just tricky. Anyway, here we go. Notes: they had four subjects. Some people call them participants; I like to call them subjects. And if you're wondering why I have this, it got thrown at me in a previous video. So this is response prevention. We're shaping our shit. I saw that coming. Very slow. All right, so four subjects, and a bunch of stuff going on.

For those of you that don't remember what generalized imitation is, it is imitation that is generalized. If you go way back to the Stokes and Baer article, and the video we recorded on it previously, they start hinting at the fact that maybe you can train generalization as its own response. Maybe it's its own operant, maybe it's a response class, and so on. Then you get all the generalized imitation stuff that Baer was working on, which we'll cover in another video, I'm sure. Zip forward to 1994 and everyone goes, look, you can train generalization as a response. And some people are going, nah, probably not, maybe there are more limits to it. So this study followed up on some of those limits.

They do a great literature review of this, by the way, including an article by my advisor that I didn't know he had written. It's a great review of the literature on basically where generalized imitation fails and what its limits are. And in general... sorry, pun not intended. They describe the scenarios where it fails, because maybe generalized imitation only happens across certain response topographies. Think of it as subclasses. I don't like to think of it as that real or reified, if you will; I like to think of it as, okay, I understand there may be some limitations here. I don't like the term subclasses for a lot of reasons. It starts to make me think of, what do you call it, stage theories of change.

All right, so what did they do with the kiddos? They basically trained them to imitate, and they had several different response... what do we call them? Shit. Topographies, if you will. So they had vocal models, toy-play models, and pantomime models. The vocal models were just short phrases, different for each kid. One had things like "eat a cookie," "you help me," and "tie a shoe." Seth had "I ride a bike," "you hug me," and "do a puzzle." Another had "happy girl" and "eat cookie." So they had all those things.

And then David's were different. He got to do really cool ones: vocal-plus-toy-play models. So he got to play with his... whatever you call it. Anyway, like playing with this thing here. And I was like, oh, whoops, maybe it'll generalize. You get the idea: he's playing with stuffies or playing with toys, whatever it is, and making vocalizations with them. And they do this for all these different things: the toy-play models and also the pantomime models. And they did them in sets. It was a really complex sort of study, but it was really cool. Two-thirds of the models were trained, and one-third were probes. So you did a whole bunch of these things.
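Just to keep that structure straight before we get to the trial numbers, here's a minimal sketch of how I picture the setup. This is my sketch, not the article's: the phrases and per-class counts are placeholders pulled from what I said above, and the two-thirds/one-third split is applied per class the way I read it.

```python
# My own sketch of the design, not the article's actual stimulus lists.
# Each kid had models in several response classes; within each class,
# two-thirds of the models were trained and one-third were unreinforced probes.
models = {
    "vocal":     ["I ride a bike", "you hug me", "do a puzzle",   # placeholder
                  "eat a cookie", "tie a shoe", "you help me",    # phrases, for
                  "happy girl", "big dog", "red ball"],           # illustration only
    "toy_play":  [f"toy action {i}" for i in range(1, 10)],       # placeholders
    "pantomime": [f"gesture {i}" for i in range(1, 10)],          # placeholders
}

for response_class, items in models.items():
    cutoff = (2 * len(items)) // 3                 # two-thirds trained
    trained, probes = items[:cutoff], items[cutoff:]
    print(f"{response_class}: {len(trained)} trained, {len(probes)} probes")
```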
They had 20 sets with 27 trials each: nine vocal, nine pantomime, and nine toy play. You'll have to read the article to get more detail out of that, and I'm going to come back to it in a minute, because I got really frustrated when I was reading this article; I couldn't figure it out. I'll explain why in a minute.

So they looked at percent matching and non-matching within six seconds of the model. You present the model, and did the kiddo match or not match, did they imitate, within six seconds? And again, on the probes, they're not going to reinforce you if you do it; on the regular trials, the training trials, they are. And they had different types of reinforcers: it was verbal praise, and they added food to it for everybody. So: model alone, which I just went over, and model plus praise, where they added food along with the verbal praise.

Let's just get to the results, because they're really cool. There are four gigantic figures in here, and because of copyright issues I can't show them to you, but I can look at them. And look at that, oh, it's really clear. I can hold them up here, you just can't see them. Take that one right there. So there are four of these things, and what you have in the top half is the training scenarios, and in the bottom half the generalization scenarios, the probe trials. So you should see that when you're training them, you're getting more generalization, more... what was the term they used? More matches versus non-matches, and it should switch over.

They used a multiple baseline design. It was great; they did a really good job. And they got exactly what you would expect: you can get generalization within these things. But what was really cool and interesting was that the responses generalized within each one of those classes. Like I said: the vocal models; the toy-play responses, like I was doing with this dog-bear thing; the vocal-with-toy ones; and the pantomime responses, no words, just doing different movements. They were able to get these behaviors to generalize within any one of those classes, but they didn't get the generalization across classes. That's where the cool part of this article is: they're showing that generalized imitation doesn't just go across the board.

Each one of these rows in the multiple baseline, each tier, is a different class. You get the generalization within a class, you can see it crystal clear, right, across all those different probes and prompts and stuff. But they did not get it between classes. Otherwise the generalization would have happened before the intervention started, and you would have seen it during baseline. Anyway, they got this really consistently for all the kids. There was a big issue with one of the kiddos needing some additional instruction, but that's beside the point of this article.

So, take-home message, if you listen only to the article, is this: generalized imitation may be a thing, but it's probably limited to topographies of responses, tighter groupings. It's not going to generalize outside of that. Generalized imitation is limited; that's the point. Awesome. Love it.
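Since that trial arithmetic is exactly what tripped me up (more on that below), here's a minimal sketch of the dependent measure as I read it: each trial presents one model from one class, you score whether the kid matched within six seconds, and you summarize percent matching separately for training trials and probe trials within each class. The trial records here are made up purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Trial:
    response_class: str  # "vocal", "pantomime", or "toy_play"
    is_probe: bool       # probe trials are never reinforced
    matched: bool        # did the child imitate within 6 s of the model?

def percent_matching(trials, response_class, probes_only):
    """Percent of one class's training (or probe) trials with a match."""
    subset = [t for t in trials
              if t.response_class == response_class and t.is_probe == probes_only]
    return 100.0 * sum(t.matched for t in subset) / len(subset) if subset else 0.0

# Made-up records for one 27-trial set: 9 per class, 3 of which are probes.
session = (
    [Trial("vocal", i % 3 == 0, matched=True) for i in range(9)]
    + [Trial("pantomime", i % 3 == 0, matched=False) for i in range(9)]
    + [Trial("toy_play", i % 3 == 0, matched=(i % 2 == 0)) for i in range(9)]
)

for rc in ("vocal", "pantomime", "toy_play"):
    print(rc,
          "training:", percent_matching(session, rc, probes_only=False),
          "| probes:", percent_matching(session, rc, probes_only=True))
```

The within-versus-across-class finding from the multiple baseline is just this computation repeated over time: probe matching climbs in a class once that class gets trained, and stays flat in the classes that haven't been trained yet.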
However, it just bothered me. I had 17 cups of coffee this morning while I was trying to read it. Oh, I promised I was going to tell you something. For a long time I sat there and couldn't figure out why this article was important. Why? Because I didn't understand it. I read this thing twice before this video, and I'm like, I don't get it. What's the big deal about this article? Several people recommended it, they were like, hey, you should cover this, it's a good article, but I couldn't get it.

And it was because I missed one sentence. I'd written it down here, but I kind of missed it when I was reading, trying to understand and integrate all this stuff into my verbal repertoire. It was the sentence about how many different stimuli they tested and how they rotated them around: the 20 sets with 27 trials each, the nine vocal, nine blah, blah, blah. That little section didn't click with me until I matched it up to the graph, and finally, all right, it made sense.

My point is, sometimes you get an article, you read it, and you move on. Don't forget, these things have been through the whole rigmarole of getting published. If something got published in the Journal of Applied Behavior Analysis and I go, huh?, odds are my response was the wrong one. So I went back to the article, re-read it, figured it out, found that sentence. The aha moment. So, all right, there you go. We all make mistakes, so keep that in mind. Sometimes you just don't get an article for a while. Go back, pick at different pieces of it, and keep trying to figure out what the problem is.

Last point. If you take the Stokes and Baer article from '77 and put this study in that context, I kind of think they didn't go far enough. They made their point that you can get generalized imitation and that there's a limit to it. But if you take the idea of training for generalization, programming for generalization, maybe they just didn't train sufficient exemplars. Maybe you could get it to generalize across response classes, across topographies or whatever, if you trained for that. I just feel like there's more to it, and I know more work has been done. That was the limitation I saw. I'm not going to conclude that you can't teach generalized imitation across the board based on this one article; there's probably more out there that might back that up, and they do seem to reference a bunch of evidence for it. However, I wonder if it's just a failure to program the technology effectively. Maybe.

Anyway, another article bites the dust. Me and Ubu are going to go have fun. See you.