It's a good thing I've got tools around to take care of the problems in my life. Sorry folks, we're trying to introduce you to a paper: "An Implicit Technology of Generalization." It's a good one, as most of them are when we put them on our channel, because we're not going to put up bad ones. Then again, maybe we should put bad ones on the channel. Maybe we already have and you just haven't figured it out yet. Anyway: "An Implicit Technology of Generalization." If you're wondering why I look like I belong in space at the moment, or on a baseball field with weird glasses, it's because I'm trying to make a point about this article, which is that supposedly you can train for generalization. And it's possible, if you combine a bunch of articles together, that you realize you might be able to train for over-generalization, which might simply be the case here. So you get the idea. I don't think there's anything too scary about using tin snips on paper and having stuff jump back into my eyes, or potentially somebody throwing stuff at me and hitting me in the helmet, which, let's be honest, is something people would do. Whoever wore this helmet back when he was playing baseball had a smaller head than me, because we all know I have a big head. Alright, sorry. Stokes and Baer, 1977. We can do this article really quick or we can do it really slow, so we're going to find a combination, even though I've got two pages of notes here. Are you ready? The basic idea is that Stokes and Baer did a review of generalization training: a hundred-some articles that had been published in the behavioral literature up to 1977. So, still early in the official field, though there was already a lot being published, and obviously there were behavioral articles published before 1968 too. Anyway, side note. So, generalization at that time, up until this article:
Mainstream thinking held that generalization was essentially a side effect. A positive one in a lot of cases, but a side effect. It was a passive process, not an active one; you didn't program for it. Now, of course, everybody watching is going, "How are we even having this discussion?" We're having it for historical context, and the reason we do things differently now is largely this article. Stokes and Baer reviewed the journal articles out there and said, "Hey, we're going to look at all the ways people have addressed generalization." The important core point, which I've already said, is that up until then, and "now" here means 1977, generalization was considered passive. You trained discrimination, discrimination, discrimination, and you ended up getting generalization, and people thought that's just what happens when you do discrimination training. Sure, in a sense. But Stokes and Baer took a different approach. They thought: what if you could actually program for it? Isn't generalization a response in and of itself? We'll get into that in a few minutes. All right. So they identified nine general categories of generalization procedures. These categories didn't exist beforehand; they were defined after the fact, post hoc, as in, "These are the nine different categories of generalization procedures we have found." We're going to zip through them really quick. First: train and hope, also known as spray and pray. Looking across a whole bunch of experiments, a huge number of the articles that took this train-and-hope approach showed successful generalization, something like 90 percent. But it was almost accidental, and the focus was still on this vague organism-level response.
Like the organism just did it; it wasn't something you could build into your programs. Anyway. Next: sequential... I hate it when I write stuff so quickly, or so early in the morning, that I have no idea what I wrote. Sequential modification. I'll just read the principle, it's easier. Sequential modification means consistently introducing different scenarios, different stimuli, and so on, being nice and sequential about it, and kind of forcing the organism to experience all these different versions of things. It works for generalization, but it can get you a type of rigidity. We'll move on. Introduce to natural maintaining contingencies. I have a note here on this one: it introduces behavior traps. It was really funny; I was reading this and thought "behavior traps," and then literally three lines later, there's a behavior trap. I've read this article too many times; I knew it was coming without even realizing it. So anyway, behavior traps, et cetera. Not technically generalization; it's really more about transfer of stimulus control. As we come down this list, you're going to start to see, or hear, that the strategies become more and more about actively programming for generalization. Next: train sufficient exemplars. This one's interesting. If you train too many, you're back to sequential modification (I still can't read my own writing); if you don't train enough, you don't have enough exemplars. So you have to find the happy medium. Where is it? Nobody knows. In fact, it's an area that's probably still ripe for research, because it's probably different for everybody based on their learning history, which we'll find out when we get into this article and other articles about generalized imitation.
Because all of it's different all the time, and that's just the way science is. All right. So, train sufficient exemplars: you don't have to train every example of a scenario. Say you're trying to teach me about cats or fuzzies or whatever those things are called. What is that, a stuffy? What do you call them, Brad? A stuffed animal? Stuffed animal, yeah. We could come up with a bunch of different examples; go to my daughter's room and there are more of them than I care to admit. Anyway, we could show all of them, or we could show a handful. If you show a handful and then present one, "What is this? Label this, tact this," and they say, "It's a stuffy," congratulations. But if you train too rigidly, when they get a stuffy they haven't seen, they're like, "I don't know what that is," because it doesn't fit the concept they've formed. Anyway: train sufficient exemplars. You all know this stuff. Train loosely. These are the guys who named all this stuff; it's right here, there's a whole section on training loosely, and they reviewed all the articles on it up to that point. The principle: don't be overly strict. Brad, you do a lot of training loosely; you talk about it all the time, right? Yeah, there's an example. Here's the irony, though: to demonstrate training loosely in a journal article, the experimenters must not be loose. It has to be a very rigid, well-controlled condition of "training loosely." So there's a certain level of irony when you start to look too closely at some of this stuff. All right.
Let's see what else. Indiscriminable contingencies. This one's awesome. What does "indiscriminable contingency" mean? It literally means you can't tell what contingency you're on, and you can actually train that, which is really kind of cool. Think about resistance to extinction: if I get you hyper-resistant to extinction, you'll just keep engaging in the behavior after extinction starts. Reinforcers? Who cares, they're irrelevant, because you're so resistant to extinction you don't even know what contingencies you're operating under. Essentially, this is about removing the setting events so the organism can't tell which contingency it's operating on. There's a lot of delayed reinforcement involved here. In fact, delaying the reinforcers is a really good way to get generalization, even though it's a horrific way to shape behavior. So these strategies start to contradict each other; it really depends on what you're trying to do. And per my notes, in an odd sense this is a type of training loosely, which sounds weird to think about. But really what you're saying is: I've got this rigid contingency and I'm going to start making it less rigid. It's not going to be a fixed schedule; it's going to be a variable schedule, and then you stretch it out, and you may delay the reinforcers. The organism, or a person, gets into a scenario and goes, "I don't know what to do here. I'm just going to do something that's worked in the past." Whoa, generalization. And they don't even know the reinforcement contingency. Next: program common stimuli. This one's really good. My kid plays hockey, so when I was reading this I thought, hockey stop sign. It was so important that I actually wrote it down clearly: a hockey stop sign.
Now, I don't know if you've seen these, but everybody knows what a stop sign is, right? Up in Canada, and I don't know if they do it in the States now, kids at a certain age can't check from behind; if you check someone in the back you can hurt them really badly, it's a no-no. So what did they do? Instead of just telling the kids in practice, "Don't hit people in the back, don't hit people in the back," they stitch a stop sign literally right there on the back of the jersey. They put a common stimulus in the environment at the point where you want the behavior to stop. Boom: you're skating up behind someone, you see the stop sign, and you know what to do. It works; it's really cool. So: make the training environment similar to the real world by using the specific stimuli that will exist in the real world; bring those stimuli into your training. This is really just a summary of the article, and I think it's a great article. Mediate generalization: develop a skill that is useful in more than one scenario. For example, if you want to cut things. In fact, I never formally learned how to use these tin snips; nobody gave me ten steps or taught me how to squeeze them. I learned to use tin snips because I'd learned how to use scissors. That was a generalized "scissors response." So it's a skill that's useful in more than one scenario. Then: train to generalize. Here's the important one, the one I love the most, and I'm going to say very little about it because I think we're getting to the end of this video. The most important idea is that maybe generalization is a response in and of itself. Can you train it?
Can you train an organism, quote-unquote, to generalize? Can you train a person? Can you train me? I don't know. Maybe that's how all of this came about; maybe I was trained to generalize a little too much. The point being: can you reinforce the behavior of generalizing, of emitting novel responses? Another term for this might be creativity, but we'll leave that for another date, or for another article, when we talk about Karen Pryor's work. So yeah, train to generalize. That may be an interesting thing, and I'm going to wink-wink, nod-nod at generalized imitation here, because that's an article we'll probably cover someday. So there you go: Stokes and Baer, 1977.