I think we'll start this out with the demo. Ready for this one? This is the multi-element design. Bet you're trying to figure out what it is. We do call them multi-element designs, right? Why? Because they have multiple elements. Ironically, so does an A-B-A-B design, but we don't call it "multi-element" there because there are only two. Sometimes our field drives me crazy. Anyway, this is the design you use when you have more than one intervention type. So you have an A baseline. You have a B intervention. Then you have a C, intervention number two. You might even have a D, intervention number three. So: A's are baselines, B is the first intervention, C is the second intervention, and D is the third intervention. And you can put interventions together, too. You can have a B-C type intervention, where you combine the two. You can have a C-D type intervention. You can have a B-D intervention. And then you could have a B-C-D intervention, where you put them all together. The problem is pulling this all apart empirically and establishing a pattern with which you can test this stuff, OK?
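To see how fast the conditions multiply, here's a minimal sketch in Python (purely illustrative; the only thing taken from the video is the A/B/C/D labeling) that enumerates every intervention "package" you can build from the three component interventions B, C, and D:

```python
from itertools import combinations

# Illustrative only: enumerate every intervention package buildable from
# three component interventions (B, C, D), singly or in combination.
components = ["B", "C", "D"]

packages = []
for size in range(1, len(components) + 1):
    for combo in combinations(components, size):
        packages.append("+".join(combo))

print(packages)
# ['B', 'C', 'D', 'B+C', 'B+D', 'C+D', 'B+C+D']
```

Seven candidate conditions besides baseline A, from just three components, and each one is something you'd eventually want to evaluate against baseline.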
So I've thrown up some examples over here, and you can look at these different types of combinations. The problem is that we're not doing everything we need to do to establish for sure which one is the most effective. Why? Because to do that, we're going to have to start throwing in lots of A's too, OK? So A, B, A, B, all right? Now we have that basic A-B-A comparison. Then we would need to get an A-C comparison in there. Then we would have to compare B's to C's. Then we would have to compare A's to B's-plus-C's, and you get the idea, right? We have to make all of these comparisons back to baseline in order to draw the logical conclusions, and it gets very long and very complex to pull off. And to make matters worse, we're running into sequence effects. If we want to tease out sequence effects, then we've really got to do a bunch more work. Imagine the complexity of a multi-element design where you only have a B intervention and a C intervention, but you want to tease apart the sequence effects of presenting B before C versus presenting C before B, and also compare the effects of B and C together in relation to either C alone or B alone, all in comparison to baseline. Just talking about it gets confusing, as you might imagine, but pulling it off, and waiting for stability within each condition? Well, you could start your experiment this year and you might get done when you're old. And notice I didn't define "old," because there are just so many layers that you would have to work through here. So multi-element designs are really awesome, and they're cool because you can actually make all these different comparisons, but you do not have that absolutely rigid level of control, of internal validity, that you would have with the traditional withdrawal design.
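The blow-up described above can be sketched with a few lines of Python (again purely illustrative; the condition names are mine, not from the video): even with just two interventions and their combination, count the comparisons you'd need to rank the conditions, then count the condition orderings you'd have to run to control for sequence effects.

```python
from itertools import combinations, permutations

# Illustrative only: two interventions (B, C) plus their combination (B+C).
conditions = ["B", "C", "B+C"]

# Each condition must be compared back to baseline A...
to_baseline = [("A", c) for c in conditions]
# ...and the conditions must be compared against each other to rank them.
head_to_head = list(combinations(conditions, 2))
print(len(to_baseline) + len(head_to_head))  # 6 comparisons

# Controlling for sequence effects means running the conditions in
# counterbalanced orders, which multiplies the work again.
orderings = list(permutations(conditions))
print(len(orderings))  # 6 possible orderings of just 3 conditions
```

And that's before interspersing return-to-baseline phases between conditions and waiting for stability in each one, which is where "you might get done when you're old" comes from.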
So you can build them that way, and you can make them very complex, but it's very, very challenging to do and highly unlikely that we're going to do it in the real world. Typically speaking, they have lower levels of internal validity. You can't draw conclusions about functional relations as well as you might like, and they're really sensitive to sequence effects unless you balance those out. So if you're going to do A, B, C, A, then make sure you get A, C, B, A in there somewhere too, OK? And then you can intersperse your A's to make sure you're controlling for carryover effects and practice effects. It just starts to get really complicated, folks. These types of designs are insanely confusing, but they're cool because you can start to tease these things apart. Let's say you did put a B against a C and against a B-C combination, and you find out that the C is better than either the B or the B-C together. Then you can start to target your intervention with just the C, and now you can put that into an experiment by itself. You see my logic. So you can use these designs as a starting point and build on them. If the data are giving you a hint about which condition is the most effective, then OK: kill the other ones and start with the one that seems to be the most effective compared to the others. Lots of fun to be had here, and lots of experimentation that can last you the rest of a lifetime. In fact, I suppose you could build a career around it, as many of us have. We haven't quite figured out what reinforces us here at Psych Core, but we know one thing's for certain: you could probably, maybe, possibly deliver some reinforcement for us by liking, subscribing, and sharing. It might keep the videos coming, because who knows, it's a pretty damn thin schedule that we're on, and who knows when we're going to reach that extinction break point and it just starts to go downhill.
I don't know when that's going to happen. So prevent it, please.