So Kevin will be talking about the necessity of construct and external validity for generalized causal claims. The stage is yours. Where's the clicker? Is there a clicker? Oh, is this it? Yeah, that's the clicker. The green button. OK, I got it. Thank you.

OK. Over the past few decades, social science has seen the rise of something called the credibility revolution, which promotes a deductive approach to quantitative causal inference. The writers of the credibility revolution helpfully point out that just finding a statistically significant contrast, say between a treatment and a control group, is not enough to make a causal claim credible in this deductive sense; we also have to make an assumption of internal validity for the claim to be deductively true. Different designs may come with different technical or ancillary assumptions, but internal validity is the core assumption of the credibility revolution, and it's what drives it.

Yet while the authors of the credibility revolution emphasize internal validity, they almost entirely neglect considerations of construct and external validity. You probably already know that, but if you need evidence, look at the famous textbooks on quantitative causal inference: typically they don't even have index entries for construct validity or external validity. And it's not just this literature; science as a whole takes internal validity as the paramount consideration for causality, while construct and external validity are afterthoughts, if they're thought about at all. What we want to show is that, just like internal validity, construct and external validity are necessary for preserving the deductiveness of causal claims.
They're also essential for the accumulation of scientific knowledge. To show that, I'm going to use a wonderful paper by Gerber, Green, and Larimer, published a number of years back in the American Political Science Review, and one of my very favorite papers ever published in the APSR. They ran what's called a get-out-the-vote experiment. In the run-up to the 2006 primary election in Michigan, the authors sent different postcards to registered voters urging them to vote, and they wanted to know which message was most effective. They did it as an RCT, randomizing which postcard each registered voter got. Some got what we can call postcard A; if you read it, you'd see it's a civic duty message: it's your civic duty to get out and vote. Others were randomized to postcard B, which is more complicated, but if you read it, you'd see they're using social norms to create social pressure to get people to vote. The outcome was simply whether the registered voter actually voted, and of course they expected the social pressure postcard to have a bigger effect than the civic duty one.

Remember, the credibility revolution wants us to be deductive with our causal claims, which means being transparent about the premises: both the assumptions and the evidence. For the assumptions: because they randomized, they can well warrant an assumption of internal validity, so they make that assumption, along with the other technical and ancillary assumptions for an RCT. But they do not make any assumptions in the paper about construct or external validity. For the evidence: they did find a statistically significant treatment effect.
In fact, the registered voters who got the social pressure postcard were 8% more likely to vote than those who got the civic duty one. For this field, that is a huge effect; it's big enough that it could actually change an election, so it's a really important finding.

For their claim, they conclude, and this is a quote from their paper, that their results demonstrate "the profound importance of social pressure as an inducement to political participation." They write the claim in a way that makes clear this is a result that matters for democracy, not just for political scientists but for society as a whole.

Let's look for a second at how their assumptions connect their evidence to their claim. The evidence, across the top row, consists of variables that track postcards and recorded votes in that one time and place. But their claim about social pressure causing political participation in this general sense is framed in terms of events that occur in the real world, or what philosophers would call ontological referents in nature. And I can tell you that the readers of the APSR care about those ontological referents. They don't care about the actual data the authors collected; they care about the claim about the real world.

So let's think about how their assumptions connect their evidence to their claim. I mentioned that they did not make an assumption of construct validity. If they had, it would have connected their variables to the actual cause and the actual effect; because they didn't, we have to gray that assumption out. Likewise, an assumption of external validity would tell us how the results might generalize outside their study, where the results might transport to, and where they might not.
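To make the "evidence" side of this concrete, here is a minimal Python sketch of the kind of contrast an RCT like this yields. The turnout rates and sample size are illustrative placeholders, not the authors' actual data; the point is that randomization warrants exactly this statistical comparison, and nothing more.

```python
import random

random.seed(0)

# Hypothetical turnout data, loosely modeled on the design described above.
# Rates and sample size are made up for illustration, NOT the authors' numbers.
# 1 = voted, 0 = did not vote.
n = 5000
civic_duty = [1 if random.random() < 0.31 else 0 for _ in range(n)]       # postcard A
social_pressure = [1 if random.random() < 0.39 else 0 for _ in range(n)]  # postcard B

def rate(xs):
    return sum(xs) / len(xs)

observed = rate(social_pressure) - rate(civic_duty)

# Permutation test: under the null of no effect, group labels are
# exchangeable, which is exactly what the randomization warrants.
pooled = civic_duty + social_pressure
extreme = 0
reps = 2000
for _ in range(reps):
    random.shuffle(pooled)
    if abs(rate(pooled[:n]) - rate(pooled[n:])) >= abs(observed):
        extreme += 1

print(f"estimated effect: {observed:+.3f}")
print(f"permutation p-value: {extreme / reps:.4f}")
```

Note what this computation does and does not deliver: it licenses the statement that turnout differed across postcard texts in that one time and place, but nothing in it names social pressure as the cause, political participation as the effect, or says where the result travels.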
But again, they didn't make any assumptions about external validity, so we have to gray that one out as well. They did make an assumption of internal validity, so we can say they demonstrated that the probability that a registered voter voted differed across the A/B postcard texts in that one time and place. But notice there is nothing about internal validity that connects their evidence to their claim. If you rely on internal validity alone, the evidence is essentially disconnected from the claim, and as a result their claim is deductively false. Had they made assumptions of construct and external validity in addition to internal validity, the claim they made would be deductively true.

When we share this paper with our applied statistics colleagues, a lot of times they get agitated or upset with us. They'll say things like: geez, come on, dude, other people have already told us that construct and external validity are important. We all read Cook and Campbell in grad school, and now it's Shadish, Cook, and Campbell. We already know these things are important, so you're not telling us anything new. And what I want to emphasize is that that is not our argument. We completely agree with Shadish, Cook, and Campbell that construct and external validity are important; in fact, they're foundational for good science. Our argument is that, in addition, construct and external validity are necessary for preserving the deductiveness of a causal claim, and they're as necessary as an assumption of internal validity. Omitting any one of those assumptions makes the deductive claim false, and that in turn undermines the goal of the credibility revolution to have a deductive understanding of causality.
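The logical structure of that argument can be sketched as a simple conjunction: deductive warrant for a general causal claim requires all three validity assumptions, not internal validity alone. The function below is a toy illustration of that point; its name and signature are mine, not the paper's.

```python
# Toy model of the argument: a general causal claim is deductively
# warranted only if ALL three validity assumptions are made and warranted.

def claim_is_deductive(internal: bool, construct: bool, external: bool) -> bool:
    """Internal validity warrants the measured contrast; construct validity
    ties the variables to the real-world cause and effect; external validity
    ties the one-time result to the general claim."""
    return internal and construct and external

# The experiment as described in the talk: randomization warrants internal
# validity, but no construct or external validity assumptions are made.
print(claim_is_deductive(internal=True, construct=False, external=False))  # False

# With all three assumptions made (and warranted), the claim goes through.
print(claim_is_deductive(internal=True, construct=True, external=True))    # True
```

The asymmetry the talk complains about is visible here: dropping any one conjunct, not just internal validity, makes the claim deductively false.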
For this meeting, I want to go back to the slide I showed earlier and emphasize that internal validity in and of itself does not enable the accumulation of scientific knowledge. What internal validity tells us is that a cause occurred in the course of the experiment at that one time; nothing about internal validity tells us what the cause was, what the effect was, or how the causal effect might generalize to other settings. So if you use internal validity as your selection criterion for which papers to include in your meta-analysis, you have no guarantee that your meta-analysis is coherent. That's why I loved Professor Sena's presentation earlier: it's really an exemplar of how to do meta-analysis in a coherent way. So that was wonderful. Oops.

As you all know, if I wrote a paper that used observational data and a correlation coefficient to make a causal claim, and I submitted it to a political science journal, the referees would not only reject it, they would angrily reject it: how dare this author write such terrible things? And that's fine; I don't have a problem with that. But what we want is to make people, referees and all of us, equally forceful in our concern about construct and external validity, because all three of those assumptions, internal, construct, and external, are necessary for deduction, for generalization, and for the accumulation of scientific knowledge. So thank you; that's where you can see our paper. And I yield back my time. That's what we say in political science.

Thank you for finishing on time. That was excellent. We have two minutes for questions, so please use the microphones if you have any questions. Jonathan Fuller, NIH.
I'm just wondering whether your claim is that we should be using deductive methodologies to establish things like construct and external validity, or whether you're just pointing out that we can't make assumptions about external validity and purport to be deductive unless we consider those assumptions. And what methodologies are you advocating for establishing that?

Yeah, thank you for asking. I didn't have time to say why our applied statistics friends get upset with our paper. In applied statistics, people are very comfortable using randomization to warrant internal validity, because it's part of the apparatus of making inferences. But what we say in the paper is that the only way you can warrant assumptions of construct and external validity is through qualitative research. What that means is that good quantitative science would usually be interdisciplinary, working with teams that are also good at qualitative methods. And the reason they get so mad is that what they want to do is just collect their data, make their assumptions, push a button, and make a causal claim. But doing good causal research is a lot harder than that. So yeah, thank you for asking.

Thank you. Thanks, really interesting. Cristobal Young, Cornell Sociology. OK, so I love this, and I just want to share an anecdotal piece of research: a study on the effects of migration on people's future labor market experience. Does migration help people's livelihoods? A tricky causal question. They found a volcano in Iceland that erupted in the 1970s and led maybe 700 people to move off a tiny little island to the mainland because lava covered their houses. What they showed is that for the young people the move really helped their labor market futures, but for the older people it didn't. And I was just sort of like, wow, I guess we really don't care about external validity anymore.
But you know, I'm really curious: what do you think we ought to be doing differently? How can we build on this to make that external validity claim? What is it, like a checklist? What kind of criteria?

Yeah, it's just what I mentioned to the previous question. First of all, be more self-conscious about warranting assumptions of construct and external validity, even if it's just from your intuitions, and be transparent about that: these are my intuitions about how the constructs match my variables and about the conditions that enable my cause to transport. And then if you can do that in combination with good qualitative methods, that gives it even better warrant. So yeah, I think I'm out of time, but thank you so much. Thanks for your questions.