We see more and more polls being used for policy purposes, but often they don't explain how they're conducted. What kind of methodology is necessary to get a really valid poll?

Yeah, this is a fundamental point. I mean, I like to say that running with data is like running with scissors. It's pretty dangerous. The fundamental step, before we accept any data, is to understand how these data were obtained, by what methods. Did they adhere to the basics of probability sampling? That's informed by the theory of inferential statistics, in which we know that from a randomly selected subset we can draw inferences and conclusions about the full set. This is essential. And therefore it's really important, in looking at any data, first to stop and ascertain how it was obtained. We need a full, detailed statement of the methodology: a description of the sampling approach, the fieldwork approach, and the data processing approach. We provide that in our surveys, and I really call upon any other data producers to do it, and data consumers to stop and check. It's too easy to get seduced by numbers without checking out their veracity, and it is essential to do so.

What is the difference between simply having a poll, a statistically valid poll, a probabilistic poll, and one which uses the proper inferential methodology?

Well, first, a probability sample is fundamentally required for representative survey research. And I've seen surveys out of Afghanistan and elsewhere that are based on convenience samples, which are in effect spurious and really can mislead as easily as lead to an appropriate conclusion. Given good methodology, we still have data that are only as solid as the questions asked and our ability to understand the answers. A number with a percentage sign is in and of itself a commodity. You can get them anywhere, and they don't take you anywhere.
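The inferential-statistics point above, that a randomly selected subset supports conclusions about the full set, can be sketched in a few lines. This is an editor's illustration with invented numbers, not anything from the interview:

```python
import random
import statistics

# Hypothetical population of 100,000 people, 62% of whom hold some opinion.
# (All figures here are invented for illustration.)
random.seed(42)
population = [1] * 62_000 + [0] * 38_000

# A simple random sample gives every member an equal chance of selection --
# the "basics of probability sampling" the interview refers to.
sample = random.sample(population, 1_000)

estimate = statistics.mean(sample)        # sample proportion
true_value = statistics.mean(population)  # 0.62

# The sample estimate lands close to the population value, within sampling
# error; a convenience sample carries no such guarantee.
print(round(estimate, 3), true_value)
```

The same draw from a convenience sample (say, only people easy to reach in one city) can miss the population value by any amount, which is the sense in which such results are "spurious."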
What's essential is looking at what was asked: the full questionnaire, the context of the question, the parameters, whether it was understood by the respondents, how clear it is. Questionnaire design is in many ways the forgotten stepchild of survey research and is, again after methodology, absolutely essential in understanding what you've really learned. Then the next step is the analysis: to work across the data set and draw conclusions from your data, informed not only by data that are statistically significant and by differences that are meaningful, but by a broader view and understanding of it, not just taking a number and latching onto it. A last point is that you see a number and a percentage sign and you think you're talking about laser surgery on your eyeball here, and you're not. The survey, even a well-conducted one, is an estimate, and that's the best it can be. We can estimate population values, sentiments across a population, with good accuracy, but we can't get carried away with small changes in the data. We shouldn't overdo it, right? The purpose here is to get a sense, a good estimate, of attitudes and beliefs and perceptions across a population. Good survey research is invaluable for that purpose, but can't be pushed too hard.

We see a lot of polls, even in the U.S. government, that have no source, no methodology, no explanation of how their results were reached. How do you know whether any poll like that is valid?

You don't, and the answer is to disregard it. You know, I've been at the forefront of survey standards in the media for many years and put together, a long while ago, a system at ABC News where we vet any survey research presented to the network for its validity and reliability before we report it. Any survey research reported at ABC News goes through this process first, and those for which there is non-disclosure are rejected.
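The "a survey is an estimate" point above can be made concrete with the standard margin-of-error calculation for a proportion from a simple random sample. The poll figures below are hypothetical:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with n respondents,
    assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll: 55% support measured on n = 1,000 interviews.
moe = margin_of_error(0.55, 1_000)
print(f"55% +/- {moe * 100:.1f} points")  # -> 55% +/- 3.1 points
```

So a reported shift from, say, 55% to 53% in two such polls sits inside the sampling error, which is exactly why small changes in the data "can't be pushed too hard."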
There's no way to know if research is valid and reliable without being able to inspect a detailed statement of the methodology, the full questionnaire, and indeed the marginal or overall results themselves, because that's where surveys go wrong: in bad methodology, in suboptimal questionnaire design, or in cherry-picked or misleading analysis.

Now we see some other polls that talk about being statistically valid, but don't indicate at all whether they are probabilistically valid. Does "statistically valid" really mean anything in getting accurate poll results?

It means something. It's a place to start. As long as you have a probability sample, you can ascertain whether the differences in data, or the changes over time, are statistically significant. But you can have statistical significance without having practical or meaningful significance, and that's where you have to apply your individual analytical skills, look across the dataset, and come to a broader conclusion. So statistical significance is necessary to ascertain whether you have meaningful or practical significance in data, but in and of itself doesn't provide it.

If you look at a poll result, either in the media or in government reporting, do you have some tests which would indicate whether it is credible? Is there some easy way to have sort of a checklist of what you should look for to decide whether the poll is being properly presented?

Well, the first place to start is with full disclosure, because if you don't have full disclosure of how a survey was conducted, what was asked, and what results were obtained, then you have no basis for making the next steps of judgment and figuring out if it's reliable, if you can hang your hat on it. If you don't have disclosure, you should stay away from the report. If you do have disclosure, then we get a little more technical.
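The gap between statistical and practical significance described above can be shown with a standard two-proportion z-test. The samples and percentages here are invented: with a large enough sample, even a trivial one-point difference clears the significance bar, while the same difference on a typical sample does not.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z-statistic for the difference between two independent sample
    proportions, using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 51% vs 50% on hypothetical samples of 100,000 each: statistically
# significant at the 95% level (|z| > 1.96) ...
z_big = two_prop_z(0.51, 100_000, 0.50, 100_000)

# ... but the identical 1-point gap on samples of 1,000 each is not.
z_small = two_prop_z(0.51, 1_000, 0.50, 1_000)

print(round(z_big, 2), round(z_small, 2))
```

Whether a one-point gap *matters* is the separate, analytical judgment the interview calls practical or meaningful significance; no test statistic supplies that.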
We have to look at the methodology and determine if it was appropriate, if it met the standards of probability sampling; look at the questionnaire and see if it is well designed, uses optimal question wording and ordering, and avoids bias; and look at the analysis. It's a time-consuming process, but it is so easy to get seduced by numbers that it's essential to stop and pick them apart, because we can be misled just as easily as led by data if they're obtained by inferior methods.

You talked about being seduced. How often do you see another phenomenon, which is called cherry-picking? You only pick the results that support your point.

Very often, and that's why one of the other things we look at when we evaluate survey research is the sponsor of the research. Does that sponsor have a dog in the fight? It's helpful to know that. It doesn't determine in and of itself whether the data are cherry-picked, but it can be a good indicator of it. So knowing the sponsor, and the motivations of the sponsor in producing the research, is helpful as well, although where the rubber meets the road is in the product. And you have to look at the conclusions and the analysis, look at the data, and see if one is supported by the other. When we do our work, for example for a media client, for ABC News and for others, we release a highly detailed description of the methodology, the full questionnaire, and the marginal results, and I think these are essential in the evaluation that has to be done.

You also use controls, don't you? A variety of questions to make sure that you don't bias the results; you have oversampling in some cases. Are there other ways to make sure a poll is valid?

Sure, there are substantial controls at every step of the way. We spend a lot of time combing through the sampling plan, the sample sources, and the sample design to make sure it's appropriate and the best it can be. In terms of fieldwork, our interviewers are highly trained and travel in teams.
There are back checks, both in person and, when possible, by telephone, to confirm some of the answers and back-check some of the questions. A substantial number of our interviews in Afghanistan, I believe 30 percent, are back-checked in this way, an important step. And then in data processing, there are stringent quality controls in place for assessing the data and looking for straightlining or other departures from the norm in data production that would indicate a problem with an interviewer's performance.

Gary, thanks very much, but let me just ask you: if you were in government, a student, a policy user of these polls, do you have any last pieces of advice as to what to look for?

Well, I would say that good data are absolutely invaluable. They give us a window on the attitudes and preferences and behaviors of populations in a way that is otherwise impossible to know and obtain. At the same time, we need to approach data skeptically and with a full commitment to ascertain their reliability before we run with the results. That's essential, and that's the takeaway I would give. It's a lot more challenging. It's a lot harder to stop and check it out first, but particularly with something as powerful as data, it's essential.

Gary, thank you very much.

Thanks, Tony.
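One of the data-processing checks mentioned above, flagging "straightlining," can be sketched as follows. This is an editor's illustration, not the firm's actual procedure; the respondent data and the 0.9 review threshold are invented:

```python
def straightline_rate(grid_answers):
    """Share of a respondent's grid answers that match their most common
    answer; 1.0 means every item got the identical response."""
    most_common = max(set(grid_answers), key=grid_answers.count)
    return grid_answers.count(most_common) / len(grid_answers)

# Each row: one respondent's answers to a hypothetical 10-item
# agree/disagree grid (scale of 1-5).
interviews = {
    "r001": [4, 2, 5, 3, 1, 4, 2, 5, 3, 2],  # varied answers
    "r002": [3, 3, 3, 3, 3, 3, 3, 3, 3, 3],  # pure straightline
    "r003": [5, 5, 5, 4, 5, 5, 5, 5, 5, 5],  # near-straightline
}

# Flag interviews at or above an (arbitrary) review threshold for follow-up,
# e.g. a back check on that interviewer's other work.
flagged = [rid for rid, answers in interviews.items()
           if straightline_rate(answers) >= 0.9]
print(flagged)  # -> ['r002', 'r003']
```

A flag is only a prompt for review: some respondents genuinely hold uniform views, which is why the interview pairs this kind of screen with in-person and telephone back checks rather than relying on it alone.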