So, today we're going to discuss how to design a questionnaire. As you know, in survey research the survey instrument is the questionnaire, and how you design it, what its elements are, what should be avoided and what is required is what we are going to discuss. Let's go to the presentation. I'll start with the importance of survey research. According to Saris and Gallhofer, in their book Design, Evaluation and Analysis of Questionnaires for Survey Research, about 39.4% of the work in economics is done by survey, about 59.6% in sociology, 28.9% in political science, a huge share in psychology, and almost 95% in public opinion research. So we can understand that across many of our allied fields, survey research is an important method, and even in media and communication the majority of research is through surveys. And in a survey, the questionnaire is the most important part. Now, a word about terminology. If the instrument is intended to be administered by an interviewer, it is known as an interview schedule. So a questionnaire could be something the respondent fills up themselves, or an interviewer could be asking the questions, face to face or over the telephone, and recording the answers. For our purpose we are including both under the single term "questionnaire": we use the same term for the interview schedule and for the self-completed questionnaire. The questionnaire, then, is basically the medium of communication between the researcher and the subject.
At times, as I just said, it is administered on the researcher's behalf by an interviewer: your field worker sits with the interview schedule. But even then it remains the medium of communication between the researcher and the person being interviewed. And here lies a problem. If we are dealing with questionnaires that are poorly phrased (we'll explain what that means), or whose question order is wrong, the results will be extremely misleading. These are the things that have to be avoided, and I'll keep returning to them as we go along. Whatever a badly designed instrument suggests may not be the true outcome. That matters because you may do everything else right, the sampling and all the other processes, but if the instrument itself is faulty, you will get misleading, wrong results. So it is important that the survey instrument, the questionnaire, is devised very scientifically. Here is what Sheatsley had to say about surveys in 1983, and this is very important; before we start, we must realize what we are dealing with. Questionnaires are usually written by educated persons who have a special interest in and understanding of the topic of the inquiry, and because these people consult other educated and concerned persons, it is much more common for questionnaires to be overwritten, overcomplicated, and too demanding of the respondent. So we must be careful not to overwrite, not to overcomplicate, and not to make the questionnaire too demanding. There has been a lot of research on questionnaires themselves, and right at the beginning I want to emphasize the importance of simplicity, and, to exactly the same degree, of objectivity.
McCannell, in 1974, said that you must approach surveys like an anthropologist approaching an alien culture, as a person on the outside, and that your established frames of reference should be seen as a hindrance. You should not fall back on your own background or your established frames of reference. Basically, the designing of a survey begins with the choice of a topic. You have decided on a topic, and based on that topic you have to ask certain questions, so this decision influences how the questionnaire is devised and how the survey proceeds. The choice of topic is the most important part of designing a survey, because it leads me to the important variables: what are the variables I want to find out about, the variables for which data has to be collected? That is the second part of designing a survey, or designing a questionnaire. The choice of operationalization is again very important. Take, for example, the question "Do you like cricket?" It looks straightforward, and when we start talking about questionnaires many people think, oh, questionnaires are easy, anybody can ask questions, there's no big deal in it. But consider this very small question. "Do you like cricket?" might mean two different things to two different people. To one class of people it could mean "Do you like watching cricket on television?" and to another it could mean "Do you like to play cricket?" So how the same question is operationalized depends on how we view our respondents and on whether we can really regard our questions as simple. This choice of operationalization is a very important part of designing the survey instrument. After you have devised it, you have to test its quality.
That is done through a pre-test and a pilot test, which we'll discuss as we go along. Then we have the final questionnaire, from there we decide the sampling frame and the sample design, and finally it is administered through our field workers to the people we regard as our respondents. So there are basically six or seven steps in this design process, and we'll be talking about most of them as we go along. Which questions to ask? That is very important. As I said, every question should be related to the survey's objectives, because if it does not, we must have a very strong reason before asking ourselves why that question should be in the survey in the first place. Your survey plan has to follow from the questions that arise out of your research objectives: from the research objectives we develop the survey plan, and based on that plan I frame my questions. For any question that perhaps should not be there, I should ask: what will I do with that data? After asking questions I'll get a lot of data out of the survey, so will that data be useful in any sense or not? If I don't regard it as useful, it probably should not be in the questionnaire. Now, question contexts. If I want to find out about the behavior of people, how people behave, say, after watching a particular violent movie, then I have to formulate questions that establish what people do: what would you do, what would you not do, and so on. We'll come to the exact formulation in a later slide; right now I'm talking about the different contexts that are available. Another context is belief questions: what do you think is true, what do you believe? There are ways of framing these, but as I said, the question context could be behavior.
It could be belief, establishing what people think is true or what they regard as their own belief. Then there are knowledge questions: how much do you know about a certain issue or a certain case? We could use a survey instrument to find out people's knowledge, or the accuracy of their beliefs. Or we could try to find out their attitude, what they think is desirable, whether they agree with a particular contention, do not agree, or completely disagree. Another kind of question is about attributes, their characteristics: their age, their income, the time they spend on online classes, and so on. So these are the five main question contexts; there are others, but these are the main ones. You could be asking about behavior, belief, knowledge, attitude, or attributes. As for form, questions can be straightforwardly interrogative, using the five W's and one H: why do you think, how much time do you spend, and so on. A question can be an interrogative sentence. It can also be an imperative sentence, where you give an instruction: mark the closest of the following, or, among the following sentences, mark what you think is closest to your belief. Or it can be declarative. I don't want to get into the basics of grammar, but the same question can be formulated in many different ways: straightforward questions, straightforward instructions, or straightforward declarations. Those are simple ways of describing how to ask questions. Questions can also be open-ended or closed-ended. In open-ended questions, we do not give the respondents any options.
They are supposed to answer the question in their own words. It could be a paragraph, or, in an online survey, a text box where they write down their views. What kind of questions should be open-ended and what kind closed-ended has to be decided very clearly, and there have to be strong reasons for that decision. In closed-ended questions I provide options: among these, which do you think is the reason for Joe Biden's victory? There are arguments for and against providing options, because if you provide the options first, you are priming the respondents to think of those options as the important ones. So very often a questionnaire starts with an open-ended question and then moves to closed-ended ones. Some open-ended questions can also be pre-coded, especially in interview schedules rather than self-completion questionnaires. There, the interviewer has a list of possible answers with codes. The options are not shown to the respondent, but the interviewer sitting with the interview schedule knows the list of possible responses and codes the answer accordingly. So a question can be open-ended, closed-ended, or open-ended with pre-coded responses. Next, a finding from Belson in 1981, which I want to use to emphasize how the wording of a question can cause people to misinterpret the question itself, especially if the researcher himself or herself is not clear about what is being asked. The question looks very innocuous: "How many days of the week do you usually watch television? I mean weekdays and Saturdays and Sundays, of course, and daytime viewing as well as evening viewing."
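To make the idea of pre-coding concrete in software, here is a minimal sketch of an interviewer-side code list. The question, the categories, and the codes are all hypothetical illustrations, not taken from any actual interview schedule:

```python
# Hypothetical pre-code list for an open-ended question such as
# "Why do you watch this programme?". The respondent only talks;
# the interviewer listens and records the matching code.
PRECODES = {
    1: "entertainment",
    2: "information / news",
    3: "habit / routine",
    4: "family watches it",
    99: "other (write in)",
}

def code_response(interviewer_choice: int) -> str:
    """Return the category label for the code the interviewer ticked;
    unknown codes fall back to the 'other' category."""
    return PRECODES.get(interviewer_choice, PRECODES[99])

recorded = code_response(2)
```

The point of the sketch is simply that the code frame lives with the interviewer, never with the respondent, so the open-ended character of the question is preserved.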
This was an experimental situation where the question was asked of people. Fourteen out of 59 failed to interpret it as a request for a numerical answer; they did not understand that they were being asked for a number of days. A large majority did not interpret "days of the week" as the researchers meant it. Five persons out of 59 did not interpret "you" as referring to the respondent; they thought the question was about their family. Fifteen persons interpreted "watch television" as having the television set on rather than actually paying attention to a program. All these subtle differences are important for researchers. And 28 out of 59 did not interpret "usually" as intended. So, as you can understand, through a proper experiment it was shown that wrong wording of questions causes a lot of confusion among respondents, and that is something all of us have to be careful about. An important pair of concepts to remember when designing questionnaires is validity and reliability. Very quickly, I'll explain what validity is. Validity means the questions measure what they are supposed to measure. Suppose I ask people whether they have bought a particular book: did you buy the book called India Connector? They might say yes or no. But just buying the book does not mean they are reading it; we cannot call them readers. Buying and reading are two different things. So if I use a question that asks whether they bought the book and I take that to mean they are readers, I will be wrong. That is the question of validity: your instrument should measure what it is supposed to measure.
Reliability means that if the question is posed to the same person on different occasions, or to similar persons, it will get similar responses; it is not that the same person will respond differently to the same question at different times. So validity is about whether the question measures what it seeks to measure; reliability is about whether it is consistent across similar respondents and consistent with the same person over time. Both have to be understood while designing questionnaires. There are threats to validity. A lot of the time, people want to give answers that seem politically correct or that, according to them, are socially most desirable. Especially when talking to an interviewer, but even when writing answers themselves, they do not want to be seen as offensive or as doing things society does not regard as proper. It is the researcher's job to make them comfortable, to convey that such a reaction is perfectly normal, and to ensure that this problem of social desirability is taken care of. As researchers we must always be cognizant that respondents might not give the correct or honest answer but rather the answer they consider socially desirable. Another problem: if we ask respondents to estimate, say, how much time they spend on something over a month, they often do not have that information and cannot estimate it accurately. So if your question asks them to estimate, we may not be able to rely on the response, because it may simply be whatever first comes to mind. That again is a problem with faulty questionnaires. And then there are non-attitudes: respondents do not always have attitudes.
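One common way reliability is quantified in practice is test-retest correlation: the same item is administered twice, and a high correlation between the two sets of scores suggests a consistent item. A minimal stdlib-only sketch, using made-up scores for five hypothetical respondents:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# The same five respondents answering the same 1-5 item two weeks apart
# (illustrative numbers only).
time1 = [4, 2, 5, 3, 1]
time2 = [4, 3, 5, 2, 1]
r = pearson_r(time1, time2)  # a value near 1 suggests a reliable item
```

This is only one operationalization of reliability; internal-consistency measures such as Cronbach's alpha are another, but the test-retest idea maps most directly onto the definition given above.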
If you assume they will have an attitude, a view on whether they agree or not, remember there are many things on which they simply have no view, are not bothered about, or are not interested in. They might answer anyway because they want to complete the questionnaire and be good to you, but then the response will not be proper. They may not have opinions on all topics. So we have to be aware of these problems: social desirability, inaccurate estimation, and non-attitudes. Before we start writing, we must also be very clear about the instructions we provide to our respondents. Use everyday language: too much technical language may not make sense to the person answering. Write in complete sentences, and that is very important. In the SMS age we are used to writing single words; often when we post a poster for a webinar, people ask one-word questions, "link?". They do not write a complete sentence. That will not do in a questionnaire. You can't just write "age?"; you have to be courteous and write "Please write down your age." Writing bare words might save you energy, but it does not make good sense in a questionnaire. Ask one question at a time, very, very important; we'll talk about the problem of double-barreled questions, because if you ask two different things in the same question, you will not get the right answer. Be consistent with your response options: if you keep changing them throughout the questionnaire, that will confuse respondents. Use consistent wording and phrases throughout. As I explained before starting this presentation, "questionnaire" here includes the interview schedule.
So throughout, I must be consistent in using the same phrase. Watch out for leading questions, questions that push the respondent toward an answer: "Do you think Joe Biden is a good president?" When you lead them, they might just agree and give you a shortcut answer, but the job of a researcher is to find the correct, honest response. So be careful about leading questions. As I said, one major problem with questionnaires is that they are often double-barreled, asking two questions in one: "Do you think Joe Biden won properly and will be a good president?" You are asking two things, so respondents may get confused and answer only the first part. Never use double-barreled questions. Double negatives, using two negatives to make a positive statement, are also to be avoided: "Don't you think he should not have resigned?" has two negatives going there, and it confuses people. Putting two negatives in a question is a strict no-no. Then there are implicit negatives: certain words such as "forbid", "bar", or "recall" are implicitly negative, and when you use such terms you should understand that the respondent will take your question as a negative one. And do not provide overly long lists in a dropdown. A long list for "Which state do you belong to?" is acceptable, but for other things, if you provide a very long list, respondents may not take the time to go through it, and the answer they give you may not be proper. So: make sure questions are not double-barreled, not double-negative, not implicitly negative, and that overlong lists are avoided. Now I'll give you a checklist for questions. First of all, is the language simple?
Can the question be shortened? Is the question double-barreled? If it is, split it into two questions. Is the question leading? Are you offering an opinion yourself, or steering them toward an answer, as in "But isn't this a long movie?" If you lead, you do not get the true response. Is the question negative? As I keep repeating, negative questions are to be avoided. Is the respondent likely to have the necessary knowledge? Do they have the background knowledge we assume for that particular question? Do the words have the same meaning for everyone? This matters because we must present the same stimulus to all respondents; if the same question means different things to different people, there will be questions of reliability, as we just discussed, and a lot of measurement error, which I'll talk about later. Is there a prestige bias? That is much the same as the social desirability problem I mentioned: respondents may answer according to what is prestigious, and that bias has to be avoided in the question. Is the question ambiguous? Is the question too precise? If it is too precise, the answer will be very short. Is the frame of reference for the question sufficiently clear? Does the question artificially create opinions? Are you yourself creating opinions, which a researcher should never do? Is personal or impersonal wording preferable, third person or "you"? As we saw a moment back, if you use "you", respondents might take it to mean their family as well, so at times we have to be very explicit about what we mean. Is the wording unnecessarily detailed or objectionable? If so, we have to change it. Does it have a dangling alternative?
I mean, if you start with a big subsidiary clause and only then come to the question, that puts people off; come straight to the point. Does the question contain gratuitous qualifiers? For example, "Do you agree with this, even if it leads to a decline in standards?" If you attach such a qualifier, it again becomes a kind of leading question: "Do you support Biden, even if it leads to a danger for American society?" These gratuitous qualifiers need to be avoided. Is the question a dead giveaway, one with only a single possible answer? If you do not get variability in a question, then as a researcher I am not doing my job right; with such questions we get only one kind of response. Another kind of question is the ranking question, where we ask people to rank certain things. I'm sure you've heard about ordinal variables, where items are ranked from what people feel is highest to lowest. What kinds of things get ranked? It could be a product characteristic: how strong, how consistent, how sweet, and so on. It could be frequency of use: which one do you use the most, the second most, the third most. A ranking question could also be about recency of use: which one have you used most recently, which one just before that. Or about price: which one, according to you, is the most expensive, down to the least expensive. Or ease of comprehension: which ones are the easiest or the most difficult to understand. These are the kinds of questions where we ask respondents to rank items, because through ranking we get ordinal variables, and from there we do our analysis.
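Because a ranking question yields ordinal data, one routine check when such responses are captured in software is that each respondent used each rank exactly once. A hypothetical validator, as a sketch:

```python
def valid_ranking(ranks, n_items):
    """True only if the respondent assigned each rank 1..n_items exactly once.

    ranks: the numbers the respondent wrote in the boxes,
    in item order (e.g. taxation, welfare, education, immigration).
    """
    return sorted(ranks) == list(range(1, n_items + 1))

# One respondent ranking four issues: every rank used once, so valid.
ok = valid_ranking([2, 4, 1, 3], 4)
# Rank 1 repeated and rank 4 missing, so this response should be rejected.
bad = valid_ranking([1, 1, 2, 3], 4)
```

In an online form this check can run before submission; on paper instruments it becomes a data-cleaning step after entry.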
Here is an example of a ranking question, where we straightforwardly tell people: "Listed below is a set of issues that can influence the way people decide to vote in general elections. Place a 1 in the box next to the issue that matters most to you, a 2 next to the second most important, and so on; do not place the same number in more than one box." There are eight items, and we ask people to rank them. When we give this kind of questionnaire, we are assuming the respondents understand all these items, so we give it to people who have some education and who understand what taxation, social welfare support, or reducing immigration mean. Even if we are using one of the vernacular languages, we must be clear that the terms are consistent and valid, meaning they measure what they set out to measure. So that is one ranking questionnaire provided to respondents. Next is a Likert-type balanced scale, a five-point scale: "Using the scale on this card, please indicate how effective the management and staff are in carrying out their work", from highly effective to not at all effective. This is a balanced scale with a neutral point; there are unbalanced scales as well. I cannot show you all the types, but I am showing you some. Scales have known problems. One is the order effect: it is generally seen that people pick the options on the left and neglect those on the right, many times just answering the question mechanically. Another is acquiescence, the tendency to agree: unless a statement is extremely disagreeable, respondents will often tend to agree with a contention rather than disagree.
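One common software-side remedy for the order effect is to randomise the order in which a list of items (issues, brands, and so on; not the points of a Likert scale itself) is displayed to each respondent, while keeping each item's analysis code attached to its label. A sketch, assuming an online questionnaire where display order is under our control:

```python
import random

def shuffled_options(options, seed=None):
    """Return (code, label) pairs in a fresh random order for one respondent.

    The numeric code stays attached to its label, so the analysis is
    unaffected by whatever display order a respondent happened to see.
    """
    rng = random.Random(seed)           # seedable for reproducible tests
    pairs = list(enumerate(options, start=1))
    rng.shuffle(pairs)
    return pairs

issues = ["Taxation", "Health care", "Education", "Immigration"]
shown = shuffled_options(issues, seed=42)  # this respondent's display order
```

Randomising per respondent spreads any residual position bias evenly across items instead of letting it pile up on whichever item happened to be listed first.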
These effects have been measured; they are established problems with scales. Another is central tendency: people often do not want to go to the extremes, and you might notice that even when you answer such scales yourself, you tend to answer at the middle level rather than at either extreme. And another problem is pattern answering: respondents go first option, second, third, fourth, in a pattern. You might remember The Big Bang Theory, where Sheldon and Leonard give each other questionnaires and immediately spot a pattern in the answering. Such respondents do not read the questionnaire; they just create a pattern for themselves, A, B, B, C, D, whatever, and keep answering that way. These are the problems with scales we have to be careful about, and not just careful: while designing the questionnaire we must make sure we avoid them. One way to do so is to use semantic scales. If you look at this example carefully, these are seven-point scales, and you are asked, say, whether something is boring or interesting: if boring, you mark closer to one end; if interesting, closer to the other. But notice that it is not always negative on the left and positive on the right. The second item already asks whether it is important or unimportant, so at the second item the order has flipped; it is not in the same order as the first question. That is one way of taking care of the order effect I just discussed: some of the items are flipped. There are certain positive attributes on the left side, as you can see: important, relevant, involving.
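When some items are flipped like this, their raw scores must be reverse-coded at the analysis stage, so that a high number means the same direction for every item before the items are combined. A sketch for a 7-point scale; the item names here are hypothetical:

```python
# On a 7-point scale a flipped item's raw score x becomes 8 - x,
# so a 7 at the "unimportant" end maps back to 1, and vice versa.
SCALE_MAX = 7
FLIPPED_ITEMS = {"important_unimportant", "relevant_irrelevant"}

def recode(item, score):
    """Reverse-code flipped items so a high score is always 'positive'."""
    return (SCALE_MAX + 1 - score) if item in FLIPPED_ITEMS else score

raw = {"boring_interesting": 6, "important_unimportant": 2}
clean = {item: recode(item, s) for item, s in raw.items()}
```

So a respondent who marked 2 on the flipped important/unimportant item (i.e. near the "important" end) ends up with a recoded 6, in the same direction as a 6 on boring/interesting.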
And there are some that are negative, like boring, unappealing, means nothing. So we are presenting these options in different orders. This is known as a semantic differential scale. At times we might want respondents to pick more than one answer. Here, some of the adjectives are favorable, some unfavorable, some neither: "Please tick the boxes beside the adjectives that best describe you as a person. Most people choose three or four, but you may choose more or fewer if you want." So you are telling people clearly that they may pick as many as they like, while priming your respondents that you are looking for three or four responses describing themselves. From that, as I said, we are looking for variables, and from those variables I will draw certain inferences; through the questionnaire, the survey instrument, we are looking for these variables. At times, where absolute numbers are not practical, we provide ranges. It could be a range of age or of income, but we have to ensure that the most popular values fall in the middle categories and the rest at the extremes. We must create the ranges so that the most common answers, whether income, age, or anything else, fall in the middle; that is important for getting something like a normal distribution. We also have paired comparisons, and at times pictorial scales, where you show a thermometer and ask respondents to mark a degree of warmth, or you provide emojis from laughter to sadness and ask them to mark one. It depends on the kind of respondents: if your respondents are much younger, you might go for pictorial or graphic scales.
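Turning an exact answer such as an age into one of these ranges can be sketched as a simple lookup. The brackets below are hypothetical, chosen so that the expected bulk of respondents lands in the middle categories with open-ended extremes on either side:

```python
# Hypothetical age brackets: the anticipated most common ages (25-44 here)
# sit in the middle categories, with broad categories at the extremes.
BRACKETS = [
    (0, 24, "under 25"),
    (25, 34, "25-34"),
    (35, 44, "35-44"),
    (45, 59, "45-59"),
    (60, 150, "60 and above"),
]

def age_bracket(age):
    """Map an exact age onto the questionnaire's response category."""
    for lo, hi, label in BRACKETS:
        if lo <= age <= hi:
            return label
    raise ValueError("age out of range")

category = age_bracket(30)
```

The same pattern works for income or any other numeric attribute; only the cut points change, and they should be set from whatever is known about the target population.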
At other times you go for numeric scales. There is also the Stapel scale, where an item is put on a scale from minus 5 to plus 5 and respondents are asked to rate each statement on it. So there are a lot of scale types to choose from. Now we are going to talk about the ordering of questions, which is very important. As I said, one approach is to put the open-ended questions first and the closed-ended ones later. We start with general questions and move from the general to the more specific. At times there is funneling: if you satisfy a condition, you go to the next question, otherwise you go to another section. "Do you watch The Big Bang Theory on television? If yes, answer the next question; if not, skip to the next section." These questions funnel respondents in a particular direction. We also have to use sections that flow naturally, because the questionnaire has to be divided into sections; we'll talk about ordering in more detail right now. As I just said, survey respondents are sensitive to the context in which a question is asked. For example, if you provide options first, that context stays with respondents when they answer a subsequent open-ended question. The meaning of any question can be altered by a preceding question. If, say, I first talk to people at length about government spending, about how the government spends on welfare measures and so on, and then ask about taxation, they will probably not feel bad about taxation, because the order of questions makes them think about government spending first. So how you order the questions is very, very important, and if the ordering leads respondents to a particular answer, that order has to be changed.
Otherwise you will end up with a faulty instrument that gives you misleading answers. Generally we commence with questions the respondents will enjoy answering; even in normal interviews, the first thing is to make the interviewee comfortable. These should be easily answered, factual questions that they don't have to think about too much. There are questionnaires that start with demographic questions such as age, marital status, income, et cetera, but the books on questionnaire design and survey research very often advise us to put these at the end or at a later part: do not start with demographic questions, especially since people are sensitive about questions on their income. Ensure that the initial questions are obviously relevant. We said to make them enjoyable for the respondents, but they also have to be relevant to the stated purpose of the survey; if they are not, we are not doing a good job. Then move from easy questions to more difficult ones, and from concrete to abstract: start with questions about facts and figures and knowledge, and then go to abstract questions about attitude, behavior, and so on. Open-ended questions should be kept to a minimum and, if possible, placed towards the end. Group questions into sections, putting similar kinds of questions together; this helps structure the questionnaire and provides a better flow. Make use of filter questions. I just gave one example: are you a viewer of this program or not? Because my research objective could be about talking only to people who view that particular program.
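In an online questionnaire, a filter question like the one above becomes explicit skip logic: the answer to the filter decides which question the respondent sees next. A minimal sketch; the question IDs and routing here are hypothetical:

```python
def next_question(current_id, answer):
    """Hypothetical skip logic for a filter question.

    Q10: "Do you watch this programme?"  A "no" skips the follow-up
    questions about the programme (Q11 onward) and jumps to Q16.
    """
    skips = {("Q10", "no"): "Q16"}            # filter-question routing
    defaults = {"Q10": "Q11", "Q11": "Q12"}   # the normal linear flow
    return skips.get((current_id, answer.lower()), defaults.get(current_id))

route_nonviewer = next_question("Q10", "No")   # non-viewer jumps ahead
route_viewer = next_question("Q10", "yes")     # viewer continues in sequence
```

On a paper instrument the same logic is carried by the printed instruction ("if no, go to question 16"); survey software just enforces it automatically.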
When using a series of positive and negative items to form a scale, mix up the positive and negative items. As I just showed you, when you give them a scale, it should not always be positive on the left and negative on the right, or negative on the left and positive on the right. You have to mix up the positive and negative items to help avoid an acquiescent response set, so that respondents do not end up just answering in a particular pattern; they actually read each question and then answer. So it's important that we mix up the positive and negative items, which we saw some time back. And wherever possible, use a variety of question formats so that the questionnaire remains interesting. At the end of the questionnaire, the respondents should feel that it was interesting, that they liked answering it; they should finish with that kind of feeling. In designing a questionnaire, we make many assumptions. One assumption is that we have an image of the respondent in mind: he would know this, he would know that, and so on. But often this information might not be accurate. When we pose choices to the respondent, we have in mind some notion of the relevant dimensions; we assume he or she might answer in a certain way. So there are a lot of assumptions and a lot of decisions we make about our respondents, and we must have a proper way of testing them. That is why we'll be talking about pre-testing in a moment. Even the most well-crafted questions, questions you might think are very well crafted, have to go through a pre-test. So what is a pre-test? You select a small sample of your target population. For example, if you're studying students of Calcutta University, you take a small section of students, have them complete the questionnaire, and ask them questions to provide feedback about the questions.
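One practical consequence of mixing positive and negative items is that the negatively worded items must be reverse-coded before you sum them into a scale score. A minimal sketch, assuming a 1-to-5 Likert scale; the item names and which items are negative are invented for illustration.

```python
# Reverse-coding sketch for a mixed-polarity Likert battery
# (1 = strongly disagree ... 5 = strongly agree). Items are made up.

NEGATIVE_ITEMS = {"item2", "item4"}   # negatively worded statements
SCALE_MAX = 5

def scale_score(responses):
    """Sum responses after flipping negative items (6 - x on a 1-5 scale)."""
    total = 0
    for item, value in responses.items():
        if item in NEGATIVE_ITEMS:
            value = (SCALE_MAX + 1) - value   # 1<->5, 2<->4, 3 stays 3
        total += value
    return total

# An acquiescent respondent who answers "5" to everything no longer
# gets the maximum score once the negative items are flipped.
print(scale_score({"item1": 5, "item2": 5, "item3": 5, "item4": 5}))  # 12
```

This is also why a pattern of identical answers straight down the page is a warning sign in the data: with mixed polarity, a genuinely consistent attitude should produce different raw answers on positive and negative items.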
In a moment, on a slide, I will show what kind of questions to ask in a pre-test. But it is very, very important to give a pre-test to a small number of people. Every big survey organization does this, including the Pew Research Center. The Pew Research Center says that just a day before the survey, they give this pre-test to a small section of people; at times the participants are not even told that it is a pre-test, they are informed as if it is the actual survey, and they are asked questions after they have completed it. That is very important for learning about the technical elements: whether the structure works, whether you're getting the right information, et cetera. People who participate in the pre-test will not participate in your actual survey; they will be left out. So what are the items we are looking out for in the pre-test? We are looking for variation: the same question must produce some variation, because just one kind of answer will not be enough. Whether it means the same thing to everybody. Whether people find it difficult. Whether they find it interesting and give their attention to the questionnaire. Whether the sections flow naturally. Whether the order of questions, which we have just spoken about, is correct. Whether there are patterns that respondents fall into when answering; if there is a pattern to the answers, we have to break those patterns. The timing, the time taken for completing the survey, is very important: if it's a very long survey, your respondents might not be interested in completing it. And I've already spoken about respondent interest and attention; I am emphasizing it again.
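The first of those checks, variation, is easy to automate once the pre-test answers are in. A small sketch that flags items where one answer dominates; the data and the 90 per cent cut-off are arbitrary illustrations, not a standard.

```python
from collections import Counter

# Flag pre-test items with too little variation: if one answer accounts
# for more than `threshold` of the responses, the item tells us little.
# The 0.9 cut-off and the data below are illustrative.

def low_variation_items(responses_by_item, threshold=0.9):
    flagged = []
    for item, answers in responses_by_item.items():
        most_common_count = Counter(answers).most_common(1)[0][1]
        if most_common_count / len(answers) > threshold:
            flagged.append(item)
    return flagged

pretest = {
    "q1": ["yes"] * 19 + ["no"],                   # 95% identical -> flagged
    "q2": ["agree"] * 10 + ["disagree"] * 10,      # healthy variation
}
print(low_variation_items(pretest))  # ['q1']
```

An item flagged this way is not automatically bad, but it prompts the question the lecture raises: if nearly everyone answers the same, what is the question measuring?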
So after that comes, again, the pre-test or pilot test. If most people give similar answers to a question, it will be of little use to us, as I just told you. We have to ensure that respondents understand the intended meaning of the question, so that we take care of the problems of validity. If two questions measure virtually the same thing, then one question is redundant. Unless you're doing factor analysis or something similar, which is a different issue, you have to drop that question. If you're designing the questionnaire to build a scale or an index, find out from the pre-test or pilot test whether you are actually able to construct that scale or index. If there is an item which does not belong to the scale, or which contributes very little to it, that item should be removed; it should not be part of the questionnaire. So these are the questions, as I said, on which we look for feedback from the people who answered. These are the questions we ask the respondents who took the pre-test; we ask them to give us feedback on their impressions of the questionnaire itself. We ask them whether there was any question they were not sure how to answer, and which questions those were. Or you could ask them about a particular term you used in a question and what they think it means. Or you can ask: when I asked you the question about the quality of your neighbourhood, what sort of things did you consider? Are there any questions you think people will find difficult to answer? If yes, which ones? You are trying to find out these problem areas, because when you decide on a questionnaire, you do not simply send it out.
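The "does this item belong to the scale?" check is usually done with an item-total correlation: correlate each item with the sum of the other items, and treat a weak correlation as grounds for removal. A hand-rolled sketch follows; the pilot data, the item names and the 0.3 cut-off are all invented for illustration, and in practice one would use a statistics package.

```python
# Corrected item-total check on pilot data: correlate each item with the
# sum of the OTHER items; weakly correlated items are removal candidates.
# Data and the 0.3 cut-off are illustrative only.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def weak_items(item_scores, cutoff=0.3):
    """item_scores: {item: [score per respondent]}; return items below cutoff."""
    weak = []
    for item, scores in item_scores.items():
        rest = [
            sum(other[i] for name, other in item_scores.items() if name != item)
            for i in range(len(scores))
        ]
        if pearson(scores, rest) < cutoff:
            weak.append(item)
    return weak

pilot = {
    "item1": [1, 2, 3, 4, 5],
    "item2": [1, 2, 3, 4, 5],      # tracks the others
    "item3": [3, 5, 1, 4, 2],      # noise: contributes little to the scale
}
print(weak_items(pilot))  # ['item3']
```

This is the quantitative counterpart of the lecture's point: an item that contributes very little to the scale should not survive into the final questionnaire.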
You do not simply set it rolling out into the field at that very moment. You have to work on the first draft, and it is always only a first draft. These are the things we have to take care of in the pre-test phase. There are also a lot of sensitive questions. We must be very careful about avoiding them, or at least putting these sensitive questions in a sensitive manner. Oftentimes, survey questions are available through various published compilations; every good survey worth its name will put its survey instrument right there on the internet. So when you are designing a questionnaire, you should consult these sources, and that will save you a lot of time and effort. Just before I end, I will bring back the questionnaire design checklist once again. Are the research questions clear? What content are we measuring: an attitude, an attribute, a belief, knowledge? We explained what all these are in our earlier slides. So what are we measuring? If the question is designed to tap an attitude or belief, what do you want to know about that attitude or belief? Are you looking for its direction, whether it points a particular way or not? Or how extreme it is? Or how intense that attitude or belief is? You have to be very clear about the questions you want to ask. This is a checklist that is very important for you to have. And again: is each question reliable and valid? Does it properly provide the answer to the research question? Is it reliable? Will it be valid in different contexts? Is it sensitive to variation; does it provide variability? Is it likely to achieve a good response rate? I need a good response rate; I do not want a lot of 'do not know' or 'can't say' kinds of answers. Does it have the same meaning for all respondents? Is the specific wording suitable? What type of response format does it require?
Does it require an open or a closed format? You must be very clear about these questions. What level of measurement are you looking for? Nominal, where you are looking for just categories? Or are you looking for ranks? We have just shown you how ranking questions look. Or are you looking for numerical variables? For closed questions, what kind of format is it? Are you asking for ratings, scores or rankings? So again, consider the kind of options you provide in closed questions. Or are you providing them with a checklist? We just showed a kind of checklist where they can give more than one answer. How will non-committal responses be handled? This applies mostly to an interview schedule, not to a self-completed questionnaire, because if somebody is filling out the questionnaire himself or herself, you can't take care of that; but if there's an interviewer, how does he or she handle it? Will a middle response be included? I showed you a balanced scale; decide whether the middle point is included or not. Is a 'don't know' option available? If you provide a 'don't know' option, then a lot of your answers might be 'don't know', and that is not good; but if you don't provide it, you might get wrong answers. So you must have very good reasons for including or not including the 'don't know' option. Are the options exhaustive, and are they mutually exclusive, so that one excludes the other? You have to be clear. Is the scale balanced? Are clear instructions provided? How will respondents indicate their responses, and are you providing them sufficient space? Wherever they skip questions, is that easy to follow? Are you following the principles of question order? And there is coding, et cetera. Then: are you using pilot testing, and if not, why not?
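The 'don't know' decision also matters at the analysis stage: a common convention is to report the raw distribution alongside a "valid percent" that treats 'don't know' as missing. A small sketch of both tabulations; the answer data are made up.

```python
from collections import Counter

# Sketch: tabulating a closed question two ways -- with "don't know" kept
# as a category, and with it treated as missing ("valid percent").
# The response data are invented for illustration.

def distribution(answers, missing=("don't know",)):
    raw = Counter(answers)
    valid = [a for a in answers if a not in missing]
    valid_pct = {a: round(100 * c / len(valid), 1)
                 for a, c in Counter(valid).items()}
    return raw, valid_pct

answers = ["yes"] * 6 + ["no"] * 3 + ["don't know"] * 1
raw, valid_pct = distribution(answers)
print(dict(raw))   # {'yes': 6, 'no': 3, "don't know": 1}
print(valid_pct)   # {'yes': 66.7, 'no': 33.3}
```

Neither tabulation is "the" right one; which you report is exactly the kind of decision the checklist asks you to justify in advance.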
If you are using pilot testing, is it declared or undeclared? That is, are you telling people it is a pilot test or not? If you tell them upfront that it is a pilot test, you might not get honest responses, or they might look at it much more critically because they are looking for errors in the questions. So there are reasons for both, as I said. Then: do the questions work? It is also important that the interview setting remains the same, that the interviewer does not influence the respondent's behaviour, and that all respondents are presented with the same stimuli. That is why we must probe for answers in a very neutral way. And when we communicate with the respondents, it's important that we inform them what the questionnaire is about and why the survey is important, and, most importantly, address the issues of confidentiality and anonymity. As I said, if these are not there, a lot of the problems of social desirability, et cetera, will crop up. So it is important that we keep all these questions in mind. The design and the layout are extremely important as well: if the design and layout are not proper, if the questionnaire is not visually appealing, then people, especially for online questionnaires, might not be interested in answering your questions. So this was all about questionnaires: the wording, the ordering, the pre-testing, the piloting and all that. As you can understand, asking questions is not a very simple job. There are a lot of decisions you have to make, and a lot of assumptions you have to make and justify. Questionnaires are a very, very important instrument for survey research, and if we use them properly and adequately, we will get very good results, the proper results of our research project.