Welcome to Conducting Surveys, part of the Research Assessment Cycle Toolkit offered by the Association of Research Libraries and made possible by a grant from the U.S. Institute of Museum and Library Services. This presentation is part of a module that focuses on collecting data, evidence, input, or other information for library assessment projects. It describes survey methods for library assessment projects, including question types and overall design. We hope the content is useful to library practitioners seeking to conduct library assessment projects. At the close of the presentation, you will find a link to a feedback form. Please let us know what elements were useful to you.

After library assessment professionals determine what they need to know or want to learn from an assessment undertaking, and have considered the wide range of assessment methods that could provide that knowledge or understanding, the process of developing one or more methods is at hand. Should library practitioners select surveys as the approach to a particular assessment project or need, it's important to think through surveys as a general method, the various types of survey questions, and the guiding principles of survey design.

Surveys are generally considered a quantitative approach to assessment. They're used to estimate the current status of a particular issue, concept, or phenomenon. Surveys are often used to make generalizations about large groups of people by studying smaller groups and drawing inferences. Surveys are susceptible to bias, particularly when the individuals responding to the survey are not representative of the population the survey is attempting to understand. Bias can also occur when the survey does not accurately measure what it intends to measure. In general, surveys are commonly used to describe the characteristics of groups, explore associational relationships or correlations, and make predictions.
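The idea of generalizing from a smaller group to a larger population is usually quantified with a margin of error. As a minimal illustrative sketch (not part of the presentation; the function name and the 1.96 z-score for 95% confidence are assumptions chosen for the example), the margin of error for a proportion observed in a survey sample can be computed as:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion.

    p: observed proportion (use 0.5 for the most conservative estimate)
    n: number of respondents
    z: z-score for the desired confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Example: 400 respondents, 60% agree with a statement.
moe = margin_of_error(0.60, 400)
print(f"60% agree, +/- {moe:.1%}")  # roughly +/- 4.8 percentage points
```

Note the design consequence: because the sample size appears under a square root, halving the margin of error requires roughly quadrupling the number of respondents, which is one reason survey planners decide on an acceptable margin before setting a target sample size.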
Surveys come in various types; some of those are included here. Exploratory surveys help increase understanding and familiarity with a phenomenon and are typically used to inform future research. Cross-sectional surveys are designed to measure a phenomenon across a representative sample of a population. Trend surveys are designed to capture trends over time. Panel surveys are designed to follow changes experienced by the same survey respondents over time. Critical incident surveys are intended to examine some specific event in depth. Any of these types, or others, might be appropriate depending on the needs of a given library assessment project.

Surveys offer a number of advantages to a library assessment practitioner. For example, the distance that a survey keeps between an assessment practitioner and respondents may encourage more candid responses. Additionally, well-designed surveys can mitigate possible bias, though of course poor survey design can also introduce or exacerbate bias. Because surveys are fixed in form, they can minimize the variability that may be present in other assessment approaches. And depending on the survey, they can make quantitative data easier to collect and analyze. While surveys are often considered less costly and time-intensive than some other assessment methods, that depends on the survey itself: whether it is locally developed or purchased, and whether the expertise for analysis is available in-house or must be secured another way.

Disadvantages associated with surveys include the distance between the assessment practitioner and respondent, which, as just acknowledged, can also be an advantage, so this issue can go either way. While distance can increase the honesty of feedback, it also removes the opportunity for participants to question, clarify, or provide extra feedback that's not directly requested. This distance can also impede opportunities for authentic conversation and engagement.
Surveys are susceptible to bias in a variety of ways, one of which is the sampling error introduced by the basic truth that some people will respond to a survey and some will not. Some respond because they hold extreme positions on a topic; others do not respond because they feel disconnected or disaffected in some way, perhaps by the topic itself or the source of the survey. Still others are busy, are not reached by whatever communication and dissemination process or technology is used, or are simply tired of surveys; the list of reasons individuals avoid responding to a survey is long.

Surveys are made up of questions that fall into a set number of types and formats. Let's explore some of the most common types and formats of questions found in library assessment practice. Survey questions can cover any number of types or content areas. Some questions might be factual; others might focus on opinions, attitudes, perspectives, or feelings. Surveys can also include questions that gauge respondents' knowledge about a topic or their perceptions of themselves, including their feelings of self-efficacy in a particular area. Survey questions might also probe behavior, either actual behavior in the past or present, or hypothetical behavior given a particular scenario or situation.

Survey questions also come in a variety of formats, including open-ended questions that allow respondents to answer in their own words. These can take longer to analyze and require different approaches than fixed-choice questions. Fixed-choice or closed questions are answered in predetermined ways, through yes-no options, scales, checklists, and so on.
Responses to closed questions are easier to analyze and offer less variation than open-ended formats, but they are also susceptible to inaccurate responses, can introduce bias by confining respondents to preset options, and may frustrate respondents, sometimes to the point of early termination of the entire survey. Typically, surveys used in library assessment include a mix of open and closed question formats.

One common structure for fixed-choice survey questions is the checklist. Checklists are commonly used to allow respondents to indicate agreement, describe views, or report activities in a quick format. A checklist prompt may ask about participation in activities, followed by a list of activities respondents may select from. Checklist-formatted questions might instruct respondents to check one or more than one response in a given question or set of questions.

Another common structure for fixed-choice survey questions is the scale. Scales give a range of possible answers that respondents can select from. Often scales are arranged on a continuum, and typically respondents are allowed to opt out with an "I don't know," "not applicable," or similar response. Not allowing opt-out choices for scale-formatted questions can force respondents to answer inaccurately or inauthentically, increasing their frustration and decreasing the validity of their responses.

There are many different versions of scale questions. Some scales are ordinal: the elements are in order, with relative positions but no clear and fixed difference between them. Others are interval: the elements are in order, with equal and meaningful distances between them. Numerical scales use numbers instead of words, while descriptive scales use words or phrases, often with significant detail, and so on. In some cases categories might overlap; for example, a scale might be both numeric and graphic.
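One practical reason opt-out choices matter is that they must be kept out of any computed average, rather than forced into the scale. Here is a minimal sketch of that idea (the labels, variable names, and five-point coding are hypothetical examples, not prescribed by this presentation):

```python
# Map a hypothetical five-point agreement scale to numeric codes.
SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def mean_score(responses):
    """Average the scored responses, excluding opt-outs such as 'N/A'."""
    scored = [SCALE[r] for r in responses if r in SCALE]
    return sum(scored) / len(scored) if scored else None

responses = ["Agree", "Strongly agree", "N/A", "Neutral", "Agree"]
print(mean_score(responses))  # prints 4.0 (the 'N/A' answer is excluded)
```

Had the "N/A" respondent been forced to pick "Neutral" instead, the computed average would shift even though that respondent had no actual opinion, which is exactly the validity problem the presentation describes.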
A nominal scale typically is used for categories that aren't on a continuum, like identifying an academic department affiliation from a list.

Next, let's look at some general guidelines for overall survey design. Typical components found at or near the beginning of a survey include a cover letter or request for participation, consent processes, a header with identifying information for the organization sponsoring the survey, and clear instructions. In these sections, a number of elements may increase the likelihood of survey completion: sending the survey from a respected individual, ideally one familiar to the anticipated respondents; ensuring that consent processes are clear and easily understandable; and ensuring that this initial information explains how the results will be communicated and used to benefit the respondent community, which sets the stage for the actual communication and use of results.

In terms of survey questions, questions used to identify demographic groups are typically placed at the start or end of a survey; unless they are needed at the outset, they are commonly moved to the end. The main survey questions should be presented in a logical order. If the majority of the survey consists of fixed-choice or closed questions, generally at least one open-ended question should be included at the end of the survey. At the close of a survey, you may leave space for respondents to include their contact information and opt in or out of future contact from the project team.

There are, of course, additional checks one should do in examining both individual survey questions and the overall flow of questions. Perhaps the most important question survey designers should ask themselves, for each and every question in the survey, is: is this question necessary? Will the answer to the question be useful? And if so, specifically how might the response be used?
What questions or decisions might be enabled through the gathering of responses to this question? The answers survey designers give to these self-check questions should not be vague. Rather, concrete answers should make it readily apparent whether the survey question will generate information that is necessary and sufficient for the purposes of the assessment project. If one can't imagine how the response to a question might be used, one should reevaluate including the question at all.

Another check to consider is whether the anticipated respondents can answer the questions in an informed way. In other words, will these particular respondents know the answers? Or would some other group of respondents be better prepared to answer the questions?

Overall, questions at the start of a survey should be on topic, easy to respond to, and interesting to respondents. In general, survey questions should be grouped together by topic or subtopic and move from general to specific. It's important to think about question order, as some questions might skew responses by creating an emotional reaction or leading respondents in some way. Check each question to see if it might be influenced by surrounding questions, and consider rearranging questions to avoid this kind of context error.

When it comes to language, each word in a survey question should be analyzed to check for bias, assumptions, or loaded terms or phrasing. Each question should also ask only one question; it's easy for longer questions to accidentally tie in multiple concepts or even entirely separate questions, making it nearly impossible for respondents to answer accurately and authentically. Check for jargon and technical or professional terms that may be second nature to the survey designers but not to the survey respondents.
And to be sure that you've achieved clear questions, pilot test all questions with participants similar to the intended survey respondents, and then revise based on the feedback you receive.

Even experienced survey designers can let bias sneak into their surveys unintentionally. There are nearly countless types of bias that can impact the validity and utility of a survey; this list represents a few. Selection bias, for example, results when the individuals responding to a survey differ from the population of intended interest. This can happen quite often and in a variety of ways: poor design is a culprit, as are the tendencies of some populations to respond or not respond to surveys. When survey samples are created haphazardly, this type of bias can run rampant. Researcher and sponsorship bias occur when wishful thinking, self-serving intentions, or other motivations influence the assessment design to produce the desired rather than the actual and true results; this can be intentional or unintentional. Design bias and bias introduced by respondent interpretations are both generated by weaknesses in instrument design, instructions, use of terms, and so on. Nonresponse bias occurs when low response rates cause the final sample to fall short of being representative of the intended population. Again, these are just a few of the biases that can impact the attainment of valid and usable survey results. Being familiar with these biases is one solid step toward preventing them. When library assessment practitioners know better, they do better. A strong foundation in understanding research biases is one of the many areas where knowing more can make an enormous difference in the final outcome of an assessment project.

Thank you for viewing this presentation on collecting data, evidence, input, or other information for library assessment projects. Please use the link provided to complete a feedback form on the usefulness of this information for your purposes.