Welcome to Identifying Research Questions, part of the Research and Assessment Cycle Toolkit offered by the Association of Research Libraries and made possible by a grant from the U.S. Institute of Museum and Library Services. This presentation is part of a module that describes ways to articulate the focus of library assessment projects. It includes strategies for structuring research questions for library assessment projects. We hope the content is useful to library practitioners seeking to conduct library assessment. At the close of the presentation, you will find a link to a feedback form. Please let us know which elements were useful to you. At the core of most library assessment projects is a need to close a gap, solve a problem, or answer a question. In order to design effective assessments that result in increased knowledge and understanding, it is essential to begin with a clear conception of the precise gap, problem, or question that is driving the work. What specifically needs to be learned about the issue at hand? One way to attain clarity and refine away any initial vagueness around the information need an assessment project seeks to address is to express that need succinctly. There are three common ways of doing so: articulating research questions, user stories, and hypotheses. Perhaps the most common of the three in library assessment circles is the guiding question. Assessment is often driven by a guiding question or research question, and while assessment is not always considered capital-R research, certainly from a practical, practice-oriented perspective, assessment question formulation has a lot in common with research question construction. Often, the questions that drive library assessment projects are made up of two elements and seek to explore and better understand the relationship between those two elements.
This is, of course, not the only type of research question one might encounter in library assessment contexts, but it is common, so let's use this type of question as an example. There are two main elements included in this type of question. First is some sort of outcome, often focused on users or stakeholders; in other words, a larger purpose that the library seeks to create, support, or contribute to for those who engage with libraries. The second element of library assessment questions like these is a library service, resource, space, or some other expression of library expertise; in other words, this element is focused on something, or some activity, the library provides. Taken as a whole, the question explores ways in which these hoped-for needs, goals, values, etc., of library users are connected, or could be connected, to library activities. It can be helpful to think about library assessment questions with these elements in mind: first, what the intent is; then, what the vehicle is for achieving that intent; and finally, an examination of the degree of connection between the two. Again, there are nearly endless formulations for assessment questions, and some may be far more open-ended. At the same time, because these kinds of library assessment questions are so common, let's look at them in a bit more detail. One typical structure for this type of library assessment question can be described in three parts: a library offering being explored, which may be a service, resource, space, or some other expression of library support for users; a verb showing a relationship, like contribution, impact, or influence; and an area of outcome or impact. In academic library spheres, that might often be expressed in terms of productivity, affordability, efficiency, equity, learning, etc. Let's look at some examples of questions formatted using this basic structure.
Here we have some examples of three-part questions focused on reading and resource use. Each one is shaded according to the parts of the question to make the structure a little clearer. One example might be around library provision of course-associated reading lists or reserves and the influence that may have, or not, on faculty decision making in selecting course readings. Another example might be a library-sponsored campus one-read program and exploring the contribution it might make, or not make, to entering students' feelings of belonging. Each of these combines a desired outcome with a library offering and explores the relationship between the two. The same pattern can be observed in these instruction-focused questions. For example, the question about the ways in which a library makerspace may support student entrepreneurship partners a library facility with a desired outcome, that is, student entrepreneurship, and sets up an exploration of the connection between the two. The same setup can be seen in these questions about library reference services. For example: do students who engage in reference consultations (the library offering) do better (the relationship) in their courses (an outcome that here might be evidenced using a surrogate of course or assignment grades)? Of course, many intended needs, problems to solve, or goals to achieve might be contributed to by a variety of library services, resources, spaces, and so on. One way to keep track of the ways in which libraries might make these contributions, and of the assessments that might investigate or explore them, is an impact map. In this example, individual outcomes can be tracked over multiple library offerings. This kind of coordination, though very simple, can be useful in moving from an episodic approach to library assessment to one that explores relationships over time using multiple methods and approaches.
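An impact map like the one just described is often kept in a spreadsheet, but as a minimal sketch it can also be represented as a small data structure, which makes the outcome-to-offering links easy to query in either direction. The outcomes, offerings, and function names below are illustrative assumptions, not part of the toolkit itself.

```python
# Minimal sketch of an impact map: each desired outcome is mapped to the
# library offerings that might contribute to it. All entries are hypothetical.
impact_map = {
    "student sense of belonging": ["campus one-read program", "library study spaces"],
    "student entrepreneurship": ["makerspace", "business research guides"],
    "course performance": ["reference consultations", "course reserves"],
}

def offerings_for(outcome):
    """Return the library offerings linked to a given outcome."""
    return impact_map.get(outcome, [])

def outcomes_for(offering):
    """Reverse lookup: the outcomes a given offering is linked to."""
    return [o for o, offs in impact_map.items() if offering in offs]
```

Keeping even this simple mapping up to date makes it easier to layer multiple assessments onto the same outcome over time, rather than treating each project as a one-off.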
Thus we move from a project-based approach to a programmatic approach, in which assessments can be layered over time to gain deeper understanding of user experiences, trends, and nuance that might not be revealed by a less cumulative approach. This chart can be a helpful guide in writing this type of library assessment question. It provides a number of starting phrases as well as the three-part structure. Of course, one can always turn the relationship around and start the question with the outcome, or depart entirely from this research question structure and use another structure that works for the assessment need at hand. The key here is to be explicit in stating the question that is at the heart of the project, as the guiding question will help you make decisions throughout the assessment process and recommendations at its close. In research questions, as in other ways of clarifying the central goal, need, or question of an assessment project, the issue often arises of whether the relationships expressed in a guiding question are correlational or causal. Understanding correlation and causation is a big topic, too big to address completely here. At a minimum, it bears acknowledging that many scholars believe, particularly in social science research fields, not only that correlation does not imply causation, but also that demonstration of causation is impossible, because in non-lab, real-world situations there are too many unknown and uncontrollable factors. The problem of unknown and uncontrollable factors is especially significant in educational settings, where students are potentially influenced by endless prior and concurrent experiences that impact research results, and where randomized controlled studies, usually considered the gold standard of experimental research, are not typically feasible.
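To make the correlational side of this discussion concrete, here is a minimal sketch of computing a Pearson correlation coefficient between a library offering and an outcome. The pairing of reference-consultation counts with course grades, and the data values, are entirely hypothetical; a real assessment would need far more care with sampling, sample size, and confounding factors.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical data: reference consultations per student, and final grades.
consultations = [0, 1, 1, 2, 3, 4]
grades = [71, 74, 78, 80, 86, 90]

r = pearson_r(consultations, grades)  # a value near +1 is a strong positive correlation
```

Even a coefficient near +1 here would say nothing by itself about causation; as the discussion below notes, what that number licenses you to claim depends on which position on causation you adopt.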
Other scholars acknowledge that while definitive causation cannot be proven, given well-designed research, consistently strong correlations, a theoretical model that supports a causative connection, and strategies that control for other factors and alternative explanations, it may be reasonable to argue that a causative relationship exists and to act on that assumption. So, if a librarian subscribes to the first idea, that causation is not demonstrable, then strong positive correlations are a suitable end goal for research, and the results of such research and assessment can be communicated and employed to make decisions and take actions. If a librarian is persuaded by the second position, that correlations bolstered by proven theory and shown to be free of all other influencing factors are equivalent to causation, then strong positive correlations should be contextualized within a larger theory, and additional factors and explanations must be ruled out, before the results of such research or assessment can be declared causative and then shared and used to make decisions and take actions. By the way, in the second scenario, librarians should also be prepared to explain their claims of causality to those who subscribe to the first idea of causation. Arguments about causation aside, strong positive correlations are the stuff of action for librarians. When librarians determine that particular library activities or interactions are correlated with positive learning or other outcomes, those correlations point the way for librarians to plan improvements to library services, resources, and spaces. Once you have a question that clearly identifies and articulates the goal of an assessment project, there are a number of additional questions you might use as a self-check or to make subsequent decisions. These are some of them. Is the question you're asking worth answering? Why is it worth answering? What is your purpose in answering the question?
What evidence, data, feedback, or other information will help you answer the question? How can you gain access to that information? Are there other studies that are focused on similar questions? What did their process look like? Their results? What can you learn from those studies? What research and assessment methods are aligned with questions like yours? What advantages or disadvantages does each have? Might you need more than one approach to answer your question and counteract flaws in particular methodologies when applied to your question? Who else should be involved? Think of users, participants, stakeholders, partners, and colleagues. No assessment is an island. What do you expect to come of the results of this assessment project? Who will need or want to know about the results? How do you expect them to act on the results? These questions may help you find gaps in your assessment question and cause you to revise it, or help you confirm that you're on the right track. Doing the hard work to form your assessment question in such a way that it is helpful and clarifying throughout the rest of the process is essential and starts your assessment process off strong. Thank you for viewing this presentation on articulating the focus of a library assessment project. Please use the link provided to complete a feedback form on the usefulness of this information for your purposes.