Welcome to Considering Your Methods Options, part of the Research and Assessment Cycle Toolkit offered by the Association of Research Libraries and made possible by a grant from the U.S. Institute of Museum and Library Services. This presentation is part of a module that focuses on collecting data, evidence, input, or other information for library assessment projects. It describes a variety of methods, strategies, approaches, and tools that can be used to collect information for library assessment. We hope the content is useful to library practitioners seeking to conduct library assessment projects. At the close of the presentation, you will find a link to a feedback form. Please let us know what elements were useful to you.

Once one has determined a focus for an assessment project and considered the type of data, evidence, or other feedback required to address the project's information need, a next step is selecting the methods that offer the best options for eliciting the information necessary to address the assessment question or problem. There are myriad possibilities when it comes to choices of assessment methods, strategies, approaches, and tools. This slide includes a number of options and is far from exhaustive. Some of these choices for data collection are described in this presentation in the order listed here; others are covered in separate presentations in this module. In all cases, this information is provided to help library assessment practitioners get started with an approach. More research into a particular method is advisable before enacting it in practice.

One method for gathering assessment data, evidence, or other kinds of information and feedback is the recording of anecdotes. Library users often volunteer information to library workers in various ways; an impromptu telling of their experiences during a transaction is one common context. Many library workers can describe any number of anecdotes shared with them by users. While this approach to gaining feedback is typically somewhat informal, it can also be formalized into a tracking system. Libraries might decide to record such anecdotes in a systematic way, capturing dates and times; contexts or events as they are reported or observed; related outcomes or impacts; and any other information deemed necessary. Gathering anecdotes in an organized fashion provides a wellspring of stories that can be used to provide context for data collected in other ways.
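As a minimal sketch of what such a tracking system might record, the short Python snippet below appends anecdote records to a CSV file. The field names and sample entry are illustrative assumptions, not a prescribed schema; a local implementation would adapt both to its own needs.

```python
import csv
from datetime import date
from pathlib import Path

# Illustrative fields for a simple anecdote log; adjust to local needs.
FIELDS = ["date", "context", "anecdote", "reported_outcome", "recorded_by"]

def log_anecdote(path, context, anecdote, reported_outcome, recorded_by):
    """Append one anecdote record to a CSV log, writing a header if the file is new."""
    log_file = Path(path)
    is_new = not log_file.exists()
    with log_file.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "context": context,
            "anecdote": anecdote,
            "reported_outcome": reported_outcome,
            "recorded_by": recorded_by,
        })

# Hypothetical example entry.
log_anecdote(
    "anecdotes.csv",
    context="reference desk",
    anecdote="Student said a librarian's search help saved their thesis chapter.",
    reported_outcome="completed thesis chapter",
    recorded_by="staff initials",
)
```

Even a flat file like this, kept consistently, supports the kind of organized story-gathering described above.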
Another method for organizing data and information that is central to assessment is the balanced scorecard approach. A balanced scorecard links performance measures to strategies, usually in a strategic plan or similar document. In general, a balanced scorecard is focused on four or more perspectives. Typical perspectives include a user perspective focused on how users view the library, an internal perspective centered on library excellence, an innovation or learning perspective focused on what the library can do to improve and create additional value for users, and a financial perspective. Connecting performance measures to these perspectives helps surface relationships among various areas within the library, along with the choices and balances among them.

For example, the user perspective within a balanced scorecard focuses on users, their experience, and the value provided to them by engagement with library services, resources, and spaces. The internal perspective may focus on processes, or how tasks are accomplished within the library, as well as competencies, productivity, and capacity. The innovation and learning perspective centers on the ability of the library organization to grow and develop new services; it can also focus on organizational culture and the professional development and training of library workers. The financial perspective dials into funding and the stewardship and balancing of library funds.

Within each perspective in a balanced scorecard process, there are typically objectives, measures, targets, and initiatives aligned with one another. Target setting can be challenging, but basing targets on stretching a bit from historical levels is a common approach. It bears pointing out that this approach not only helps think through library strategy and incorporate assessment into that strategy, it also helps communicate strategy, both internally and externally.
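To make the alignment of objectives, measures, targets, and initiatives concrete, here is a small hypothetical sketch in Python. The sample objective, the historical baseline of 4.1, and the 5 percent stretch factor are assumptions for illustration only, not recommended values.

```python
# A minimal sketch of one balanced scorecard entry; names and numbers are illustrative.
def stretch_target(historical_level, stretch=0.05):
    """Set a target by stretching a bit beyond the historical level."""
    return round(historical_level * (1 + stretch), 1)

scorecard_entry = {
    "perspective": "user",
    "objective": "Improve satisfaction with research consultations",
    "measure": "Mean satisfaction rating (1-5 scale)",
    "target": stretch_target(4.1),  # historical level of 4.1, stretched ~5%
    "initiative": "Revise consultation follow-up workflow",
}

print(scorecard_entry["target"])  # 4.3
```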
Our next method for assessment is benchmarking and the use of comparative indicators. Benchmarking, the process of measuring services, resources, or spaces in comparison to similar or aspirational organizations, can be a useful tool for understanding some aspect of an organization or process in relation to others. Composite indicators can also be used to benchmark, but they typically involve more complex concepts that are better captured with a combination of metrics than with one or even several metrics on their own.

Benchmarking is not a one-size-fits-all approach, and there are several approaches you can take to compare the information most relevant to your organization or situation. One way to categorize benchmarking approaches is performance versus practice. Performance benchmarking often uses quantitative data to make comparisons about some aspect of performance in order to identify gaps. Practice benchmarking tends to be more qualitative and focused more on personnel, processes, or technology. In addition, benchmarking can be internal or external. You may want to compare information across departments within your organization to see where improvements can be most effective. Alternatively, benchmarking externally can give you a better understanding of how your organization compares to similar institutions. By looking at differences with other institutions, you can be in a better position to recommend changes that may allow your organization to perform more like your aspirational peers.

This is one example of a benchmarking process. In this example, a series of repeatable steps would be followed for continuous assessment and improvement. In step one, which is focused on planning, the focus and plan of a benchmarking study would be decided upon. Next, in step two, data are collected to make comparisons. Then, the data would be compared and gaps in performance discerned. Finally, in the adapt stage, goals for improvement would be developed, along with action plans for achieving those goals. In general, having a systematic benchmarking process like this in place increases transparency and allows for repeatability. It also makes it easier to pinpoint where adjustments may be made to make benchmarking less labor-intensive.

Composite indicators are used to measure multi-dimensional concepts that can't be captured with a single metric or data point. They are useful for assessing complex concepts by breaking a concept down into constituent parts and providing a way to combine metrics representing each part into a single value that can be compared over time. The underlying data and process for building a composite indicator can be simple or complex. However, a robust indicator involves several key elements: establishing a sound theoretical framework, selecting appropriate data and applying data-cleaning techniques, normalizing variables for increased comparability, applying appropriate weighting and aggregation methods, and conducting uncertainty and sensitivity analyses. Like any methodology, composite indicators have advantages and potential drawbacks. Advantages include the ability to distill complex concepts into more easily digestible and comparable results. In many cases, the disadvantages of composite indicators, such as the subjectivity of weighting choices or the loss of underlying detail, can be mitigated with transparent processes and open communication about construction, interpretation, and recommendations.
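As a simplified illustration of the normalization, weighting, and aggregation steps, the sketch below applies min-max normalization and a weighted sum, one common combination among several possible methods. The metric names, values, and weights are invented for the example.

```python
# Sketch: min-max normalization plus weighted-sum aggregation.
# Metric names, values, and weights are illustrative assumptions.

def min_max_normalize(values):
    """Rescale a list of raw values to the 0-1 range for comparability."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical yearly values for three metrics feeding one composite indicator.
metrics = {
    "instruction_sessions": [120, 150, 170, 160],
    "consultation_hours":   [300, 280, 340, 360],
    "workshop_attendance":  [800, 950, 900, 1100],
}
weights = {"instruction_sessions": 0.5,
           "consultation_hours":   0.3,
           "workshop_attendance":  0.2}

normalized = {name: min_max_normalize(vals) for name, vals in metrics.items()}

# Composite value per year: weighted sum of the normalized metrics.
years = len(next(iter(metrics.values())))
composite = [
    sum(weights[name] * normalized[name][y] for name in metrics)
    for y in range(years)
]
print([round(c, 2) for c in composite])
```

A real indicator would also document why each weight was chosen and test how sensitive the result is to those choices, per the uncertainty and sensitivity analyses mentioned above.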
Moving on to another method for gathering assessment information, the critical incident technique can be a useful approach on its own or as part of another method. Generally speaking, the critical incident technique is used to gather information about an event that was memorable to users. Users are asked not about the most recent time they had a particular experience, but about a memorable one. For example, users may be asked about a time they remember interacting with the library and then to describe what impact that interaction had on them. Here you can see a few examples. At one university, for instance, users were asked to think about a time when the library helped them and then to share what help they received and what they were able to do or achieve with that help.

Customer feedback analysis is a related approach that many libraries have traditionally employed. While it can take many forms, customer feedback analysis typically consists of a feedback or comment form in which users make known their perspectives, experiences, and views. This information is most useful from an assessment perspective if it is organized, analyzed, reported, and used to provide context for other assessment data or to create change based on user feedback. Two common ways of organizing customer feedback for assessment use are to track the feedback by problem or by question theme.

The Delphi technique is a method that gathers and processes the collective expertise of an expert panel. Of course, expert in this sense can encompass any number of forms of expertise, including academic interest, for example, but also lived experience. In this way, the Delphi technique can be used not only for working with "experts" in a specialized academic area, but also with users who have common experiences that are important to understand better. The goal of the Delphi technique is, generally, to reach consensus on a topic through the elicitation and processing of a range of views. This is a facilitator-guided approach that demands significant time investment from participants. Generally, a set of questions on some topic is posed to the experts; the experts reply, and their responses are compiled for a second round of review by the same experts, who provide additional opinions and perspectives. This process continues until consensus is achieved. This graphic lays out a typical process for a Delphi method approach and demonstrates the significant investment and back-and-forth communication between facilitators and experts throughout the process of identifying issues and working together until a consensus emerges.

Another useful assessment approach is the analysis of documents. Any number of documents can serve as inspiration or fodder for this type of approach. Document analysis might center on user feedback forms, strategic planning documents, policy and procedure manuals, user communications like chat reference transcripts, or documents that are external to the library, such as course assignments completed by students, syllabi, institutional documents, and the like. In a document analysis assessment project, documents should be checked for credibility, representativeness, and meaning. Typically, a schema and method for coding the documents is agreed upon and may be applied by a single rater or by multiple raters. When multiple raters are used, it is often good practice to check for inter-rater reliability.
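As one illustration of an inter-rater reliability check, the following sketch computes Cohen's kappa, a common agreement statistic, for two raters who each assigned one code per document. The code categories and ratings are invented for the example.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one code per document."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes two raters assigned to ten chat-reference transcripts.
rater_a = ["known-item", "topic", "policy", "topic", "topic",
           "known-item", "policy", "topic", "known-item", "topic"]
rater_b = ["known-item", "topic", "topic", "topic", "policy",
           "known-item", "policy", "topic", "known-item", "topic"]

print(round(cohens_kappa(rater_a, rater_b), 2))
```

Kappa corrects raw percent agreement for agreement expected by chance; a low value suggests the coding schema or rater training needs revisiting before analysis proceeds.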
Economic studies are also often employed as assessment tools. Such projects might focus on return on investment, financial stewardship or savings, comparative costs, and so on. This approach can be challenging in academic contexts, as there can be significant difficulties in using financial surrogates in an academic library environment that are not nearly as troublesome in, say, a library in a corporate context. Having said that, many studies have been conducted, and some of these, most often in public library contexts, have attempted to develop economic calculators as one kind of assessment approach.

Environmental scanning can also be used in an assessment context. The goal of an environmental scan is to check in on and observe events in a library's external, and sometimes internal, environment. Typically, the focus is to anticipate issues that might become obstacles or opportunities the library should prepare for. Environmental scanning might focus on a variety of areas: social, demographic, and cultural issues; legal, ethical, or political issues; economic issues; technological change; environmental issues; and so on. This approach usually leverages other strategies, including document analysis, statistical trend forecasting, reviews of publications, and expert interviews.

Ethnographic methods range widely in specific use and application. Very generally speaking, ethnographic approaches are observational in nature and seek to explore and represent the worldview of the participants being observed. In some cases, assessors or researchers are immersed in the group; other times, other approaches are employed. Typically, the goal of this type of method is to explain what is happening in a situation, describe why the group of participants acts as it does, and learn from those observations and descriptions.

Another assessment approach, the experiment, can be tricky in library assessment contexts. The term experiment often makes one think of a randomized controlled trial separated from real life, so to speak. Certainly, in the real world of academic library assessment, it can be difficult or impossible to control variables the way one might in a lab. While the traditional type of experiment that often comes to mind may be challenging, impossible, or simply undesirable, particularly given the ethical problem of withholding access to library offerings, quasi-experimental and retrospective designs may still be an option. The details of this approach, like any other assessment method, need to be considered, weighed, and thought through in far more detail than is possible in this presentation. Still, library assessment does not necessarily preclude designs that identify users who have or have not had a particular library experience and then compare outcomes across the groups. This may sound simple, but it is not, as the various experiences co-mingled with having or not having a particular experience are not easily teased apart. Because true randomization of experimental and control groups is often not possible, drawing both groups from the same population may help in a quasi-experimental design like the one depicted here. Having said that, the more the groups differ, the more threats to validity there are. This approach to library assessment isn't straightforward, which is reflected in the terminology: this kind of design is often called a compromise design.

Learning analytics is one of the newer additions to the assessment toolkit. Learning analytics can be defined as the use of institution-level systems that collect individual-level student learning data, centralize it in a record store, and serve as a unified source for research seeking to understand and support student success. For libraries, this assessment approach connects library data with data held by other parts of the institution to explore connections between library engagement and user outcomes. When students are the focus, the goal of learning analytics is to help librarians, faculty, advisors, and other members of the educational team discover challenges to learning in order to plan systemic and structural changes that remove barriers, and to facilitate individual-level communications that connect students with relevant supports. This diagram demonstrates the ways in which various kinds of institutional data are incorporated into a learning analytics system, consolidated into a record store, used for analysis, and provided to students, faculty, institutional administrators, and others to enable change and improvement. Currently, this is a new area for libraries, one that will likely become a more common library assessment approach as time goes on.
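As a highly simplified sketch of the kind of data linkage involved, the snippet below joins hypothetical library engagement records with hypothetical institutional outcome records by student ID. Real learning analytics work requires privacy review, proper de-identification, and far more careful analysis; an observed difference in a join like this would not by itself imply causation.

```python
# Hypothetical records: library engagement events and institutional outcomes,
# keyed by (pseudonymized) student ID. Real projects require privacy review.
library_engagement = {
    "s001": {"workshops": 3, "checkouts": 12},
    "s002": {"workshops": 0, "checkouts": 2},
    "s003": {"workshops": 1, "checkouts": 7},
}
institutional_outcomes = {
    "s001": {"gpa": 3.6, "retained": True},
    "s002": {"gpa": 2.9, "retained": False},
    "s003": {"gpa": 3.2, "retained": True},
}

# Join the two sources on student ID, as a record store would consolidate them.
merged = [
    {"student": sid, **library_engagement[sid], **institutional_outcomes[sid]}
    for sid in library_engagement.keys() & institutional_outcomes.keys()
]

# A first-pass comparison: mean GPA by workshop participation.
attended = [r["gpa"] for r in merged if r["workshops"] > 0]
did_not = [r["gpa"] for r in merged if r["workshops"] == 0]
print(sum(attended) / len(attended), sum(did_not) / len(did_not))
```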
Peer review is another possible approach for library assessment. This strategy focuses on asking qualified experts to assess a library service, resource, space, or other offering. Once qualified experts are identified, specific assessment criteria should be established and agreed upon, strategies for transparency put in place, and a reporting structure established that is clear and accessible for all involved.

Rubrics are another tool for library assessment. While often associated with the assessment of learning, they can be applied to library services, resources, or spaces as well. Essentially, rubrics are a structured approach for describing the outcomes an interaction or engagement of some kind is hoped to achieve, as well as the expected levels of achievement for each stated outcome. Thus, rubrics can be used to articulate clearly the outcomes a library offering is designed to achieve and to describe levels of performance along a continuum of success. It's worth pointing out that there are several types of rubrics, each designed with a different purpose in mind. Analytic rubrics assess the component parts of a service, resource, or space. Holistic rubrics assess a library offering as a whole. Task or performance rubrics are designed for a very specific one-time assessment, while developmental rubrics are intended to be used over multiple episodes, events, time periods, or groups. The success of applying a rubric to an assessment relies in part on ensuring a match between the assessment need and the rubric type.

User experience is an assessment approach that focuses on deeply understanding users and their experiences. It emphasizes user needs, abilities, tendencies, and limitations, with an overall goal of improving user interaction and engagement. A variety of assessment methods can be applied to user experience, and there are also some tools and strategies closely tied to UX goals, some of which are listed here. The end result of user experience projects is typically proposed solutions, perhaps prototyped and tested, as well as productive collaborations.

A final approach, at least for this presentation, is the tracking and analysis of usage data. These data may already be collected for a variety of purposes, including service provision. From an assessment perspective, trends and patterns in resource use and service transactions can help librarians understand user experiences and engagement. This approach may leverage service, resource, or space use counts and may involve vendor-supplied data, web analytics, or other data collection. The data may offer session data, search information, access or download counts, or other indicators of use. These data can also be abstracted to user groups by role, level, and so on.
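As a small sketch of trend analysis on usage counts, the snippet below smooths a made-up monthly download series with a moving average and compares totals across the two halves of the year; the figures and the three-month window are assumptions for illustration.

```python
# Hypothetical monthly e-resource download counts for one year.
monthly_downloads = [410, 385, 520, 475, 300, 210,
                     190, 240, 560, 610, 590, 430]

# Simple 3-month moving average to smooth semester-driven swings.
window = 3
moving_avg = [
    sum(monthly_downloads[i:i + window]) / window
    for i in range(len(monthly_downloads) - window + 1)
]

# Coarse comparison: first half of the year versus second half.
first_half = sum(monthly_downloads[:6])
second_half = sum(monthly_downloads[6:])
change = (second_half - first_half) / first_half
print(moving_avg)
print(f"Half-over-half change: {change:.1%}")
```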
This presentation includes a wide range of assessment options for gathering data, evidence, feedback, and other information to answer assessment questions or needs. All of these approaches have strengths and weaknesses for any particular assessment project. Oftentimes, the only way to counteract or minimize the disadvantages of a particular assessment method is to combine methods. Using multiple and mixed methods reflects how we make sense and meaning as humans: we tend to develop understanding by combining information. By using a variety of approaches to listen, learn, and explore concepts, issues, and experiences, we are more likely to gather different perspectives and ways of understanding. In general, using more than one way to understand a complex issue provides greater understanding than any one method would offer alone. Therefore, best practices for library assessment include applying more than one strategy to respond to an assessment information need, problem, or project.

Whatever method or methods are selected to respond to an assessment need, a few other general principles apply; some are shared here. Regardless of approach, library assessment practitioners should observe ethical standards, processes, and policies. In some cases, library assessment work will be strictly tied to and guided by IRB and other institutional processes or professional practices. In all situations, professional ethics should be adhered to. As a general rule, get permission, allow participants to withdraw, and explain your purposes and how your data will be used. Provide your role and contact information to participants. Don't make assumptions; explain things clearly and simply. Arrange participant engagement in assessment logically and in a straightforward and transparent manner. Before working with participants in a particular assessment undertaking, pilot your process and revise it based on the feedback you receive. After the assessment, analyze your results thoroughly. Make sure you know why you're analyzing in the way you chose and what limitations those decisions introduce. Don't make claims that the data can't support, and always acknowledge your positionality, your limitations, your sources, and your collaborators.

No matter what assessment method you select for a project, ethical practice considerations must be thought through and implemented, and many institutions require training in this area. Among the important areas to be aware of and apply are informed consent practices; privacy practices, including thorough analysis and handling of confidentiality requirements; duties of beneficence and care; awareness of power differentials; consideration of costs and benefits; rights and protections, including those of ownership of and access to data; avoiding biased or selective designs or analyses; and reciprocity.

Finally, other criteria will influence the selection of an assessment method or approach for a given project need. Among these is the appropriateness of the method to the research question and purpose: if the selected method won't ultimately serve the intended purpose or anticipated decision-making requirements of the project, that can upend the utility of the assessment entirely. The format and type of the resulting data is another criterion. Will the data that result from the assessment tool or approach fit the need and purpose of the project? Does the library assessment practitioner have the capacity, or access to others with the capacity, to accurately analyze and report results? Does the selected assessment method align with the overall assessment plan and program over time; in other words, does it fit into the whole assessment strategy? Furthermore, does the assessment choice fit the resources that can be provided? This is a cost issue. Are there sufficient financial, time, and personnel resources to implement, analyze, and report out results for this approach? Are the associated costs likely to be one-time or ongoing? These and other local criteria may play a major role in selecting among the assessment approaches available to library assessment practitioners. Making the best choice possible matters, as the selection of methods for understanding impacts many other stages of the overall assessment process.

Thank you for viewing this presentation on collecting data, evidence, input, or other information for library assessment projects. Please use the link provided to complete a feedback form on the usefulness of this information for your purposes.