Welcome to Conducting Participatory Research, part of the Research and Assessment Cycle Toolkit offered by the Association of Research Libraries and made possible by a grant from the U.S. Institute of Museum and Library Services. This presentation is part of a module that focuses on collecting data, evidence, input, or other information for library assessment projects. It describes participatory research methods, including examples of user-centered design research and community-based participatory research methods. It covers techniques for conducting online and in-person card sorts and Photovoice studies, as well as tips for analyzing findings from these types of studies. We hope the content is useful to library practitioners seeking to conduct assessment projects. At the close of the presentation, you will find a link to a feedback form. Please let us know what elements were useful to you. First, some background about what constitutes participatory research. Then, I'll talk about a few types of participatory research and go into more detail on conducting card sorts and Photovoice studies. I'll wrap up with a few pointers for analyzing findings gathered through participatory studies. Let's start with a definition taken from an excellent article about participatory research methods that is linked below. From the article, participatory research, which I'll refer to as PR, encompasses research designs, methods, and frameworks that use systematic inquiry in direct collaboration with those affected by an issue being studied for the purpose of action or change. According to the authors, Vaughn and Jacquez, PR engages those who are not necessarily trained in research but belong to or represent the interests of the people who are the focus of the research. Researchers utilizing a PR approach often choose research methods and tools that can be conducted in a participatory, democratic manner that values genuine and meaningful participation in the research process. 
PR helps researchers to more meaningfully engage stakeholders and communities in research, which in turn has the potential to create relevant, meaningful research findings translated to action. This type of study is guided primarily by the expertise and interests of the participants and much less by what the researcher feels is important or by the researcher's field of study. In general, participatory research studies work well for discovery and exploration. PR studies can be used for developing a broad, holistic understanding of the problem space, broader than you might gain through structured interviews or usability tests. This type of study can reveal the larger context of where a program or space fits into a user's life, or does not fit into their life, as the case may be. These studies can also be used to uncover behavior patterns you may not have seen before, as well as to identify unmet needs. PR is an excellent approach when the goal is to learn directly from users about their lived experiences or about how they perceive or interact with a service, space, or program. Again, according to Vaughn and Jacquez, the distinguishing feature of participatory research is stakeholder power in decision-making and implementation. PR is a research-to-action approach. I'm going to say that again because it's so important. PR is a research-to-action approach. The goal is not merely to learn more about users, but to follow up on what is learned to make meaningful improvements or design new high-impact services based on expressed needs. It's worth noting here that any research method or tool (surveys, focus groups, interviews) can be participatory if chosen or designed collaboratively between researchers and users or other community partners. Like other qualitative methods, participatory studies are not a good choice if you want quantitative results. They're not good for gathering evidence to validate particular design decisions. 
They're also impossible to conduct if you're not able to work directly with members of your target population or to involve your participants in design, decision-making, and implementation along the way. If you must have full control over the study from start to finish, participatory research is not the approach you should choose. There are many, many different types of participatory research methods. I've included the citation to Vaughn and Jacquez's article here because they include a very extensive chart of PR types and methods that I encourage you to check out. For the purposes of this module, we'll focus on two types of participatory research that tend to work well for library assessment projects. One is user-centered design research, which is an iterative design process that involves users in the design of products or services that are intended for them. Methods include design thinking, participatory design research, and human-centered design, and particular study types include card sorts, charrettes, and design thinking workshops. For this module, I'll focus on open and closed card sorts. A second type, community-based participatory research, often focuses on health or other social issues. This approach involves all partners, including researchers and community members, in every phase of the research process, from study design to dissemination. The example I'll focus on in this module is Photovoice. As with any research study, you'll want to first consider your research questions and goals, and then choose the method best suited to help achieve those goals. Because participatory research is meant to include users or community members from the very start, this would be a good time to involve users. What approaches resonate most with their needs or interests? As I mentioned, there are many types of participatory research. Here are just a few, paired with relevant research topics. I'll pause for a moment while you read over these examples. 
And again, for the purpose of this video, I'll discuss tips for conducting card sorts and Photovoice studies. Before we delve into these methods, I think it bears repeating that nearly any research can be participatory if co-designed and co-facilitated with users or other community members. Okay, so let's now start by talking about conducting card sorts. Card sorts are most useful after you have a general understanding of your users and their behaviors on your website. Card sorting activities are typically used to learn more about how well a website's information architecture, or IA, works, or to redesign the information architecture. In short, IA is the organization, search, and navigation systems that help people to complete tasks, find what they need, and understand what they found. A card sort allows you to observe how participants group together information or topics related to a website. Knowing how users think about categories of information on a website can help you as you build an underlying structure, determine what to put on a homepage, or label top-level navigation or categories of information. You can spend a huge amount of time brainstorming the best label or language to use to describe something, but if you never involve your users in brainstorming or research, you won't know how effective or meaningful to the general public that language or label actually is. Card sorts are a great way to test labels or a navigation structure of a new or existing site. There are two basic types of card sorts, open and closed. We'll talk about these more in a moment. There are also multiple formats to choose from, including physical or paper-based card sorts and online card sorts, and several ways to conduct the sort itself. Each has its advantages and disadvantages. For instance, tests can be conducted one-on-one or in groups. 
Testing one-on-one gives you the opportunity to hear a participant think aloud as they work through the sort and to ask follow-up questions based on the categories and labels they come up with. A downside to this approach, though, is that one-on-one tests naturally require more resources and might mean that you're able to talk to fewer participants. You might conduct a card sort in a group setting and have your participants work either independently or as a group. One potential benefit of the concurrent group sort is that the team approach may provide richer data, as participants work through the sort together, sharing their experiences and building off each other's responses. A potential downside of the concurrent group sort is that group dynamics could interfere with the data you collect. For instance, one or two strong or outspoken participants might sway individual thoughts. You'll also likely need to consider carefully the composition of your group so you're able to get feedback from a representative group of users. You likely would not want to group one undergraduate with graduate students and faculty members, as it might be difficult for the undergraduate to express their opinion. Group methods allow you to conduct multiple tests at once, but a potential downside is that it will likely be harder to understand fully the responses of individual participants in the group setting. Online card sorts are also a viable option, and there are numerous tools available that allow you to develop an online card sorting activity and gather feedback from many, many participants with little staff involvement once the test is set up. These tools also have built-in analysis features, which save time on reporting. Online tools also randomize each test so participants automatically see a shuffled deck of cards. The downside of an asynchronous online sort is that you can't easily ask follow-up questions of individual users. 
In many cases, the users will just click through the online card sort without providing additional feedback. Another option is to conduct the test in person but to use an online tool to guide the test. We'll call this a hybrid card sort. With this method, users will complete the test in your presence but use an online card sorting tool instead of physical cards. Benefits are that you don't have to create your own cards and you can take advantage of built-in analysis tools. You'll also have an opportunity to ask a few follow-up questions if you like. So, I promised we would talk more about open and closed card sorts. These are often used to complement each other. Open card sorts require participants to organize content into categories and then label the groups of content using language that makes sense to them. Open card sorts are especially useful at the start of a project and are best for identifying broad content categories that make sense to users. If time is an issue, you can opt not to have participants provide labels for the categories they develop at this stage of your research. Understanding more fully how your users categorize content on your site is the primary goal of this test. In a closed sort, participants sort content into pre-named categories. A closed sort is best when you want to see how well users understand categories you're considering for a new site or that are already in place in your existing website or system. To make a closed sort fully participatory, you might work with users or community partners to develop the categories and then test those categories with a different set of users. As I said, open and closed sorts can often complement each other, and you could start with an open card sort to determine how users categorize website content and then run a closed sort to determine the clarity and effectiveness of labels you're currently using or that you're proposing for a new site. 
A benefit of card sorts is that they can be done without a lot of resources or overhead. You may use sticky notes or index cards with handwritten or computer-generated labels. You'll want to have some blank cards and pens for users to fill in themselves, because you will almost certainly leave something out. If you and your community prefer a higher-tech approach, there are numerous online tools for conducting online tests. My colleagues and I have had success with Optimal Workshop's OptimalSort. Others worth checking out are Userlytics and UserZoom, both mentioned here. So, a few tips before we move to the next participatory research method. As with other studies we've discussed, you'll need to first consider what you want to test as you develop your card sorting activity. In this case, you'll need to determine which topics or pages (which will become your individual cards) you wish to test. It's important to limit your card sort to 30 to 40 topics or cards. OptimalSort's free version limited the number of cards to around 20 the last time I looked, so you may want to aim for that number when you design your first online card sort. Because you're choosing only 20 to 40 topics to test, you likely won't be able to create a card for every page on your website. In order to choose the cards or topics to include in your sort, consider the most important or popular content. You might want to review web metrics to identify the most frequently used content, or work with your colleagues or members of your community to consider which content is most important. After you've determined the content or labels you want to test, determine which type and format will work best given the goals of your study and the resources you have. The primary difference between online and in-person tests is the quantity and level of detail of feedback you'll receive from respondents. 
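If you do review web metrics to shortlist card topics, a small script can pull the most-viewed pages from an analytics export. This is a minimal sketch in Python; the file name and the `page_title,views` column layout are invented for illustration, so adapt them to whatever your analytics tool actually exports.

```python
import csv

def top_candidate_cards(csv_path, limit=30):
    """Return the most-viewed page titles as candidate card-sort topics.

    Assumes a hypothetical analytics export with "page_title" and
    "views" columns; a real export will likely differ.
    """
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = [(row["page_title"], int(row["views"]))
                for row in csv.DictReader(f)]
    rows.sort(key=lambda r: r[1], reverse=True)  # most-viewed first
    return [title for title, _ in rows[:limit]]
```

You would still review the resulting list with colleagues or community members, since raw popularity alone won't capture everything that matters.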
If you're interested in gathering a lot of data and you're less concerned with why than you are with what, you'll likely want to plan a remote, asynchronous online card sort. On the other hand, if you're less interested in quantity of results and more concerned with hearing users' thoughts about why they feel as they do about the website organization, categories, and labels, you may want to plan an in-person card sort, either with physical cards or an online tool. Again, this is an opportunity to involve participants or community members to design a study that will enable them to engage with questions and topics that are most important to them. And with that, let's shift to another participatory research method, Photovoice. Photovoice is a community-based participatory research method where participants take photos in response to prompts. A follow-up session is then scheduled to view participants' photos and discuss their responses to the prompts. Here are a few sample Photovoice prompts. They were used in a study of Black students at a predominantly white institution in the Southern United States. Participants took pictures and provided short captions, and their photos and captions provided the basis for our discussion. We did not have a structured script with a list of questions, but instead allowed the photos and captions to elicit additional feedback and guide discussion from study participants. We also worked with members of the target user groups to develop prompts that would resonate with study participants. And I'll pause for a moment here so that you can read these. You'll see that these prompts ask students to think about both their campus experience and their library experiences. They're also fairly broad and open to participants' interpretation. This enables participants to truly guide the research in a direction that reflects their lived experience and not just the expertise or interest of the people leading the study. 
So, a few tips based on my experience with this method. We found that providing one to three weeks for participants to take and submit pictures and short captions was adequate. Providing a lot more time than that means that participants, especially student participants, will forget they've committed to the project. We created a simple submission form so that participants could easily submit each photo with an accompanying caption, but some participants just opted to email their photos. We were less likely to receive captions in the email, and when we did receive captions, it wasn't always clear which photo they were meant to accompany. The form definitely helped structure the data from the start, and it provided a nice level of anonymity for participants if they wanted that. After we received submissions, we created a simple slide show with all pictures and the accompanying captions, along with the prompts we provided. We did not include participants' names in the presentation, although participants were certainly invited to identify pictures they took during the discussion if they wanted to. The most important recommendation here, and the reason it's in bold, is to recruit moderators and note-takers from the same community and identity group as the participants. For our Black student study, it was very important that we not have white librarians lead discussion or even take notes. For this study, we were successful in recruiting Black PhD students with experience leading discussion groups. In fact, one of our moderators even had experience with the Photovoice method. Both graduate students contributed their expertise and made the project more restorative and participatory than if my white colleague and I had tried to lead the discussions ourselves. Of course, we compensated our graduate student moderators and note-taker for their time. 
While we did have a note-taker in each session, we also audio-recorded the session and then transcribed the recordings afterward. I would highly recommend doing this to ensure that your notes are not only from the perspective of a single note-taker, who has their inherent biases. Before you record, have your participants fill out a simple consent form. Also, if you'd like to share participants' photos or captions in final reports or presentations, you'll want to ask for their consent, even if you don't attach their name. Submitting photos requires time and effort, so if at all possible, it's a good idea to provide an incentive in the form of cash or gift certificates. We also gave participants handwritten thank-you notes and provided simple snacks and drinks, which seemed to be greatly appreciated by our participants. And now, let's talk about analyzing findings gathered from these two methods. First, for analyzing findings from a card sort. If you opted to use physical cards, record each participant's groupings: write down the labels each participant gives for each grouping and the numbers of the cards the participant included under that category label. Be sure to record the labels on any additional cards that participants created. You may also want to photograph each finished card sort for reference later. You'll want to do this after every test session, because you'll then reshuffle the cards and add new blank cards for the next session. If you opted to use an online tool, you should consult the tool's built-in analysis and reporting feature. After testing is complete, you can analyze demographic information and user comments provided by participants. If you're interested in using and sharing quantitative information, you'll want to analyze which cards participants group together most frequently and how often participants placed cards in specific categories. 
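As a minimal sketch of the pair-frequency analysis just described: you can tally how often each pair of cards landed in the same pile across sessions. The card names and the nested-list data format here are invented for illustration; online tools typically produce this kind of co-occurrence table for you.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sorts):
    """Count how often each pair of cards was grouped together.

    `sorts` is a list of sessions; each session is a list of piles;
    each pile is a list of card labels. This data shape is an
    assumption made for this sketch.
    """
    pairs = Counter()
    for session in sorts:           # one participant's completed sort
        for pile in session:        # one pile of cards in that sort
            for a, b in combinations(sorted(pile), 2):
                pairs[(a, b)] += 1  # sorted() makes the pair key stable
    return pairs

# Invented example: two participants sorting a handful of cards
sorts = [
    [["Hours", "Directions"], ["Databases", "Journals"]],
    [["Hours", "Directions", "Contact"], ["Databases", "Journals"]],
]
print(cooccurrence_counts(sorts).most_common(2))
```

Pairs with high counts are strong candidates to live under the same top-level category.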
For this detailed statistical analysis, you might use Excel, SPSS, or another analysis tool to show the relationships between cards across sessions. For a higher-level and less detailed review, you may simply look over your notes and records of participants' labels and card numbers to find commonalities across sessions. After you've analyzed and considered your data, develop recommendations for your new site or changes to your existing site. This is an opportunity to include participants or members of your community in your analysis and follow-up ideation. Regardless of how you opt to analyze your card sort data, be sure to do something with it. For instance, share what you learned with staff and users most invested in your current or proposed site and then use that to identify ways to make changes to the current site structure or the labels you use for webpages or menus. To analyze Photovoice data, my colleagues and I used a method called affinity mapping, where we closely read and tagged interesting quotes and identified themes in the transcripts. We started by transcribing each discussion. This is very time consuming but was important to our process. We wanted the full research team, including the graduate student moderators who contributed to analysis and report writing, to have access to the full transcripts and not just the notes, which inherently reflect the biases and understandings of the person taking the notes. We followed the process highlighted here. Each member of the research team read the full transcription of each discussion session. We made note of interesting themes, quotes, and topics on individual sticky notes, one finding per note. We then came together in a room and categorized sticky notes by theme or topic area. We ranked themes by importance or impact and discussed our takeaways and observations. We then brainstormed recommendations for improvement and refined those based on resources and potential impact of the recommendation. 
Record your findings in some sort of report or presentation that may be shared with stakeholders and referenced later. Develop and follow up on recommendations and commit to fixing what you can. Again, this is a great point in the process to involve participants. What stands out for them as they review transcripts or the grouped sticky notes and overarching themes? Which recommendations seem especially important to them? Regardless of how you follow up, be sure to do something with the data. You'll recall that participatory research is a research-to-action method. Wherever possible, act on what you learn in order to improve services, spaces, programs, and experiences for users. You might also consider co-authoring or co-presenting with your participants. And you should certainly keep your participants and community members informed of the changes you make in response to what you learned. Celebrate milestones, including any publications or presentations and improvements you make based on your collaborative study. Thank you for viewing this presentation on Conducting Participatory Research. Please use the link provided to complete a feedback form on the usefulness of this information for your purposes.