Welcome to Conducting Observational Research, part of the Research and Assessment Cycles Toolkit offered by the Association of Research Libraries and made possible by a grant from the U.S. Institute of Museum and Library Services. This presentation is part of a module that focuses on collecting data, evidence, input, or other information for library assessment projects. It describes observational research methods, including techniques for conducting online and in-person observations and analyzing findings from this type of study. We hope the content is useful to library practitioners seeking to conduct assessment projects. At the close of the presentation, you will find a link to a feedback form. Please let us know what elements were useful to you. We'll start by talking about what observational studies are good for and not so good for. Next, I'll go over a couple of types of observational studies and then walk through the logistics of conducting an observational study. I'll wrap up with a few pointers for analyzing findings gathered through observational studies. In general, observational studies work well for discovery and exploration. Observational studies can be used for developing a broad, holistic understanding of the problem space, broader than you can get with interviews or usability tests. They can reveal the larger context of where a system fits into a user's life, not just in the context of a user's specific workplace, but also the social elements of a user's interactions with a system. Observational studies can be used to discover different kinds of constraints a user encounters, including physical and social barriers. For instance, is there a physical barrier that makes a service or space hard to access? These studies can also be used to uncover behavior patterns you may not have seen before, as well as unmet needs. 
Observations will also give you a glimpse of natural behavior, or as natural as you can get, and likely more natural than what you see in usability studies or what you can learn through interviews, where participants might be inclined to perform or tell you what they think you want to hear. That said, it is important to remember that participants who know they are being observed may behave differently just because they are being observed, something called the Hawthorne effect. You will likely find that some of your participants behave slightly differently. For instance, they might check Facebook less or read web pages more carefully, simply because they know they are being observed. Unfortunately, there's really no way around this. A synchronous observation is often as close as you can get to witnessing natural behavior. It's also important to remember that your interpretation of participants' behavior is always mediated by you, the observer. Like other qualitative methods, observational studies are not good at producing quantitative results. They're not good at gathering evidence to validate particular decisions, and they're not good at gathering evidence for anything quickly. Observational methods can be very time-intensive and even a bit boring at times. These studies are also not good at answering specific interface questions. If you want to test an interface, it's likely better to conduct a usability study or an A/B test. There are several different kinds of observational methods. For the purposes of this module, we'll discuss two: fly-on-the-wall observations, which typically happen synchronously in physical spaces, and online observations, which happen either synchronously or asynchronously through a web recording that you review at a later point. The traditional way of doing a fly-on-the-wall study is in a physical space where users are engaging with a service or space, for instance, a new piece of furniture or equipment in a study area or a library service point. 
While it's technically possible to do a fly-on-the-wall study of an online application or interface, it can be hard to do successfully because it's difficult to observe what's happening on participants' screens unless you're very close, and sitting that close will likely disrupt participants' natural behavior. You can certainly try to walk by your participants periodically and catch glimpses of what they're doing, but that will only give you occasional momentary slices rather than a continuous stretch of activity. When studying online interfaces, you may want to design a digital observation. With this method, you'll recruit participants who are willing to record their online behavior for a period of time. You'll likely specify the length of time you're screencasting. You won't find a lot of people willing to let you observe their online activities for a whole day, for instance. It's not possible to do this kind of observation secretly, as you'll need participant consent. This, of course, means that your participants might alter their behavior slightly because they know you're observing them. Still, this kind of study can give you rich information about how users engage with library and other interfaces and systems. Note that observations are often paired with interviews. You might consider conducting an observational study and then talking to participants to get more detail and explanations of their behavior. Now, let's talk about conducting an observational study. When designing your study, you'll want to articulate your goals. For observational studies, your research question might be a little more open, but you'll still want to have questions and goals firmly in mind. For example, you might do an observational study to learn more about how undergraduates locate information when they're beginning a research paper or how graduate students conduct literature reviews as part of their dissertation research. 
Your research questions will likely evolve over time, and doing this type of study may generate additional research questions. It's important to select the style of observation, physical fly-on-the-wall or digital, that's best suited to the particular questions your study aims to answer. If you are interested in online information-seeking behavior, digital observation is probably most appropriate. If you go this route, you'll want to think about the phase of research or information seeking you're most interested in, for instance, the initial information-gathering phase, the paper-writing phase, et cetera. It often works well to request an hour-long slice of users' virtual interactions. You might also consider setting up multiple appointments with the same participant in order to observe several phases of their research process. You might decide that your research questions warrant a physical observation of users engaging with physical spaces or service points in a library. In this case, you would observe how students interact with each other and with librarians, how they interact with their environment while working, how frequently they use personal or library technology, and what kinds of unrelated activities they perform while in the library, like eating or listening to something using headphones. Note that it is possible to do this kind of study secretly and that this study requires fewer resources than one that includes multiple phases or is more participatory in nature. Observational studies are resource-intensive and somewhat intrusive for the participants, so it's often difficult to conduct this type of study with large sample sizes. Practically speaking, you might want to start with five to seven participants and then expand if needed. You won't be able to draw generalizable conclusions from a sample of five, but you'll still have really rich recordings that can generate new ideas or research questions. 
You'll also want to think about how long each session will last. An hour-long session is a reasonable baseline, and again, you can expand if needed and appropriate for your study. You'll want to base your incentive on the length or total number of sessions, providing higher incentives for a longer or multi-part observation. When recruiting participants, you'll face some of the same concerns about sampling bias as with any other study. Get the word out among as many segments of your target population as you can. You might try using word of mouth, flyers, table tents, and other methods to recruit participants. You might also consider developing a short screening survey to select your participants. In the screening survey, ask questions that are pertinent to your study. For instance, if you want to observe behavior related to online research, ask participants a little about their research project and where they are in their research. Learning this information in advance will help you select your participants and determine when to observe them. For instance, if they're only starting a project and you really want to observe the end of their research process, you'll want to hold off on screencasting until that point in their research. Of course, with physical fly-on-the-wall studies, recruitment and incentives are not necessary. You're simply observing users interacting with their environment. These studies can require a little less advance planning and overhead than an online observation. Now let's talk about some of the logistics of a synchronous online observation. Because you're recording the session, you really only need one person to moderate, though it can be beneficial to have someone else watching to help take notes or to be a second set of eyes and ears. Ideally, the moderator will be the same person who was in contact with the participant during recruitment and screening. 
This isn't absolutely critical, but it can help participants feel more at ease if they already have some familiarity with the person who's helping to conduct the study. While it's possible for others to sit in on the synchronous observation, I'd recommend restricting the number of people watching live. The participant might find it unsettling to know that several people are watching their every mouse move. If you go this route, you will certainly want to let the participant know that others are quietly observing. You might find that it's easiest for all involved to limit the number of people watching live and simply record the session for later viewing. Either way, be sure to let your participant know that you'll be recording their screencast and that others might view it later, but assure them that the recording will not be shared publicly. Regardless of whether the session is synchronous or asynchronous, you should be sure to get your participant's consent before recording them. Tell them who will see the recording and what the data will be used for. In general, it's good to restrict who can view the recording to the research team, though you may want to reserve the right to use clips in presentations or images in publications if needed. Check to see what the guidelines are at your institution for human subjects research and then be sure to follow your institution's protocols. You'll also want to choose your recording platform. I'd recommend using something that is familiar to you and your users, like Zoom, Teams, Adobe Connect, or WebEx, whatever is most widely used and supported in your community. If you don't have access to a recording service, you may simply point a camera at your computer screen to record your participant's shared screen. When conducting your observation project, I'd highly recommend that you do a pilot before you start your observation sessions. 
You'll want to be sure your screencasting and recording setup works properly and that your participants aren't confused by your instructions. If you're doing a physical fly-on-the-wall observation, I highly recommend testing your observation log or note-taking template in the actual space you plan to observe. Record findings in the log and then make adjustments to your note-taking guide to be sure it works well for your needs. If you end up with good data during your pilot observation, you can always incorporate those data into your final findings. If the observation is synchronous, be sure to provide a very short introduction to reintroduce your project and reiterate any confidentiality clauses. Since you might be catching personal interactions like social media and email, reassure participants that you won't share identifying information even if you share portions of the recording. While the physical or online observation is underway, keep in mind that you are meant to be a fly on the wall. Just watch, listen, and observe. As I mentioned earlier, you'll want a fairly structured and detailed note-taking log for a physical observation. This isn't as critical for a digital observation, since you'll be able to refer back to the recording after the session, but it still might be helpful to take notes. You never know when you're going to have a technical failure, and it's also easier to ask follow-up questions in a post-session questionnaire or another study if you jot down the things you know you want to ask about. Taking notes during the observation will also make it easier to familiarize yourself with your data and then analyze it once you're at that stage. Finally, you might consider developing a short post-session questionnaire in which you will ask follow-up questions about what happened during the synchronous recording. 
Some of your questions may be based on the session itself, or you may want to ask whether the participant felt the session was representative of their typical online behavior. When asking follow-up questions, be sure to keep them open-ended. For instance, if you're interested in how representative the session was, you could ask how the session compared to their past experiences engaging in the same activity, or ask if there were problems they encountered during this session that they don't typically encounter, or vice versa. And of course, be sure to thank the participant when you're done. Once the observation is complete, it's time to analyze your findings. Observational data is very rich, and because it's so unstructured, it's important to keep in mind that what you find depends on what you're looking for. Because of that, beware of just looking for evidence of the answer you expect to find. That may lead to an unreliable reading of the data, and you'll also miss out on other discoveries latent in your recording. When analyzing your data, start by reviewing your notes and the recording. The more acquainted you are with the data, the better. You might consider this process. During or immediately after the observation, take detailed notes about what you saw and what questions come to mind. Then take some time away from the data, at least several hours or even a few days, so you can come back to it with fresh eyes and a more objective perspective. At that point, begin to look for overarching themes and concepts in your notes or structured observational logs. Begin to define or code those themes. Look for consistencies and inconsistencies in the data and continue to make note of what you still don't know and what requires additional research or consideration. Focus on the information most relevant to your questions first. 
The online recordings have a simultaneously unstructured and narrative feel because they are recordings of real life, and it can be easy to get lost in them and not know where to focus your research efforts. Start with information pertinent to your research question, but make note of anything else that's interesting to come back to later. When going through your data, consider implications and new opportunities for services, online enhancements, or research studies. Don't worry if you view your data and feel like you don't have a clear idea of what to do with it. This frequently happens with observational studies because they are by definition unstructured. There isn't the Q&A or task format you may be familiar with from interviews or usability tests. This means that you'll need to be very proactive in thinking about design implications. Be creative. Your findings might apply to things outside of the realm you set out to observe. One of the strengths of this kind of study is that it allows for findings that may help you understand more fully something entirely different from what you had in mind when you started. For instance, in a study of undergraduates' online information-seeking habits, you may find that undergraduates frequently begin with accessible sources like online videos or Wikipedia before delving into scholarly sources. Based on this finding, you might recommend that in-person and web-based instruction sessions and help docs address how best to leverage these kinds of sources and where to find them. You should also spend time thinking about how your findings shape your understanding of the problem space as a whole. For instance, your study of undergraduates' online information-seeking behavior might yield findings on undergraduate perceptions of research as a whole and the types of search strategies that are most successful in particular contexts. 
Although such an understanding might not result in an immediate design implication, it can certainly help shape future inquiries and your approach to designing online interfaces for undergraduates. You'll likely also come up with all kinds of new research questions about the problem space you observed during your study. Once you've analyzed your data, it's a good idea to capture your findings in writing, either formally or informally. Take screenshots of the interfaces in question to illustrate your recommendations. These screenshots will also serve as a nice reminder of what your interfaces looked like before you introduced changes. It's also worthwhile to include both your design recommendations and any new understanding of the problem you're exploring in your write-up or short report. It's important to reflect on and internalize these new kinds of understanding, and taking the time to write out the more abstract findings both advances your own understanding of the problem space and makes your observations available for future reflection. What you should not do with your findings is sit on them. When your findings reveal major problems with your website or library services, it's easy to say that you'll get to it later and then never actually do anything about it. Even if it takes a long time to fix, in the long run it's much better for your users and your institution to solve problems where you can. There are certainly more than enough problems that we can't solve, so it's important to address the issues that we do have some control over. You should also be wary of following every user suggestion literally. There's a possibly apocryphal quote attributed to Henry Ford: "If I had asked people what they wanted, they would have said faster horses." Whether or not he actually said that, it's good to keep in mind that your users are not designers. 
If they say they want a link to Web of Science in the middle of the webpage because that's their favorite database, that doesn't mean you should put a link to Web of Science right in the middle of the homepage. What if your next user says they want a link to JSTOR? Instead, when your user proposes a solution, think about what the problem actually is. In this case, the user is saying that their favorite database is inconvenient to get to and they want it to be more convenient. Then it's up to you and your colleagues to determine the best solution for giving users easier access to the research tools they need. And finally, don't be too invested in finding the answer you expect. It's easy to get emotionally invested in something you think is true about your users. It could be that it's true for some users, but you might also find that the situation is much more complicated than you initially anticipated. We all have our biases, and it's hard to get around those. Studies like this one are opportunities to learn directly from our users, and working collaboratively to design and implement a study can help even out bias and guide the best design decisions. Thank you for viewing this presentation on conducting observational studies. Please use the link provided to complete a feedback form on the usefulness of this information for your purposes. Thanks again.