Hi everyone, and welcome to the web survey design workshop conducted by Kevin Fommelant. I am Sue Boffman from ARL, and I'm very pleased to welcome our project team members from the Research Library Impact Framework Initiative and colleagues from our project libraries. It's really great that you could join us today. If you have attended some of our earlier workshops, you've heard these comments about the framework initiative, so bear with me while I share them again. Our initiative has been underway for over a year now, and our teams have been very busy exploring a series of questions. These questions relate to space; diversity, equity, and inclusion; special collections; and researcher productivity. These are the issues our teams are studying, and our goal for this initiative is to help us understand how to address some of the most pressing questions you all are dealing with regarding value and impact. This initiative is funded by an IMLS grant, and we're very appreciative of that. As part of the initiative, our two consultants, Kevin Fommelant and Margaret Roller, have developed a series of workshops on qualitative and quantitative research methods. Our goal for these workshops is to help library colleagues develop their skills and expertise in conducting research in their libraries, and today's workshop is part of that series. Kevin was with us on Tuesday; we are recording both that session and today's, and we'll share both recordings, Kevin's slides, and other documentation with everyone sometime next week. You are very welcome to share these materials with colleagues who couldn't be with us today; please do that, we hope you will. So Kevin, without any further ado, let me turn the virtual podium over to you, and thank you for being with us again today.

Thanks, Sue. I just wanted to thank everyone for joining me today. The presentation should run about an hour, with a brief interactive survey exercise that you'll get to play with online. Let me just share my screen here. Today we'll be talking about the principles of web survey design, and particularly the survey structures that promote consistency in data collection. In my own survey research work, I do a lot of surveys for healthcare research, both for compliance and for internal quality purposes, but a lot of what I do is generalizable to any sort of sociological survey. I work with some small custom surveys, you know, five to a thousand people, and some larger ones where I'm asking the opinions of millions of members of health plans. Some of the things I think about before I even approach collecting data from members or participants are: what's my eligible population? How can I build a survey that promotes the same experience for everyone? What sort of language can I use so that participants have the same experience across the entire sample population? What can I do to reduce respondent burden? That can mean the length of the survey, so the number of questions, but a survey can also seem easier through some visual shortcuts, and through wording and consistency in the language you use, which make for an easier experience filling out the survey. And since our goal is to get as many survey completes and as much feedback as we can from our sample, that's one of the things we should think about when we're designing a web survey.
Web surveys have a lot of advantages, and I'll get into them in this presentation. But of course, it's not the only mode in which we survey respondents; there are telephone surveys, written surveys, fax surveys. This is simply one mode that has become increasingly common, and I'm sure for younger respondents it's the main way they interact with surveys. So I think it's important to learn how to use the tools on your web survey platform to interact appropriately with people from all across the age spectrum, and also across language abilities, which is something I encounter as well. For many of the participants in my survey research, English is not their first language, so when I'm designing my survey, I'm using consistent language that can be read by everyone, from native speakers to those who are not. And before I launch into some of the more technical aspects of web survey design, please feel free to interrupt me with any questions; you can do so verbally, or you can use the chat.

For survey research, we start from the big questions. Why are we conducting this research? What kind of feedback are we interested in getting from our participants? How do we collect reliable information that we can use? And what do we want to learn from our survey results? I particularly like to focus on what kind of feedback we're interested in, because it really helps me get into designing the structure of the questions that go into the survey. If we want feedback about a specific service, we tailor our questions to ask how frequently the respondent interacted with the service and what the quality of that interaction was. Part of the way we write questions is to limit the experiences the respondent is thinking about: we don't want to be asking for feedback about another service, something we don't administer or don't have control over. Our job is to prompt the respondent to think only about the services we're interested in learning about, and then to prime them to go back into that moment and think about what their experience was, and not allow other experiences to interfere with our data collection. Another big one is reliable information, which I'll get into in more detail in the presentation. And then, why do we conduct survey research? So that we can use it for quality improvement, and I think that's applicable to most of you. Maybe you have some compliance surveys as well; I think everyone does. But a lot of the surveys I do are yearly surveys that are either measuring something that's changed, say a big initiative that's been implemented, or serving as a year-on-year improvement measure to evaluate the direction of programs and services. Many research projects are about change over time, not about reinventing the wheel each time. So if you develop a survey for your project here, you can reapply it, maybe making some changes based on your findings to ask even better questions, follow-up questions, more detailed questions for next year, so you can get at the experiences your respondents are having with your services.

As for how this presentation flows: I'll talk a little bit about survey theory.
I'll get into concept mapping, which is what I do before I even think about putting together questions; a little bit about survey validity and reliability, to ensure that the information we're getting is actionable and usable in the analysis portion of our work. This is particularly important with respect to eligible population, both at the survey-wide level and question by question, and that's where we get into gate-and-branch questions, which are the questions that guide eligibility within the survey. Then answer choices, where I'll talk about the scales we can use to prompt respondents to give consistent information; survey flow, which reduces survey burden for the respondent and makes it more likely that they complete the survey in full; and visual presentation, which also reduces respondent burden. We'll go through the brief exercise I mentioned, and then we'll think a little bit about data collection before we get to some conclusions.

When I'm thinking about survey design, and this follows the visual representation of the agenda I just took you through, I'm mapping concepts and creating questions along the way, and at the same time thinking about answer choice scales. But you can see that data collection is all the way down there in the bottom right. I'm not yet thinking about how much data I'm going to get, or necessarily what I'm going to do with that data; I'm thinking strictly about designing the best survey product that I can. The last thing I do is release the survey. So there are lots of small subtopics to think about before we even release the survey into the field, because once we do, any change we need to make reduces the reliability and the validity of the survey.

OK, so: survey theory and managing behavior. I know we've all taken a lot of surveys; I think I read a statistic that the average person takes about 20 surveys a year, and that's only a small portion of the number of surveys they're prompted to take. So we all have an intuitive grasp of what a good survey is, because you remember a survey that you completed quickly, that was easy, where you didn't get frustrated in the middle; and then other surveys where you really wanted to participate, you had feedback you wanted to give, but maybe the right questions weren't asked or there was some technical glitch and you couldn't finish. Unfortunately, that means the researcher didn't get the feedback that he or she needed to create a quality improvement plan for their services. Some of the problems I came up with when putting together this slide are here; I'm always interested in hearing about issues you've encountered, so feel free to add to your own list. One thing that's often overlooked: you know so much about your services and your work, but your survey is just one of a lot of things your respondent is thinking about. They'll get a survey by email and click on the link, but they're also eating lunch, they're also on a phone call, they're doing other things, so it's not necessarily at the top of their mind. A lot of what survey design does is prompt that person to think about the experience and take them back to that time. Now, of course, sometimes they will just forget, and that's OK.
That's why we have our gate questions, to remove them from eligibility for the rest of the survey. Respondents will always interpret questions in different ways. There are ways of managing that, but we have to understand that although we have technical methods to reduce the different ways people can interpret questions, there is going to be some variability. A survey I take this week, I may answer slightly differently next week, when I remember an experience again or think about the language in a different way, just because I wasn't thinking about it as clearly the first time. So it's important to realize that the methods we're employing here to get the most valid and reliable data are best practices, not perfect solutions to the problem. Many of the things I've talked with you about individually regarding your projects are problems encountered by every researcher, and there are methods to mitigate them, but at the end of the day we're implementing best practices and getting the most valid feedback that we can.

Another thing that happens with respondents is that, depending on where you put a question in the survey, you might get a different answer. That's partly because people tend to answer more positively at the beginning of a survey compared to the end, but it's also because they've been primed to think about the situation they answered in the previous question, and that's part of why you move the respondent's attention from one concept to another within the survey. There are some techniques here; I actually don't cover this much in this presentation because I don't use it that much, but on some survey platforms you can randomize question order and answer order. Typically, I like to run the survey first without any randomization, then do an analysis to find out whether there is a recency effect or a larger problem with question ordering where I'm getting some obvious statistical bias. If I'm able to determine that there is, I can use some randomization in the next iteration of the survey release, in the next year. That's something I cover not in this presentation but in the third presentation, on quantitative analysis, so I can answer some questions about it at the end if you like.

And then a few more problems, and this is where web surveys really come into play as a helpful technique. Respondents will skip a question if they're allowed to: if they don't want to answer it, if they're distracted, if they hit the wrong button, they'll go forward and skip the question. But you can program a web survey to stop that from happening. Now, you may also want to allow them to skip a question; there are circumstances where that is actually best practice, and I'll cover that in a bit as well. Respondents will also choose contradictory answers. If you give them the opportunity to answer similar questions about the same concept, even ten minutes apart, you might get a slightly different answer than the first time you asked. So the best practice here is really to keep everything about one concept together.
That means keeping all of your questions related to the same concept in the same part of the survey, so you're not tempting the respondent to give you a vastly different answer. Respondents will answer questions while thinking about a different service; I'll talk about expository text later in this presentation, which is a helpful tactic to reduce that phenomenon. Respondents will fail to complete the survey, and they will provide feedback that is not necessarily actionable. Of course we want feedback from respondents who have interacted with our services, but at the same time we want to limit that feedback to what is useful for quality improvement purposes. If we're prompting respondents to comment on a service we can't change, or inviting suggestions we can't act on, that's not the best use of our survey space, and there are other techniques we can use to evaluate more specific requests; I'll get into that when I talk about write-in questions.

To summarize what I mean by survey theory: it's about generating a predictable survey experience for everyone. Once you do that, you've really already answered the questions about validity and reliability. If you're already thinking about how to give survey respondents the same experience, you're off to a good start before you even consider those two concepts.

For survey validity, we're asking: what does the survey measure, and does it measure what we intend it to measure? On the slide, I have an example. The first question is: "How often did you find it useful to visit the library services desk? Never, sometimes, usually, always." The second question is: "How often did you find the assistance that you received at the library desk to be useful?" You may think they are similar questions; they both address a similar service. But the second question is much more specific: it asks specifically about the assistance the respondent got at the service desk, whereas the first is a little more general. How often did you find it useful to visit? If you're thinking about a student who maybe doesn't know much about the services desk, they might think that means picking up a brochure or reading a bulletin board, which maybe isn't what you're interested in; you want to know about the interaction itself. That's why question two gives you a more valid response. Specificity in your questions will really help improve survey validity.

For survey reliability, we're asking whether the survey would produce the same results if taken a second time, and whether respondents are able to interpret and answer the same questions in the same way. The first example is a reliable question: "In the past month, how many times did you visit the Petit Library?" I put this here because I often see questions where the time frame isn't limited, and if a respondent sees that, they may answer about a visit to the library that took place two years ago, or maybe yesterday.
So if you've had an initiative, or you're really interested in something that happened in the past semester, it's good to put that time frame in the question. That improves reliability, because you're getting respondents to think about a similar point in time. Then, for number four, the less reliable question: "On a scale of one to five, how useful did you find the weekly seminar series?" with a one-to-five scale. The issue here is that we're not sure what one means and what five means. This is a common confusion I see in surveys I get by text message: I don't know whether one is the best or five is the best. You can fix that by putting the description of the scale in the question, and that way you'll have a reliable interpretation across participants.

Before I get into more technical detail, I want to talk about some survey-wide issues that everyone in the industry is paying attention to, and I hear from some of you that you're facing similar challenges in your work. I put this in here to show you that you're in good company: declining response rates are a phenomenon that everyone in survey research is dealing with. If you don't get 100 percent participation in your survey, that's not at all surprising; depending on the survey, I get between a 5 percent and a 40 percent response rate. The main variable that affects survey participation is actually age: the lowest-participating group is those under 30, and the highest participation rates come from the over-65s. I think some of you have seen this in your own survey work, where students are the least participatory and professors or other professionals are more likely to fill out your surveys. If that happens with your survey, it doesn't mean you've done anything wrong or constructed your survey incorrectly; the phenomenon is global. One of the positives in your work is that the student population may be larger, so fewer responses there is not as big a problem as it would be with a professor group, where there's already a small number of people but a higher response rate.

Response rates have been declining since at least the mid-1990s, and probably since survey research began, and this has consequences for the interpretability of the data we receive; this is called non-response bias. If there are particular subgroups, and age is one of them, that are less likely to respond, then the data we receive is less valid, since we'll have an over-representation of older respondents. Really, any subgroup that happens to be less responsive to surveys will give us less feedback, and we won't be able to incorporate their opinions into our quality improvement research. So validity and reliability depend on a representative sample, and some survey researchers will use techniques like weighting or oversampling to account for non-response bias.
It's always up for debate in the survey research community which technique to use, and whether it's even an improvement over standard sampling procedures for dealing with non-response bias. For weighting: say you get 300 survey responses, but you don't get that many from Latino respondents, fewer than would be predicted by their portion of the population. What you could do is extrapolate from that tiny sub-sample of Latino respondents and weight it up to their actual proportion of the population. Some survey researchers do incorporate this in their work, but I think I'm seeing less and less of it, because you're typically extrapolating from a very small group, and the error term in that small sub-population is very large; by increasing its share of the survey results, you're actually amplifying that error.

I think a more popular technique, when you're interested in getting more information about a sub-population, is to oversample it. Many of you, I think, are surveying your entire population, so maybe that's not something you can do, but you can always put in more effort, or do more marketing, to get responses from particular sub-populations. One of the obvious things we do in survey research when we have too few Latino respondents is to produce the survey in Spanish and to encourage telephone use, since we tend to get more telephone responses when spoken Spanish is used. If we oversample a particular subgroup that way, we've avoided the problem of the very large error term, and we can incorporate that group into our overall survey results. The example here is from a hospital services research survey that shows a declining response rate across years. It actually isn't too bad; I've seen drops of 15 percent or more in some government surveys.

There's some public knowledge about times when non-response bias has gone wrong in survey and polling research. When I was starting out in coursework about surveys, the example that was always used was the 1936 re-election campaign of FDR and the Literary Digest survey, a paper survey and one of the first to measure presidential election preference before the election was actually held. They sent out paper surveys to all of their members, and they projected that Alf Landon would defeat FDR. As it turned out, that was not the case: FDR won in a landslide, as many of us know. The reason is that their members were not at all representative of the general population; they tended to be wealthier, and not a diverse group at all. That non-response bias, caused by their lack of planning to get a diverse sample representative of the population, caused this large error. And something I only found out recently: I'd never heard of Literary Digest outside of this story, and that's because they went bankrupt shortly after releasing the poll.
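If it helps to see the weighting mechanics spelled out, here's a minimal sketch in Python, assuming you've exported responses into a pandas DataFrame; the subgroup labels, population shares, and satisfaction figures are all made up for illustration, not real data.

    import pandas as pd

    # Hypothetical export: 300 responses, only 15 from the
    # under-represented subgroup.
    responses = pd.DataFrame({
        "group": ["Latino"] * 15 + ["Other"] * 285,
        "satisfied": [1] * 10 + [0] * 5 + [1] * 200 + [0] * 85,
    })

    # Known (or estimated) population proportions for each subgroup.
    population_share = {"Latino": 0.25, "Other": 0.75}
    sample_share = responses["group"].value_counts(normalize=True)

    # Weight = population share / sample share, so the under-represented
    # subgroup counts for more in the weighted result.
    responses["weight"] = responses["group"].map(
        lambda g: population_share[g] / sample_share[g]
    )

    unweighted = responses["satisfied"].mean()
    weighted = ((responses["satisfied"] * responses["weight"]).sum()
                / responses["weight"].sum())
    print(f"unweighted: {unweighted:.3f}  weighted: {weighted:.3f}")

Notice that the weighted estimate leans heavily on a cell of only 15 people; that is exactly the large-error-term problem I just described, and part of why I lean toward oversampling instead.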
There are more recent examples too: the 2016 election, famously, and the polls leading up to Truman's election. It's a common problem that never goes away; in survey research we just have our techniques to minimize it. In your case, typically, you're not doing polling research, so you're not dealing with very small margins; you're dealing with larger statistical differences that you can use for quality improvement. So you get a break there.

On modes of survey research: I mentioned that you can also do survey research by fax, by mail, by telephone, and those all have their advantages. But for a lot of you, web surveys are definitely the best fit for your work. One of the nice advantages of web surveys is that you can send a link by SMS. Standard emailing is actually getting to be an older method, since many younger respondents in my research projects prefer to answer surveys through an app, since they don't check their email as much, or to respond to a survey through a text message. So one advantage is flexibility: you can use email to reach older respondents, and text messages or apps to reach younger ones. The biggest advantage for me is that web surveys can enforce skip logic, which is not something you can do with a paper survey. On paper, respondents will answer questions they aren't eligible for, and you'll have to do more work on the back end to remove those answers from your survey analysis. Another nice feature, since no one wants to do large-scale data entry for survey work, is that web survey platforms have data export tools. The only issue is that the exports aren't necessarily compatible with your data analysis workflow, and that's something I'll talk about in the next session, on Tableau, where I show you how to take data from the platform and put it into a form that's usable for visualization.

Some of the disadvantages: web surveys are more likely to produce duplicates. A respondent may get the link in an email, click on it, do about half the survey, get distracted, and then come back and redo the survey from the beginning. I have seen some improvement here: if they're using an up-to-date browser, it will remember their location in the survey, and a lot of survey platforms are doing a better job of prompting the respondent to continue from the question where they left off. The problem is that this is somewhat reliant on the respondent having up-to-date software; a lot of the time, when I'm working with a respondent who's having a technical issue with a web survey, it has to do with an old browser or some incompatible software. One thing that's not as obvious is that you may get more negative responses in a web survey than in a telephone interview, or even a paper survey. The web survey is more removed from human interaction, so people are more likely to be honest than when they're answering questions about services on the telephone.
Depending on your perspective, that can be useful to you. If you're interested in actionable feedback, you might prefer a more direct answer from your respondents, one you can turn into changes in your quality improvement program, over blanket positive responses. One thing that comes up more and more with web surveys is that some respondents, a subpopulation, and this ties back to non-response bias, are less likely to reveal information online. With all the publicity about phishing attempts and some fear of hackers, a subpopulation of respondents isn't necessarily inclined to reveal information on a web survey, even about services they've encountered. I think a lot of you are at an advantage here: if you're emailing people within your institution from a well-known email address or a listserv, this is less likely to be a concern for you. But if you're moving to text messages or other newer methods of contacting respondents, it becomes more of a problem.

To get into survey development itself: concept mapping is a technique I use to think about survey flow. I'm thinking here about how to divide services into their minimum measurable components; I'm operationalizing them. From the schematic on the slide you can see library visits: maybe I want to think about how frequently they're visiting, or what days of the week, if I'm interested in volume measures, or about different experiences within a certain time period. For library service desk visits, for example, you can think about a lot of things: the services the respondent discussed, the quality of the interaction, unanswered questions they may have after the visit, or what they would expect from a future visit. This helps to avoid the temptation to ask the respondent a general question like "What did you think about your experience at the service desk?" with a write-in response, or the temptation to just ask for a rating from 1 to 10 on their interaction. On the other end of the research project you'll get a number that says the average was seven out of ten, ten being their warmest feeling toward the interaction with the service, but that doesn't tell you exactly which part of the service they really liked, and which things could use some improvement. That's why we use concept mapping: to tease out which components of these services drove the overall experience.

I mentioned this a bit at the beginning: one of the things we think about when designing a survey is the population that is actually eligible to take it. The standard move is to limit the survey to people who have visited the library before you prompt them to answer questions about it. Typically, you'll ask whether they visited, and for those who haven't, you'll skip ahead to basically the end of the survey to collect demographic information about the group of people who haven't visited the library.
That's so you can know what sorts of people are not visiting your library, and whether something is preventing that group from visiting, without prompting them to answer questions they wouldn't be qualified to answer, because they haven't actually visited. This can vary by survey. The standard gate is whether they've visited the library, but you might also limit the survey to students or to professors, depending on which subpopulation you're most interested in, or if you've already tailored the survey to one of those groups. Or maybe you're doing a follow-up to a more general survey and you're interested in learning about a specific service from a subpopulation you know has already interacted with it, so you're limiting the population for the follow-up survey using information you got from the previous one. You could also limit the population to those who haven't interacted with your library, to figure out why they haven't.

To enforce this eligibility, we use gate questions. These are probably the most important questions in your survey, even though you don't look at them in much detail in the analysis portion, because they're used to screen out participants who aren't eligible to comment on a service. We have our initial gate question, have they visited the library, but another gate question later in the survey might be: how often was your experience with the library services desk positive? Maybe that's not a gate question you would use in your own survey, but if you're interested in the opinions of those who never had a positive experience, you can branch that group into a series of questions about the negative experience, and if they answer sometimes, usually, or always, you can branch them into another series of questions to get their opinion about services in a different way. Branch questions help you manage subpopulations of eligibility within the survey itself. This is about managing survey flow, but I think it's also helpful for the survey design team, because it forces you to divide the survey up conceptually and create breakpoints within the survey. Here's an example of a branch question from the exercise I've put together. I'm asking which software they plan to use when they visit the library. If they select "none of the above," it acts as both a gate and a branch question, because it skips the following question, which asks more about the software they plan to use in the library. And on the branch side, if they pick, say, Tableau or R, you could theoretically ask them questions specifically about those pieces of software and why they want to use them.
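To make the gate-and-branch structure concrete, here's a toy sketch in Python of what the routing amounts to: a table from (question, answer) to the next question. The question IDs and wording are my own illustration; on a real platform like SurveyMonkey you'd configure this through its skip-logic settings rather than write code.

    # Hypothetical routing table: each question lists where particular
    # answers branch to, plus a default next question.
    SURVEY = {
        "visited": {"branch": {"No": "demographics"},
                    "default": "software"},
        "software": {"branch": {"None of the above": "services_desk"},
                     "default": "software_why"},
        "software_why": {"branch": {}, "default": "services_desk"},
        "services_desk": {"branch": {}, "default": "demographics"},
        "demographics": {"branch": {}, "default": None},
    }

    def next_question(current, answer):
        """Return the next question ID for an answer, honoring branches."""
        node = SURVEY[current]
        return node["branch"].get(answer, node["default"])

    # The gate sends ineligible respondents straight to demographics:
    assert next_question("visited", "No") == "demographics"
    # "None of the above" skips the follow-up about the software:
    assert next_question("software", "None of the above") == "services_desk"

Walking every (question, answer) pair through a table like this is essentially the skip-logic quality-control pass I'll demonstrate in the exercise later.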
One underutilized technique in survey research, and one I think is actually important, is expository text. There's no rule that the only text in your survey has to be in the question itself. Expository text sits outside the question text, often at the beginning of a section that leads into questions about a specific service, and it typically follows the gate question. Its job is to draw the respondent's attention to a specific service, to remind them that they are only to answer about this specific service or this specific experience. Or you can use expository text to say that in the following section, the scale will be one to five, with five being most satisfied. Those are the two main uses, I think. It also helps you avoid repetitive text: if you say "in this section" and then lay the questions out on the same web survey page, the respondent will orient to that and be able to answer consistently across those questions. That way you have a more reliable and a more valid survey.

On to answer choices, and this comes up a lot when we talk about creating questions, because there are so many answer choice scales out there. For gate questions, you're typically working with a yes/no question or a frequency scale. The frequency scale I typically use is never/sometimes/usually/always; other people will use a count scale like "zero times, one time, two times, three times or more." I think both are valid, but the key is to use one consistently within the survey, because research has shown that if you use more than one type of scale within a survey, you're more likely to get contradictory responses, or answers that are difficult to interpret. I don't mean you have to use the exact same scale everywhere; if you have a frequency question and an intensity question, you're going to need different scales. I just mean that within a type of answer choice, you should use the same scale. So in a single survey, either stick to never/sometimes/usually/always or stick to the counting method.

Another underutilized tactic, and I'll get to this in the quantitative analysis portion, the third lecture in the series, is that it pays to think about answer choices in detail, because we can use cross-tabs analysis to tease out the preferences of the people who answered a certain way. This relates to the branching questions I talked about earlier. Cross-tabs analysis is when you compare the subpopulation that answered a certain way on one question with their responses on another question: for the people who answered "never" on one question, what were their responses on a question about another library service? There may be something specific about people with that opinion that we can learn from the survey.

For other types of scales: I prefer scales with less fine detail, so three-, four-, or five-point scales. With more than five points, respondents have difficulty understanding, or at least conceptually dividing up in their minds, very fine scales, and five tends to be the limit. The finest scale I use is the strongly-disagree-to-strongly-agree scale you see on the slide. Of course, there's an exception: a lot of surveys will use a one-to-ten scale, and because people have an intuitive grasp of what a numeric scale is, you're likely to get a pretty reliable answer.
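As a preview of the cross-tabs idea, here's a minimal pandas sketch; the column names and answers are invented, but the shape is what you'd get from an exported survey.

    import pandas as pd

    scale = ["Never", "Sometimes", "Usually", "Always"]
    responses = pd.DataFrame({
        "visited_desk": ["Never", "Always", "Sometimes",
                         "Always", "Never", "Usually"],
        "found_helpful": ["Never", "Always", "Usually",
                          "Always", "Never", "Usually"],
    })

    # Row-normalized cross-tab: of the people who gave each answer on
    # the first question, what share gave each answer on the second?
    xtab = pd.crosstab(responses["visited_desk"],
                       responses["found_helpful"],
                       normalize="index")
    print(xtab.reindex(index=scale, columns=scale, fill_value=0))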
There's also the option to let the respondent rank choices, which I think I've seen in some of your surveys, and I think that works really well. In this type of question, you might list all the services in your library and ask the respondent to order them by importance from their perspective. That way you're actually shortening the survey itself and getting an intensity measure at the same time, so I think it can be a valuable tool for getting at preferences without asking a lot of repetitive questions. I saw that and thought it was a good tactic.

Another answer choice I like to use is the "not applicable," "no experience," or "no opinion" option. That gives the respondent an out, and this is a subjective call: if you're not sure the respondent will have a strong opinion about what you're asking, you can give them the option to say not applicable or no opinion, and that way you're not forcing the respondent to give you a response they're not qualified to make.

Then there's the write-in option, which you can do in two different ways, on its own or within the question itself, and I'll get into how in a second. With write-in questions, the first thing I ask myself is whether I really need one. Write-ins are a valuable way to get qualitative information, but they can also lead to counterproductive feedback that you don't find actionable, since you have less control and you're not prompting the respondent to think about a specific service; they may end up answering about a service you don't offer, or just providing general feedback that you can't use. One way around this is to make the write-in a catch-all, the last option in a question where you've offered all the choices you think should be available; if the respondent has a strong opinion, they can still add their own write-in. That's a good compromise. Of course, many times it is absolutely necessary to get qualitative feedback, and in that case there are ways of coding those answers so you can pull out the themes you need and use them for quality improvement purposes. So I definitely don't want to discourage write-ins; I just think they need to be thought out before you add them to your survey, because they can derail your survey and lead to feedback you can't use.

For coding write-in answers, it's helpful to think about the themes that may come out before you actually release the survey. Say it's about a seminar series, where maybe it's difficult to get feedback with specific questions and you just want some general feedback. You might decide that frequency is one of the themes you're interested in learning about, and that the space where the seminar series is held is another theme. If you have those themes in advance, that will help when you get your responses, both to see whether your question landed, whether respondents are thinking the same way you are, and whether you need to retool for the next survey. The other way we think about write-ins is sentiment analysis and the intensity of the response: whether the response is negative or positive, and how negative or positive. Typically, when I'm coding these answers, it's with two blind survey coders who haven't designed the survey but who know the themes; they go through each of the write-in responses, code the theme and the sentiment, and then we see what kind of concordance we get from that coding.
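If you want to put a number on that concordance, one common statistic, and this is my suggestion rather than anything on the slides, is Cohen's kappa. A minimal sketch, assuming scikit-learn and made-up theme codes from two coders:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical theme codes assigned to the same six write-in
    # answers by two blind coders.
    coder_a = ["frequency", "space", "frequency", "other", "space", "frequency"]
    coder_b = ["frequency", "space", "other", "other", "space", "frequency"]

    # 1.0 means perfect agreement; 0 means no better than chance.
    print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")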
You can also just use a descriptive analysis for responses that don't fit neatly into a theme. From those answers you can still do further research into some portion of the services you offer that you hadn't thought about during survey development, or you can retool your questions for the next survey based on that feedback.

One question type I like to use to reduce survey burden is the matrix, or side-by-side, question. This is one of the visual methods you can use to guide flow through your web survey. In certain cases it can't be used, for compliance reasons, but when you can use it, I think it's a valuable tool. It's for when you're asking the same question about a variety of services and using the same scale for all of them. If you're not able to use this particular format, there is a backup that also reduces the amount of space taken up: use separate questions for each of the services you're interested in, and put expository text below the question itself explaining that the scale is the same for each. I think both of those are good options.

I alluded to this earlier, but on answer choice ordering: in my surveys I actually put the most negative choice first, since the quality improvement specialists I work with are interested in feedback they can act on, and if they do get more negative responses, they have a better case to change something in their existing processes, given that there is a recency bias in choice order. But that's definitely a stylistic, subjective choice, so I don't advocate changing the ordering within the survey, or having a preconceived idea about how the ordering should be, except that it should be consistent across the survey, of course.
If choices are listed one way in one question, you shouldn't flip them for the next question; that's the only advice we give for the initial survey. But if you find an effect where the first answer choice is being picked disproportionately, that's something you can work on for your next survey by thinking about randomizing your choices.

And then, of course, my favorite: enforcing skip patterns in web surveys, which I alluded to earlier. The advantage is that it reduces the time needed and limits question eligibility to those who are actually qualified to answer. You can do this in a number of ways: you can literally force the respondent to go to the question after the one they're not eligible for, or you can add an N/A choice, which has the same effect as a skip pattern by giving them an out rather than forcing them to answer the question.

Now, did I get a question that I didn't see? Let me just check.

You did; this is Claire, I just put it in the chat since I'm in a public area. On the previous slide, for choice ordering, the middle selection was "neither agree nor disagree." If you really wanted to understand whether people had used a service or not, should you put in something like a "does not apply" to make sure you pull that out? Or would it not really be worthwhile, because people might not understand what that is, so you're not going to get that information that way?

Yeah, that's a good question. I found out recently that some people don't necessarily have access to skip logic on their survey platform, and in that case I think you do want to add the not applicable option to your questions. But if you're enforcing skip logic, you probably don't need the N/A choice as much. It also depends on your population: if your population is wary of answering surveys, or nervous about providing answers, you might want to give them the out with the N/A choice. On the other hand, if you know the survey population knows about the service and you want them to answer the question, you might leave off the N/A. So I don't think there's one exact way to do it; it depends on the tools available to you, skip logic is my preference when you have it, but also on your own intuitive knowledge of the people you're surveying.

Would you ever actually put in the option of "I have never used this service," or not really? You absolutely can, yes. I don't think I got into this, but for some questions, demographic data for example, you won't necessarily want to force that person to answer, so you can offer the N/A option. Or if you haven't already established in the survey that they've encountered that service, you can add the N/A choice if you don't want to be repetitive with the number of questions you're asking. Thanks. Sure, thanks.

So, on to the survey exercise; let me get the link. For this, I created a short survey that you can fill out, where I've designed in an error in the skip logic, or at least a place in the skip logic that doesn't follow how you would think the survey should go.
To be clear, the eligible population for this survey is those who have visited the library in the past and intend to in the next semester, and that'll be clear once you see the survey itself. OK, I'm going to share the SurveyMonkey link through the chat. I'm sharing my screen with the PDF structure of the survey I created; this is what it should look like when you encounter it on SurveyMonkey. The PDF doesn't show the skip logic, since that's what I'd like you to troubleshoot. When I'm doing quality control for my own surveys, I actually go through each portion of the skip logic to find out whether I've correctly restricted the population for each question. I'll give you a minute to do that; in the meantime, if you have any questions, feel free to put them in the chat. I always like hearing about survey experiences, so if you've filled out any recent surveys from a company or a service you've interacted with, I'd like to hear about it.

OK, I'll go ahead and do it along with you. Here I was restricting the survey to those who visited in the fall semester, so answering no should go to the demographic question, and it does. If they visited at all, it should go to question 2: will you visit the library this year? What if I say no? I hope it goes to the demographic question at the end of the survey, so that I have some information about those who didn't visit the library. Change it to yes, and I should get to the software question. I'm interested in lots of software, so let's see where this goes; and then I'm asked why. I did want to point out that, at least in SurveyMonkey, there's an asterisk next to the questions where I'm forcing the respondent to answer. For question 4, I actually didn't enforce that choice, because I didn't want to force them to answer why they're using the software; it's possible they don't want to say, maybe they're using it for personal use, and that's typically allowed. It's an example of using your intuition when you're building your survey.

Now let's try another choice: "I don't plan to use any software." I should skip the question about the purpose for using the software, and it does, it goes right to library services. And let's try the "other" option. This is actually, I don't want to give it away before you complete the survey, but this is where the error is. This is a common problem with surveys: people forget about the "other" option. I do want to know about the software the respondent entered under the "other" choice in question 3, but I've designed the logic, unfortunately, to skip over that question and go to the library services desk. That's why we troubleshoot our skip logic. It's really the core of the survey; the two most important things are question design and skip logic, and the rest should follow. For anyone who answers something here, I do enforce the choice, since I want them to answer this question, but I give them the out of "no experience." So I'm forcing them to answer the question, but they're allowed to answer "no experience"; that's one example of adding that option in.
I used SurveyMonkey here, but one thing that's consistent across survey platforms is skip logic: once you learn it on one platform, it's easy on the others. Let me just transition back and check whether there are any questions.

So, for data collection: we've released our survey, we've checked our skip logic, and we know the survey works the way we want it to. Some tips about what we think about for data collection. We definitely want to set an end date for the survey; that can be forgotten when thinking about the protocol. We don't want to leave it open for weeks, particularly if the survey is about a recent event, say a large public event you've just held and you're getting feedback about; you might want to limit it to a week after that event so you're getting fresh feedback. Some respondents will likely fail to complete the survey and then restart it, and there are ways to deduplicate; typically you keep the attempt that is the most complete. You also want to release the survey in a consistent way, so try not to edit the questions mid-field. It's not the end of the world if you have to, and typically a web survey platform will update in real time; it just means a slight opportunity cost in reliability. Another thing to be aware of: in survey research, wherever there's a write-in option, there's almost always some language asking respondents not to enter any personal information, not to write their name or any PII. Every once in a while I'll get a respondent who enters their name or other financial information, and we have to disqualify that survey, since it removes their anonymity. I think that matters even outside my particular field of survey research: respondents should know that their answers are anonymous, so they feel free to give direct responses to your questions. And for technical issues: you may have a non-response bias problem if a certain portion of your population is uneasy with browsers or hasn't updated their software, and small technical glitches can occur across your survey fielding, but don't get too bogged down in them; with a small sample of 500 people, they're unlikely to be determinative in your survey results.
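Here's a minimal sketch of that keep-the-most-complete deduplication rule, assuming an export with a respondent ID and one column per question; the IDs and answers are illustrative.

    import pandas as pd

    # Hypothetical export: respondent r1 started the survey twice.
    raw = pd.DataFrame({
        "respondent_id": ["r1", "r1", "r2"],
        "q1": ["Yes", "Yes", "No"],
        "q2": [None, "Usually", None],
        "q3": [None, "Always", None],
    })

    question_cols = ["q1", "q2", "q3"]
    raw["answered"] = raw[question_cols].notna().sum(axis=1)

    # Sort so each respondent's most complete attempt comes first,
    # then keep that one.
    deduped = (raw.sort_values("answered", ascending=False)
                  .drop_duplicates("respondent_id")
                  .drop(columns="answered"))
    print(deduped)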
OK, so that takes us through my survey design principles. A lot of these are best practices, so if a perfectly designed survey structure doesn't get you a hundred percent of the results you want, that's just because of the varied responses people will give to your survey and the varied experiences they'll have, in their own home, at their desk, or on their tablet; you're just trying to create the most consistent environment that you can. My principles: design the survey so that you encourage completion, so people don't get stuck in the middle and get frustrated; guide respondents, and be forceful about managing flow through the survey; I think it's better to over-manage survey flow with skip logic than to under-manage it, to get the most consistent responses possible for quantitative research. Expository text is an underutilized tactic that reduces the amount of text on the page while also prompting the respondent to think only about the services you're interested in for that series of questions. And as overall guidance: start with general questions in your survey and get more specific as you go on. That way there's a logical flow that's intuitive for the respondent, and it also lets you branch some respondents off to answer questions about one specific service, and others to another part of the survey about another service.

That sums it up for web survey development. I did want to put in this slide about the next seminar topic, which is data visualization in Tableau for survey results; the first iteration of that will be on April 13th, to give you a heads-up, and then the third session, later in the spring, is the data analysis portion. So we'll be doing visualization of descriptive data in Tableau, and then more complex quantitative analysis in the presentation after that. In the Tableau session you can expect to learn about creating dashboards with multiple "vizzes," as they're called, visualizations within one larger dashboard, using descriptive statistics. One thing that comes up, because survey platforms export their data in different ways, is how to transform that data so that you can build a visualization quickly. We'll cover creating calculated fields in Tableau, so if you don't do the calculations in Excel you can do them in Tableau directly. The dashboard itself, with the flexibility of drop-down menus and the way you can limit the visualizations to demographic subgroups or any other variable in your survey, is a really powerful method for quickly sharing information with stakeholders about the populations they're interested in. And, crucially, we'll cover methods to share the visualizations and dashboards with people in your organization who don't have a Tableau subscription, which is possible: you can share your dashboards with them only having to download a piece of software, not buy anything.
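To give you a taste of the kind of transformation I mean: survey platforms usually export one row per respondent with one column per question (a wide layout), while Tableau visualizations are often easier to build from a long, one-answer-per-row layout. A minimal pandas sketch with invented column names:

    import pandas as pd

    wide = pd.DataFrame({
        "respondent_id": ["r1", "r2"],
        "desk_visits": ["Sometimes", "Always"],
        "seminar_useful": ["Usually", "Never"],
    })

    # Reshape to one row per (respondent, question, answer).
    long = wide.melt(id_vars="respondent_id",
                     var_name="question",
                     value_name="answer")
    print(long)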
So I will stop there. I think I have just a few minutes for questions, so feel free to ask any that you might have, whether about web survey best practices or anything else you might want to share.

Kevin, it's Claire again. My other question is: what do you think, or what are best practices, about reminding people to take a survey? Is it better to just send it one time? Is it better to send it at specific intervals, or to not send at intervals if you don't have the option of sending only to the people who didn't take it?

Yeah, that's a great question. When I do my work, we typically do three to five email prompts, and we find that three is really the key number to get the attention needed for the respondent to fill out the survey; there are diminishing returns after the third email wave. And the absolute best practice, when you're doing survey waves, is to eliminate those who have already responded from the next email wave. That's good institutionally, since it's sort of a reward for filling out the survey: they filled it out, so they don't get another email reminder, and that way you're more likely to get more responses, both for the survey you're fielding at that time and for future surveys as well.

OK, and then my follow-up is: when you're setting those up, or just when you initially deploy, is there a rule of thumb about what day of the week is better? Is Friday a deadly day or a good day, and the same with Monday? And when you send the reminders, do you try to hit that differently?

We actually try not to send at the same time. Since I do surveys with large populations, the expectation is that people have different schedules and won't be opening their email at the same time every day, so we'll do a weekday morning, a weekday evening, and then one or two weekend reminders. I think that's a good point: vary the ways you send reminders to your respondents.

Is there an ideal number of questions, and how many are too many?

I've found with my respondents that it's really about time: with a 10-to-15-minute limit on the survey, I won't see a drop-off in participation, I won't see a significant portion of the sample just stop answering questions. And this is where survey design comes in: the survey seems shorter if there are fewer pages to toggle through, fewer scales to think about, and fewer services asked about. So the number of questions interacts with the other features of your survey that reduce survey burden. But it's a good practice to take the survey yourself, and if it takes you more than 10 or 15 minutes to complete, you might think about reducing the number of questions.

Kevin, we have reached the end of our time with our colleagues, but I wonder if you would be willing to share your email address, in case colleagues have a few more follow-up questions and want to reach out to you. Yes, please do reach out to me if you have any further questions about the presentation or about any of the work you're doing. I think it's sent to everyone; there we go. Thank you, Kevin. When we share the recordings from the sessions, we'll also share your email address, because that will catch colleagues who weren't able to join us today. So let me say thank you to Kevin for leading this workshop on web survey design, we really appreciate it, and thanks to all of our colleagues who participated today. We look forward to seeing you at Kevin's upcoming workshops, as well as Margaret's, and we'll get the registration information out to you very soon. Thank you all. Thanks, everyone. Bye.