Good afternoon, and welcome to the Association of Research Libraries' webinar on LibQUAL+ survey administration. This is the last of three webinars that we're offering this year for libraries that are participating in LibQUAL+ this year or considering participation. A recording of the webinar will be made available, and we'll send you the link to the recording and the slides sometime in the next week. I'm Amy Yeager, Public Relations Program Officer at ARL, and I'm here with Angela Pappalardo, ARL's Program Coordinator for Events and Finance and the LibQUAL+ survey liaison.

Before we begin, there are a couple of housekeeping items to mention. We've muted all participants' lines to cut down on background noise. If you have questions, please type them in the chat box in the lower left corner of your screen, and at the end of the presentation we'll unmute participants' lines for the question and answer session.

Our goal today is to introduce you to LibQUAL+ and to review the information you will need to prepare for and administer your survey. We'll start with an introduction to LibQUAL+ and LibQUAL+ Lite, then explain how to use the LibQUAL+ online platform to administer your survey, and we'll end with a question and answer session.

So, what is LibQUAL+? LibQUAL+ is an online survey measuring library users' expectations and perceptions of service quality. It was developed in 1999 and 2000 by researchers at Texas A&M University, Colleen Cook, Bruce Thompson, and Fred Heath, together with Martha Kyrillidou here at the Association of Research Libraries. LibQUAL+ uses a gap measurement model based on SERVQUAL, an instrument created in the 1980s to assess customer perceptions of service quality in the for-profit corporate sector. Since 2000, over 1,300 libraries in 35 countries have run LibQUAL+ surveys, and more than 2.3 million responses have been collected.

Foundational LibQUAL+ research identified three dimensions of library service quality. Affect of Service measures the interpersonal dimension of library service; it includes aspects of empathy, responsiveness, assurance, and reliability. Library as Place measures how the physical environment of the library is perceived in pragmatic, utilitarian, and symbolic terms; it encompasses the idea of the library as a refuge. Information Control measures service quality in terms of content and access to information resources; it includes the scope of the content offered by a library, convenience, ease of navigation, timeliness, equipment availability, and self-reliance.

The LibQUAL+ questionnaire is composed of 22 core questions across the three dimensions: nine Affect of Service questions, eight Information Control questions, and five Library as Place questions. There are also five optional questions, which libraries can select from a pool of about 175 items; I will discuss that option a bit more later in the webinar. In addition, there are information literacy questions, such as "The library helps me distinguish between trustworthy and untrustworthy information," general satisfaction questions, questions about library usage, demographic items, and a free-text comment box.

In 2010, the Association of Research Libraries introduced a shortened version of the survey, LibQUAL+ Lite. With LibQUAL+ Lite, all of the questions on the survey are asked over the course of your survey run, but each respondent answers only a random sample of them.
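To make that per-respondent sampling concrete, here is a minimal sketch of how Lite question selection could work. This is illustrative only, not the actual LibQUAL+ implementation: the question IDs are invented, and drawing the five non-anchor items from a single combined pool is an assumption; the real protocol may allocate them per dimension differently.

```python
import random

# Hypothetical question IDs: one fixed "anchor" item per dimension, plus a
# pool of the remaining core items (9 Affect of Service, 8 Information
# Control, 5 Library as Place, per the webinar).
DIMENSIONS = {
    "Affect of Service":   {"anchor": "AS-1", "pool": [f"AS-{i}" for i in range(2, 10)]},
    "Information Control": {"anchor": "IC-1", "pool": [f"IC-{i}" for i in range(2, 9)]},
    "Library as Place":    {"anchor": "LP-1", "pool": [f"LP-{i}" for i in range(2, 6)]},
}

def lite_core_questions(rng, total=8):
    """Select the core items shown to one Lite respondent: the fixed anchor
    from each dimension, plus a random draw from the remaining core items
    to reach `total` questions."""
    anchors = [d["anchor"] for d in DIMENSIONS.values()]
    remaining = [q for d in DIMENSIONS.values() for q in d["pool"]]
    return anchors + rng.sample(remaining, total - len(anchors))

print(lite_core_questions(random.Random(0)))
# e.g. ['AS-1', 'IC-1', 'LP-1', 'IC-7', 'AS-4', ...]
```

Over many respondents, every core item gets asked, which is how Lite preserves full coverage of the instrument while shortening each individual questionnaire.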
This methodology reduces the time it takes participants to complete the survey and has been shown to increase response rates. LibQUAL+ Lite is a customization option, so it's not something you need to decide before you register for a survey; once you have registered, you can choose what percentage of your participants will receive the Lite questionnaire. Each Lite questionnaire has eight rather than 22 core questions. One question from each of the three dimensions is the same on every questionnaire, and the other questions for each dimension are randomly selected by the protocol. Each Lite questionnaire also has one of the five local questions and a sampling of information literacy and general satisfaction items, but it has all three library usage items, all of the demographic questions, and the free-text comment box.

LibQUAL+ core survey questions use a triple Likert scale: users are asked to evaluate a statement related to a distinct aspect of library service quality, for example, "Library space that inspires study and learning," and to give three ratings for that statement: minimum, desired, and perceived. The perceived rating represents the level of service that the user believes the library currently provides in this area, and the minimum and desired ratings provide context for that perception. The minimum rating represents the lowest level of service the user would find acceptable, while the desired rating represents the level of service the user personally wants, or their ideal level of service.

There are several key concepts for understanding how LibQUAL+ results are expressed. For each question, the averages of the respondents' minimum and desired scores form a zone of tolerance, which is bounded on the bottom by the minimum mean and on the top by the desired mean. From these scores, we derive a couple of additional indicators. The first is the adequacy mean, which measures how well the library is meeting users' minimum expectations in a particular area. The adequacy mean is calculated by subtracting the minimum mean from the perceived mean. A positive adequacy mean shows the degree to which the library is exceeding minimum expectations, while a negative adequacy mean indicates that the library is failing to meet minimum expectations in that area; in that case, the perceived score would fall below the zone of tolerance. We also derive what we call the superiority mean, which is calculated by subtracting the desired mean from the perceived mean. The superiority mean is usually a negative number, which indicates the library's room for improvement. If the library exceeds desired expectations in an area, the perceived score would fall above the zone of tolerance, and the superiority mean would be positive.

These concepts make a little more sense when you see them in action. In this example, the mean of the respondents' minimum scores is 3, the mean of their desired scores is 8, and the zone of tolerance is the 5-point range between those two scores. The perceived mean of 6 falls within the zone of tolerance, indicating that the library is exceeding its users' minimum expectations in this area. The adequacy mean is 3, and the superiority mean, the measure of this library's room for improvement, is negative 2.
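Here is a minimal sketch of those calculations in Python, using the worked example above. The function names are mine, not part of the LibQUAL+ platform; the formulas follow directly from the definitions in the webinar.

```python
from statistics import mean

def zone_of_tolerance(minimum_scores, desired_scores):
    """The zone of tolerance is bounded on the bottom by the minimum mean
    and on the top by the desired mean."""
    return mean(minimum_scores), mean(desired_scores)

def adequacy_mean(perceived_scores, minimum_scores):
    """Adequacy mean = perceived mean - minimum mean.
    Positive: exceeding minimum expectations; negative: failing to meet them."""
    return mean(perceived_scores) - mean(minimum_scores)

def superiority_mean(perceived_scores, desired_scores):
    """Superiority mean = perceived mean - desired mean.
    Usually negative, indicating the room for improvement."""
    return mean(perceived_scores) - mean(desired_scores)

# The worked example: minimum mean 3, desired mean 8, perceived mean 6.
minimum, desired, perceived = [3], [8], [6]
print(zone_of_tolerance(minimum, desired))  # (3, 8): a 5-point zone
print(adequacy_mean(perceived, minimum))    # 3: within the zone of tolerance
print(superiority_mean(perceived, desired)) # -2: room for improvement
```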
So now I'm going to turn the presentation over to Angela to talk about using the LibQUAL+ online platform to administer your survey.

Hi, everyone. I'm Angela Pappalardo, the Program Coordinator for Events and Finance at ARL, and I'm the LibQUAL+ survey liaison. In this next section, we're going to go over the steps for administering a survey using the online platform. When you log into your institution's account on libqual.org, you'll be taken to a survey dashboard displaying the information you need for the current stage of your survey implementation. There are four survey stages: pre-launch, which is where you'll customize your survey; monitoring survey progress; closing the survey; and post-survey tasks and results. This shows stage one.

When you click on the link to configure your survey, you'll be taken to a page with a series of tabs across the top for customizing various aspects of your survey questionnaire. If your library has previously run LibQUAL+, the choices you made in your last survey customization will be carried over to this year. You can keep these settings as they were on your previous survey, or you can change them.

The first tab is the customization tab, which covers a number of items, including the logo, the Lite percentage, and the dates you plan to run your survey. These dates are for information only and are not binding; they help us predict periods of high load on the system. If you choose to award incentive prizes, your questionnaire will include a box where respondents can optionally enter their email addresses. You can also select optional demographic items in this tab. It's important to note that while these demographic items are optional to include on your survey questionnaire, if you do decide to include them, they are required items for your respondents.

The optional questions tab is where you can select up to five additional questions for your survey from a standard list. There are currently about 175 optional items available in American English; other language translations will have a smaller subset of those items. On this screen, you will also have the option to submit your own local questions. These questions must be in the triple Likert format and will be moderated, usually within a couple of days, by the LibQUAL+ staff.

One of the more complex aspects of configuring your survey is customizing the response options for the demographic questions. LibQUAL+ has standard sets of position and discipline options, and you can customize these labels with your local terminology. Your results report will break down your findings by user group, with sections for each of the categories of undergraduate students, graduate students, faculty, staff, and library staff. If you're at a community college or health sciences library, your categories will be a little different. On the questionnaire, these categories are further broken down into subgroups, which are the response options that your users can select. Here's a view of how this item looks on the survey questionnaire: only the user subgroup options, not the parent categories, can be selected. And this is how it looks on your configuration screen. The left column is where you enter your customizable text, and the right column is where you select the reporting value. A few cautions: only the user subgroup options can be selected by participants, and you must have at least one user subgroup option for each parent category. You'll also need to provide population data for each user subgroup.
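As an illustration of that label-to-reporting-value mapping, here is a minimal sketch. The data structure, labels, and reporting values are all invented for illustration; they are not the platform's actual configuration format.

```python
# Hypothetical mapping of a library's local position labels (the editable
# left column) to standard reporting values (the right column). Parent
# categories group the user subgroups; only subgroups are selectable.
position_options = {
    "Undergraduate": [                       # parent category (not selectable)
        ("First-year student", "Undergraduate: First year"),
        ("Sophomore",          "Undergraduate: Second year"),
    ],
    "Graduate": [
        ("Master's student",   "Graduate: Masters"),
        ("Doctoral candidate", "Graduate: PhD"),
    ],
}

def validate(options):
    """Enforce the caution above: every parent category must have at
    least one selectable user subgroup."""
    for parent, subgroups in options.items():
        if not subgroups:
            raise ValueError(f"Parent category {parent!r} has no user subgroups")

validate(position_options)
```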
On the branch library options tab, you can enter the response options for the question "The library you use most often." This question is optional, so if your institution has only one library, you should leave this item blank.

As with the position question, you can use your local terminology to map to the standard list of disciplines, and you'll need to provide population data for each discipline. Too many choices present challenges to users, so we recommend no more than 16 disciplines if possible. In your results notebook, there will be a chart showing the number of respondents from each discipline. In this example, as with the position options, you enter custom text in the left column, which must map to a reporting value in the right column.

When you have finished configuring your survey, it's time to preview your questionnaire and launch. The preview survey URL does not collect data, but it provides an opportunity for you to test your questionnaire in different settings, using different platforms and web browsers. You must complete and answer all questions and submit at least one full run of your preview survey before the launch survey option will activate. Once you launch, you can no longer make changes to your configuration. Once you open your survey, you'll receive the survey URL to distribute to your users. If you need to know your URL in advance for creating promotional materials, you can open your survey a few days or more before you announce it to your community. In stage two, you can also monitor the number of responses coming in by date, time of day, branch, discipline, and position, and you can view and download comments submitted by users.

In your results report, ARL provides an analysis of how well your respondent sample represents your overall campus population. In order to do this, we ask that you fill out what we call a representativeness questionnaire, where you will provide population data for your user groups and discipline areas. The representativeness questionnaire becomes available in stage two and is based on the customizations you make when configuring your survey in stage one. This is an example of a representativeness chart; a small sketch of the calculation behind it follows below. The blue line shows the population in each discipline area as a percentage of the overall campus population, and the red line shows the number of respondents in each discipline as a percentage of the total respondent sample. You can see that for this library, these lines track fairly well, indicating that the distribution of respondents by discipline is representative of the campus overall.

The representativeness questionnaire also asks for statistics on the library's expenditures and staffing, which are based on questions asked on the annual ARL statistics survey. We collect this information to help libraries identify peer institutions for benchmarking their results. The library statistics we ask for include total library expenditures; personnel for professional staff, in full-time equivalents; personnel for support staff, in full-time equivalents; total library materials expenditures; and total salaries and wages for professional staff. Use the fiscal year that ended the year before you run the survey; if you're running in 2018, that's fiscal year 2016-17.

At the end of your survey run, you will manually close your survey from the survey dashboard. The start and end dates that you entered when you configured your survey are for information only. We recommend keeping your survey open for at least three weeks.
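Returning to the representativeness chart for a moment, here is a minimal sketch of the percentage comparison it plots. The discipline names and counts are invented for illustration; only the calculation reflects what the webinar describes.

```python
# Hypothetical population and respondent counts by discipline.
population = {"Humanities": 4000, "Sciences": 6000, "Engineering": 2000}
respondents = {"Humanities": 220, "Sciences": 300, "Engineering": 80}

def as_percentages(counts):
    """Express each discipline's count as a percentage of its total."""
    total = sum(counts.values())
    return {k: 100 * v / total for k, v in counts.items()}

pop_pct = as_percentages(population)    # the "blue line": campus population
resp_pct = as_percentages(respondents)  # the "red line": respondent sample

for discipline in population:
    print(f"{discipline}: population {pop_pct[discipline]:.1f}% "
          f"vs respondents {resp_pct[discipline]:.1f}%")
```

Where the two percentage series track each other closely, the respondent sample is distributed across disciplines roughly the way the campus is.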
In stage three, you will confirm that you want to close, as this is an irreversible step. As soon as you close your survey, some of your survey data is immediately available on your dashboard. You'll see three CSV files: the raw data, the options key, and the response key. These can be read into Excel or SPSS as data files. You'll also see the comments and your incentive emails list.

Also available in stage four is a post hoc questionnaire, where we ask for information about your survey, including the sample size, the number of emails sent, the number of invalid email addresses, incentives offered, marketing techniques, et cetera. You'll also see an evaluation questionnaire, which is a place where you can give feedback about your LibQUAL+ experience. All survey leads are encouraged to complete this questionnaire, and it is anonymous.

Your results notebook will be available on your survey dashboard and in the LibQUAL+ repository about two weeks after you close your survey. You will receive an email notification as soon as the report is available. The notebook contains sections for overall results, undergraduates, graduate students, faculty, staff, and library staff. Within each of these sections, you'll see a demographic summary, a core questions summary, local questions, general satisfaction questions, information literacy outcomes, and a library use summary.

If you've chosen to have an incentive prize drawing, the list of email addresses will be available in stage four, along with some demographic information. You can use this list in any way you choose to award your incentive prizes. To preserve respondents' anonymity, incentive email addresses are stored in a separate table in the LibQUAL+ database, and they can't be linked back to individual survey responses. You can download this list at any time during the year, but once the platform closes on December 9th, the list will no longer be available. ARL policy is to purge incentive emails from the database six months after the close of each survey session.

We also offer a few additional services: the LibQUAL+ membership subscription, print copies, a library branch analysis, customized discipline analyses, user subgroup analyses, and other customized analyses. Email libqual@arl.org if you're interested in any of these.
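As a sketch of working with those exports, here is one way to load the three CSV files with pandas. The filenames and the described contents are assumptions based on this webinar, not the platform's actual export schema; check the response key file for what each column in your raw data encodes.

```python
import pandas as pd

# Hypothetical filenames for the three stage-four CSV exports.
raw = pd.read_csv("raw_data.csv")               # one row per survey response
options_key = pd.read_csv("options_key.csv")    # maps option codes to labels
response_key = pd.read_csv("response_key.csv")  # describes each raw-data column

print(raw.shape)            # (number of respondents, number of columns)
print(response_key.head())  # inspect what each column encodes before analysis
```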