Good afternoon, and welcome to the Association of Research Libraries' webinar on LibQUAL results and practical applications. I'm Amy Yeager, Public Relations Program Officer at ARL, and I'm joined by Angela Papalardo, ARL Program Coordinator for Events and Finance and the LibQUAL survey liaison, and Michael Maciel, Senior Data Analyst at Texas A&M University Libraries. Before we begin, there are a few housekeeping items to mention. All participants' lines have been muted to cut down on background noise, but there's a chat box in the lower left-hand corner of your screen, and if you have questions, type them in there. At the end of the webinar we'll have a question-and-answer session and answer any questions that have come in, but feel free to type them in as you think of them. This webinar is being recorded, and we will make the recording available to AMICAL; you'll get further instructions from them on how to access it once it's available. So now I'd like to turn the presentation over to Angela Papalardo.

Hi, everyone. I'm Angela Papalardo. Thank you for joining us. Today I'll give a brief overview of the survey components and customization, as well as the steps to run a survey, and then I'll briefly discuss interpreting the survey results before turning it over to Michael Maciel, who will provide examples of practical applications from Texas A&M.

LibQUAL questions are measured in three dimensions. Affect of Service measures the interpersonal aspect of service, such as empathy and responsiveness; an example is "willingness to help users." Library as Place measures how the physical environment of the library is perceived, for example, "community space for group learning and group study." And Information Control measures the content of and access to information resources; it covers types of content, convenience, ease of navigation, et cetera. An example of this is "the electronic information resources I need."

There are 22 core questions, and there are also five optional questions, five information literacy questions, three general satisfaction questions, three library use questions, and up to three demographic questions, as well as a free-text comment box. In 2010 we introduced a shortened version of the survey called LibQUAL Lite. This is a customization you can apply in Stage 1, where you select anywhere from 0 to 100% Lite, and respondents randomly receive either the short or the long survey according to the percentage you select. The median completion time for the long survey is about 10 minutes, and the median time for the Lite survey is a little over five minutes. Each Lite questionnaire includes eight of the core questions, one optional question, one information literacy question, two general satisfaction questions, three library use questions, and all of the demographic questions that the library chooses, as well as the comment box.

The core questions use a three-part Likert scale, where users are asked to evaluate a statement, such as "library space that inspires study and learning," with three ratings: minimum, desired, and perceived. The perceived rating represents the level of service that the respondent believes is currently provided, and the minimum and desired ratings offer context for that perception: minimum represents the lowest level of service the user would find acceptable, and desired represents the level of service the user personally wants, their ideal level of service.

LibQUAL can be run in multiple languages.
When you register, you'll see language choices based on your region, and then in your survey dashboard you'll be able to preview the survey in all available languages.

There are four steps to running the survey. When you log into your institution's account, you'll be taken to the survey dashboard, which displays the information you need for your current survey stage. Pre-launch is your customization stage, Stage 2 is monitoring your survey progress, Stage 3 is closing the survey, and Stage 4 is post-survey tasks and results.

When you click on the link to configure your survey, you'll be taken to a page with a series of tabs across the top for customizing various aspects of the questionnaire. If your library has previously run LibQUAL, the choices you made in your last survey will be carried over to this year, and you can decide to keep those features as they were on your previous survey or change them. The first tab you'll see is the customization tab, where you can upload a logo, set your Lite percentage, enter your dates, and choose your demographic items. If you choose to award incentive prizes, your questionnaire will include a box where respondents can optionally enter their email address.

You can select up to five optional questions in the optional questions tab, either by choosing them from the existing bank of questions or by adding your own. If you choose to submit your own, it must be in the triple Likert format, and LibQUAL staff will moderate the questions before adding them to the database; this usually takes a day or two.

LibQUAL has standard sets of position and discipline options, and you can customize these labels with your local terminology. Your results report will break down your findings by user group, with sections for each of the categories, and we will go over this in more detail later. On the questionnaire, these categories are further broken down into subgroups, which are the response options that your users can select. This is how it will look on your configuration screen: your subgroups must map to a reporting value. And here's a view of how this item looks on the survey questionnaire; only the user subgroup options, not the parent options, can be selected.

On the next tab you'll see branch library options, where you can enter the response options for the question "the library you use most often." This question is optional, so if your institution has only one library, you can leave this item blank. As with the position question, you can use your local terminology to map to the standard list of disciplines. Too many choices can present challenges to users, so we recommend no more than 16 disciplines, and in your results notebook there will be charts showing the number of respondents from each discipline. In this example, as with the position options, you enter custom text in the left column, and each entry must map to a reporting value in the right column.

After entering your customizations, it's time to preview the questionnaire and launch. The preview survey URL does not collect data, but it gives you an opportunity to test your questionnaire in different settings, using different platforms and web browsers. Once you launch, you can't make any changes to your configuration. After you open your survey, you'll receive the survey URL to distribute to your users. If you need to know the URL in advance for creating promotional materials, we recommend opening the survey a few days or more before you publicize it to your community.
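As a rough sketch of the mapping rules just described (each local label maps to exactly one standard reporting value, and no more than 16 disciplines), here's a hypothetical Python check. The standard list and the local college names below are illustrative stand-ins of my own, not the actual LibQUAL reporting values.

```python
# Illustrative stand-ins; the real LibQUAL reporting values and your
# local college names will differ.
STANDARD_DISCIPLINES = {
    "Business",
    "Engineering / Computer Science",
    "Health Sciences",
    "Humanities",
    "Science / Math",
    "Social Sciences / Psychology",
    "Other",
}

# Left column (your local terminology) -> right column (reporting value).
local_to_standard = {
    "Mays Business School": "Business",
    "College of Engineering": "Engineering / Computer Science",
    "College of Nursing": "Health Sciences",
    "College of Liberal Arts": "Humanities",
}

def validate_mapping(mapping: dict) -> None:
    """Enforce the two rules from the webinar: at most 16 discipline
    choices, and every local label mapped to a standard reporting value."""
    if len(mapping) > 16:
        raise ValueError(f"{len(mapping)} disciplines; 16 or fewer are recommended")
    bad = {label: value for label, value in mapping.items()
           if value not in STANDARD_DISCIPLINES}
    if bad:
        raise ValueError(f"labels mapped to non-standard values: {bad}")

validate_mapping(local_to_standard)  # passes silently for this example
```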
In Stage 2, you can monitor the number of responses coming in by date, time of day, branch, discipline, and position, and you can also view and download the comments. In your results report, ARL provides an analysis of how well your respondent sample represents your overall campus population. To do this, we ask you to fill out what we call a representativeness questionnaire, where you provide population data for your user groups and discipline areas. The representativeness questionnaire becomes available in Stage 2, and it's based on the customizations that you made in Stage 1. In this example you'll see a representativeness chart where the blue line shows the population in each discipline area as a percentage of the overall campus population, and the red line shows the number of respondents. In this example the lines track fairly well, indicating that the distribution of respondents by discipline is representative of the campus overall.

At the end of your survey run, you can manually close your survey from the survey dashboard; we recommend keeping your survey open for at least three weeks. The system will ask you to confirm that you want to close. This is an irreversible step, so make sure you're ready to close. As soon as you close your survey, some of your survey data is immediately available on your dashboard. There will be three files: the raw data in CSV format, the options key, and the raw data formatted for either Excel or SPSS. You'll also see the comments and the incentive emails list. There are also two optional questionnaires: the post hoc questionnaire, where you can provide information about your survey such as sample size and number of emails sent, and the evaluation questionnaire, where you can give feedback about your LibQUAL experience.

Your results notebook will be available on the survey dashboard and in the LibQUAL data repository. You'll receive an email notification as soon as the report is uploaded. The notebook contains sections for overall, undergraduates, graduate students, faculty, staff, and library staff, and within each of these sections you'll see a demographic summary, a core question summary, local questions, general satisfaction questions, information literacy outcomes, and a library use summary.

LibQUAL scores have three interpretation frameworks. The first weighs perceived scores against the minimally acceptable and the desired service levels, which is what we call the zone of tolerance. You can benchmark against peer institutions via the data repository and analytics, as well as through the norms. And you can benchmark longitudinally, which is the ability to see how your library is changing over time.

For each question, the averages of the respondents' minimum and desired scores form the zone of tolerance: it's bounded on the bottom by the minimum mean and on the top by the desired mean. The adequacy mean, represented here by the orange bar, measures how well the library is meeting users' minimum expectations. The adequacy mean is calculated by subtracting the minimum mean from the perceived mean, so a positive adequacy mean shows the degree to which the library is exceeding those minimum expectations, and a negative adequacy mean indicates that the library is falling short of them. If you have a negative mean, the orange bar displays below the gray box, and those scores are noted in the results notebook with red text. The superiority mean is calculated by subtracting the desired mean from the perceived mean.
So this is the gray area above the orange bar. The superiority mean is usually a negative number, and it indicates the library's room for improvement. If the library exceeds desired expectations, the perceived score falls above the zone of tolerance and the superiority mean is positive; if that happens, this chart shows the orange bar extending above the top of the gray box.

This is another view of the same concept. In this example, the mean of the respondents' minimum scores is 3 and the mean of their desired scores is 8, and the zone of tolerance is that 5-point range between the two scores. The perceived mean of 6 falls within the zone of tolerance, indicating that the library is exceeding its users' minimum expectations. The adequacy mean is 3, and the superiority mean, which is the measure of the room for improvement, is negative 2.

Radar charts are another example you'll see in your results notebook. These give you a snapshot view of the dimensions; each spoke in the wheel represents one of the core questions. Most charts will display blue and yellow, which indicates that the perceived score falls within the zone of tolerance. Green indicates that perceived is above desired, and red indicates that it's below minimum. This is a close-up view of the radar charts and zones: yellow shows the superiority gap, the distance from perceived to desired; blue shows the adequacy gap, the distance from minimum to perceived; and green and red show scores above the desired and below the minimum, respectively.

Once you close your survey, you'll have immediate access to the raw data and the comments. The results report will be delivered within two weeks, and you can find your survey results in the data repository, along with the comments and raw data files as well as the group notebooks. You can compare your data in the analytics portion of the website, where you'll be able to compare the aggregate data against that of other institutions in your same survey years. You can also generate charts and view and download the data. You can conduct peer benchmarking via the analytics pages' data explorer tab: you select your institution, choose your peer institutions, and generate your charts. Here are two examples of the charts and tables you can generate in the analytics section; you'll be able to view the charts and the data tables as well as download the data. Qualitative analysis of the comments is another way to use your results. You have access to the comments immediately upon closing the survey, and you can download them as either an Excel or a text file.

A simple way to begin tackling your results is to identify what needs attention by ranking services with the highest desired scores and/or by looking at the adequacy and superiority gap scores. The DN score model combines these three scores into one and allows stakeholders to easily interpret the results; Michael will talk about this in more detail in the next section. The right column of this slide highlights some additional ways to look at your results: you can look at the top five most desired services, look at individual user groups, look through a lens of awareness, or explore one or more particular questions by discipline and user group.

Communicating your results is a critical component of putting the results into action. They need to be communicated clearly to your stakeholders, and be sure to consider the needs of different user groups; faculty needs, for example, may be considerably different from undergraduate needs.
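To make that arithmetic concrete, here is a minimal Python sketch of the gap calculations just described, using toy scores chosen so the means reproduce this example (minimum 3, desired 8, perceived 6). The variable names are mine, not LibQUAL's.

```python
from statistics import mean

# Toy ratings on LibQUAL's 9-point scale, chosen so the means
# reproduce the worked example above.
minimum_scores = [2, 3, 4]     # mean 3: bottom of the zone of tolerance
desired_scores = [7, 8, 9]     # mean 8: top of the zone of tolerance
perceived_scores = [5, 6, 7]   # mean 6: falls inside the zone

minimum_mean = mean(minimum_scores)
desired_mean = mean(desired_scores)
perceived_mean = mean(perceived_scores)

# Adequacy mean: perceived minus minimum. Positive means the library
# exceeds minimum expectations; negative scores show in red in the notebook.
adequacy_mean = perceived_mean - minimum_mean      # 6 - 3 = 3

# Superiority mean: perceived minus desired. Usually negative; its
# magnitude is the room for improvement.
superiority_mean = perceived_mean - desired_mean   # 6 - 8 = -2

print(f"adequacy {adequacy_mean:+}, superiority {superiority_mean:+}")
# -> adequacy +3, superiority -2
```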
Determining whether library services are meeting user needs can be tricky. In some cases, it may be necessary to implement new marketing strategies in addition to changing or adding services. I've included a couple of links here for further reading; the article in the second bullet point contains a detailed description of the DN score model that I mentioned. Now I'll turn it over to Michael Maciel, senior data analyst at Texas A&M University and LibQUAL super user, who will give some practical advice on running the survey and interpreting the results.

Thank you, Angela. As Angela said, my name is Michael Maciel, and I work at the Texas A&M University Libraries. Today I'd like to present some recommendations on setting up, running, and analyzing the LibQUAL survey and data. At the end of the presentation, I'll talk about some of the projects that we have completed as a result of reviewing LibQUAL data.

My first recommendation concerns the population sample. One of the questions you may be asking yourself is: do we survey the entire campus, or do we survey a random sample of it? My recommendation is that if you're not running the survey annually, you invite everyone to participate. And don't forget that you have other populations besides your faculty and students: you have researchers and clinical staff, university administration, and the library itself.

I would also recommend you consider using local questions. Sometimes you'll have a sense of an issue that you particularly want to address at your campus or in one of your libraries, and this is a great way to do it.

One of the options you'll be given when you set up the survey is what percentage of respondents get the Lite version and what percentage get the full version. The full version takes considerably longer to complete, whereas the Lite version doesn't take as long, so you'll probably get more respondents that way. But you won't get as many questions answered, since the Lite version doesn't carry the full 22 core questions that the full version does. So you really have to discuss whether you want as many responses as possible, or the full 22-core-question perspective from your campus and your survey participants.

Before you start your survey, here are some recommendations. First, meet with your subject specialists, the people who go out in the field and meet with the students and the faculty; let them know the survey is coming and give them some talking points about it. Word of mouth is one way to generate participation. Meet with your public service personnel, the people at your circulation desk, interlibrary loan desk, and the like; again, word of mouth and their mentioning it might improve your participation rates. Be sure that the entire library is conducting the survey, because you never know what point of contact a staff person will have with a student or a faculty member, so don't look at just your public service personnel but at your technical, behind-the-scenes personnel as well. Also do a follow-up: send an email to your library personnel so that they can read it and again have some bullet points to refer to when talking about the survey.
And then share the survey schedule and marketing materials with everyone, so that when they start seeing the marketing materials, signage, table tents, and items like that, they know what's on them and aren't taken by surprise.

My scheduling recommendation is to run the survey in the second semester, the spring rather than the fall, and generally mid-semester, typically either just before or just after midterms. As for the length of the survey, Angela mentioned keeping it open more than three weeks; we've actually had better response rates by conducting it over a 45-day period and just sending out an occasional reminder. You don't want to send too many reminders, but you do want to send some, again to increase that participation rate. Check also with your campus to make sure your institution's LibQUAL survey isn't conflicting with another survey. We conduct student assessment surveys like NSSE and SERU, and one thing we make sure of is that the invitations to participate in those surveys do not conflict with the LibQUAL invitation.

When appropriate, send different emails with different content to select user groups. We have one email text that we send to undergraduates, another to our master's and PhD students, and yet another to our professional degree students, like our medical and veterinary students. The last time we ran the survey, we sent separate content to researchers and administration, and in some cases we highlighted certain colleges, for example the College of Nursing, which has had a history of low participation rates, and addressed specific email text to those user groups.

Be sure to keep the emails brief. If you print out your email, it should not extend beyond one page, and preferably three quarters of a page; I'd recommend you use bullets as opposed to full sentences. When you send out that email invitation, give the people you're inviting a reason to participate: list the service improvements that are important to that user group, and emphasize that user input drives improvements. The more they participate in the survey, the more we're able to deliver exactly what they're looking for.

I'd also recommend that the invitation come from either the dean or the director of your library, and in saying that, I also recommend you set up a separate email address for the dean. Otherwise, when people start responding to the dean's email, you might blow up his or her inbox, and more to the point, as a survey administrator you want to be able to look at those comments and address them, and I don't know many deans who are going to give you access to their individual email account. So set up an alias that you have access to.

I would recommend you begin the survey on a Tuesday or Wednesday and deliver it mid-morning. For the email schedule: send out an initial invitation, then a first reminder, again on a Tuesday or Wednesday, and then a final reminder on the Wednesday of the week the survey is scheduled to end. Announce that the survey will end on Friday, but keep it open through the weekend just to catch any stragglers.

What you see on the right side of this slide is the marketing image that we used throughout the survey period. We created table tents that we put on study desks and at public service desks.
In addition to the survey URL that ARL provides, we created a user-friendly address, ours is library.tamu.edu/survey, so that people don't have to keep looking up and down to type in a string of numbers. Ask your subject specialists to send out announcements about the survey. Use any listservs you have that are campus- or university-specific. Use library and institutional electronic signage, use social media if you have it available, and again, use table tents on library study tables and at library service points.

While the survey is being conducted, you'll have the ability to look at the comments as they come in, and the comments are identified by user group (first-year student, second-year student, assistant professor, associate dean, for example) and also by college. If the responses are disproportionate, by which I mean you're not getting the participation numbers you'd like to see, consider changing your email reminder text. And make sure that your response rates have peaked and are declining before you send out email reminders; don't send out reminders while your participation rates are still climbing.

One thing we also do is monitor the comments, and whenever a user mentions an individual librarian or an individual library department, I send a congratulations email, a kudos email, to that individual or department. I also go up the chain of command, copying the supervisor and the dean as well, and we coordinate with the dean so that when I send those out, the dean follows up with a further congratulations for getting that kind of notice. You'd be surprised what this does: you generate your own people out there marketing participation in the survey. A lot of these comments that name specific librarians are actually used in faculty evaluations, so around the time we conduct the survey, you'll see those faculty members going out and promoting it, just so they have something to put in their evaluations. There's a practical application to this.

After the survey is done, some of the areas you'll have data to analyze will be the LibQUAL analytics for your library, the ability to compare yourself to other libraries, the raw data itself, and the comment text. Some of the types of analysis, and I'll go through these in a little more detail in the upcoming slides, are breakdowns by category, career, trends, comments, and so forth.

Angela showed you what the graphic representation of the data looks like. I've come up with my own way, in Excel, of visualizing the data; it's basically the same as what Angela showed you, except I use a dot rather than a bar within the zone of tolerance. I've also given you some definitions here to use: priorities are your top desired scores; your top perceived scores are your successes; satisfaction is a ratio I call AGR, the adequacy gap ratio; and your concerns are your bottom AGR scores. On the left-hand side of this slide, I've given you some of the formulas and criteria to use to determine when something is a success and when something is an area of concern. This goes back to what Angela talked about; what I did here is a mathematical version of explaining what this data can be used for and how it can be interpreted. Angela mentioned top-five lists, and I definitely use them; for the purpose of this presentation,
I used a top-three list of priorities, successes, and concerns. The first column shows you what is important to your user group, and then you can carry that over and see which of those priorities your library is currently succeeding at and which are areas of concern that your library may want to work on.

On the core question organization: there are 22 core questions, and what I've done is break them down into six categories. Under Affect of Service, that's customer treatment and job expertise; under Information Control, information resources and information accessibility; and under Library as Place, the library environment (covering both the overall environment and the individual- and group-study-related questions) and a question related to the equipment that your library provides your users.

Here are some examples of the analysis that Angela and I have talked about. Here's one that compares, for the question "a library website enabling me to locate information on my own," undergraduates to graduates to faculty. You can see that with undergraduates, the dot is within the zone of tolerance, so they're pretty much satisfied with their ability to locate information on their own; because that dot is over halfway up the zone-of-tolerance bar, you can call this a success. For the graduate students, the dot is within the zone of tolerance but in the bottom half of the zone, so even though your graduate students are satisfied, there is some room for improvement. And for the faculty, the dot is below the zone of tolerance, which demonstrates that faculty are not comfortable with their ability to locate information on their own.

Here's another analysis where I've broken the results down by user group and then by college. I believe this is for undergraduates. The first bar shows the responses for all undergraduates across all colleges, then I've broken them down by college, and the final bar is where I compared our results to our fellow ARL members, so that you can see not only how the scores by college vary from the institution's overall score but also how they compare to other institutions.
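Michael's workbook lives in Excel and isn't shown here, so purely as an illustration, here's a small Python sketch that ranks questions by the definitions he gave: priorities by top desired scores, successes by top perceived scores, concerns by bottom AGR. Note that the AGR formula below is my assumption; the webinar names the adequacy gap ratio but never spells it out, so this treats it as the adequacy gap divided by the width of the zone of tolerance. The question IDs, groups, and scores are made up.

```python
from collections import defaultdict
from statistics import mean

# Made-up records: (question_id, user_group, minimum, desired, perceived).
responses = [
    ("IC-2", "undergraduate", 6, 8, 7),
    ("IC-2", "faculty",       7, 9, 6),
    ("AS-1", "undergraduate", 5, 8, 8),
    ("LP-3", "faculty",       4, 7, 5),
]

# Collect the three ratings per (question, user group).
ratings = defaultdict(lambda: {"min": [], "des": [], "per": []})
for q, group, lo, want, got in responses:
    ratings[(q, group)]["min"].append(lo)
    ratings[(q, group)]["des"].append(want)
    ratings[(q, group)]["per"].append(got)

scores = {}
for key, r in ratings.items():
    m, d, p = mean(r["min"]), mean(r["des"]), mean(r["per"])
    # Assumed AGR: adequacy gap (perceived - minimum) as a fraction of
    # the zone of tolerance (desired - minimum). Not necessarily the
    # formula from Michael's slides.
    agr = (p - m) / (d - m) if d != m else None
    scores[key] = {"desired": d, "perceived": p, "agr": agr}

# Priorities: highest desired. Successes: highest perceived. Concerns: lowest AGR.
priorities = sorted(scores, key=lambda k: scores[k]["desired"], reverse=True)[:3]
successes  = sorted(scores, key=lambda k: scores[k]["perceived"], reverse=True)[:3]
concerns   = sorted((k for k in scores if scores[k]["agr"] is not None),
                    key=lambda k: scores[k]["agr"])[:3]

print("priorities:", priorities)
print("successes: ", successes)
print("concerns:  ", concerns)
```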
Another thing that will be important to you all, especially if you're doing this in a consortial environment, is being able to benchmark, which Angela mentioned. Here you see a longitudinal trend for the question "print and/or electronic journal collections I require for my work," for faculty. You can see that over the years our score has gone from 2003, when we weren't meeting faculty needs, to 2015, when we were meeting those needs, though the perceived score is in the lower half of that bar, so there are still areas for improvement. I've also provided a trend chart here for the ARL perceived scores, and it shows that compared to some of our fellow ARL members, our scores are slightly higher, so it's time to pat ourselves on the back.

I did an analysis by AGR, which again is your satisfaction value. What I'm looking for is changes from year to year, where the satisfaction either increased or decreased from the previous year, or increased or decreased over two years. What I'm particularly looking for is where it's decreased over a two-year period: you're going to see some fluctuation from year to year, but if it decreases two years running, that means you're headed in the wrong direction, and you should definitely evaluate what you're doing to address, in this case, dependability in handling user service problems.

I've also done a comment analysis. I don't use ATLAS.ti; I actually have a comments codebook that I use in Excel to track how comments are related and how they're categorized. You can see here which category is most important for undergraduates, whereas with graduates, library as place and information accessibility are of equal importance, and with faculty, it's information accessibility that's most important. Here's that comments codebook I was talking about, and here's how I break down the comments: not only by the broader categories of affect of service, library as place, information resources, and information accessibility, but by subcategories within them. So if I want to drill down and find out what the comments are for a particular area, for example marketing, or reference, or general treatment, I can pull out those comments using a sort function in Excel. By the way, just regarding comments, the general rule is that you should get comments from about 50% of your participants, so if a thousand people participate in the survey, you should get about 500 comments.

Here's another example: library usage. This is again for undergraduates, even though I didn't mark it that way. How many people visit the library premises? 84% visit at least monthly. How many use the web resources? 72%. So you can see that whereas both the library web page and the premises are important, more undergraduates are going to the building itself than to the virtual library.

This is the one thing I want to stress when we talk about the impact LibQUAL can have: evidence supports funding. I can't tell you how many projects we've had funded by the university just because we could refer back to the LibQUAL survey findings and say, look, this is what our users say is important, and this is how we want to address that particular issue.

Just quickly, to go over some examples: for customer treatment, we've provided a standardized method for how we greet and talk to our customers at our public service points. For job expertise, I've listed some examples here, all dealing in one area or another with professional development; in many cases we've moved away from hiring the librarian straight out of MLS school and have hired
people who already have experience, and we've also created a new clinical and instruction faculty track as an alternative to the tenure track, if that's applicable to your organization. For information resources, again, I'll let you read through this. Accessibility was a big issue, and we've been doing web usability studies to track how people, when they open the home page of the library website, go about looking for information on their own. For the library environment, there are two slides: we recently completed, over the last 10 years, about a $15 million renovation, and here are just some examples of things we've done for the library environment itself, individual and group study, and equipment. And with that, I'm done, so thanks very much for your time.

Thank you, Michael. It was incredibly helpful to hear all of your practical experience gained over many years of familiarity with this survey. We'd like to open it up for questions from the audience now. There's a chat box in the lower left corner of your screen; if you type your question in there, we will pose it to Michael or Angela. And there is one question waiting now for Michael: can you talk about how often Texas A&M runs the LibQUAL survey, and whether you use the same local questions every time or change some?

We're actually in the middle of a transition. We used to conduct the survey annually, and as a result we would do random samples of students and faculty: one year we would sample all the science and technology faculty, the next year the liberal arts and business faculty. But we're moving toward a multi-year cycle, conducting the survey this coming March to begin that new cycle. Regarding the local questions: no, we change them every time we run the survey. What the local questions do is try to highlight a concern at that time. In the last LibQUAL survey we did, we emphasized information literacy classes; this survey, we're actually going to spend more time on web usability issues. It's a good way to keep the survey current with what's going on in your library in a particular year.

Angela, do other libraries or some consortia run surveys every two or three years? Yes. Most libraries run it every few years. We do offer discounts for running it every year or every two years, and some institutions run every three years or every four years, so they would not be hitting the same populations if they do it more than three or four years apart.

Do you have a question here, Amy? Sorry to jump in, but yes, we do use the survey for accreditation reports and visits. In fact, at the 2018 Library Assessment Conference I did a whole presentation on how to use LibQUAL for accreditation reporting and visits, and I think that presentation is on the LibQUAL website in the publications section, if people are interested in exploring that further.

Liza has a question: is it possible to manipulate the wording of the questions, taking into consideration that we work in an ESL environment? The wording of the core questions themselves cannot be changed. On the translation side, however, we could potentially work with you if there's a mistranslation issue, but generally the core questions can't be changed. The optional questions are what you can submit to be worded however you want, so long as they're in that triple Likert format: the beginning text is "When it comes to...", then the question text, then "my minimum, desired, and perceived rating is...", so it sort of has to sit
within that format to make sense within the survey context.

EBS asks if we could share a link to Michael's presentation on using LibQUAL for accreditation. When we share the recording with Alex at AMICAL to pass on to you all, we will also send along that resource and any other resources we think might be useful.

Michael, Angela demonstrated a little bit of the LibQUAL analytics module, which is where libraries can interact with their data to compare with peers, compare across user groups and discipline areas, create custom radar charts, and download subsets of data. Could you talk a little bit about how you use analytics with your LibQUAL data?

Well, the university has several aspirational peers. We use ARL institutions as one peer group, and we also use the Texas universities as another peer group. By having that analytics option, the subscription to analytics, we can pretty much pick what we want to compare ourselves to, either institution to institution or, and this is very germane to the fact that I'm talking to a consortial group right now, you can gather all your data by institution and for the consortium as a whole, to compare how your library is doing against the consortium or against individual institutions. One thing I'd say, just as a reminder: if you do use analytics and you're going to compare yourself to another specific institution, there are certain guidelines on how you report that other institution's data. Amy or Angela, I'll let you explain that, but I do want to offer that cautionary note about not identifying that institution by name.

That's true, yeah. We have guidelines on how to use the data, which emphasize that these scores are just one measure of how people look at library services, not an absolute measure. A difference in scores doesn't necessarily mean that one library's services are better than another's; it's a measure of satisfaction, so you need to take a lot of external factors into account when comparing scores. And we also ask that people anonymize libraries in comparisons.

Just as a further note on that, I believe Greece has an IRB human subjects standard that has to be met when sending out surveys, and one of the issues is that we do want to anonymize not only our data but that of other institutions, just to meet those institutional review board guidelines.

Evee had some questions about the analytics module, wondering if it's a feature of LibQUAL or an additional service; Angela can explain the options there. Yes, analytics is available to anyone who runs the LibQUAL survey; it's not an additional service. However, there is an additional service for having more access to all of the institutions that have run the survey. When you run the survey, you normally have access to the aggregate results for institutions that ran in your same survey year. So if you run the survey a lot, as Texas A&M does, you automatically have access to all the years and all the institutions that ran in those years; if you are only running in 2019, for example, you will only have access to the other institutions that also ran in 2019. For an additional fee, which is $1,000 per year, we offer a subscription to LibQUAL analytics, which gives you access to all of the institutions in all years. We recommend doing that if you are planning to do a lot of benchmarking work, especially after you run a survey, perhaps in an off year, and it's something you can
subscribe to at any time, so it doesn't have to be in the same year that you're running a survey; you do, however, have to have run a survey at least once before you can subscribe.

I do want to second that. Texas A&M no longer does annual surveys, we do them on something like a three-year cycle, yet we pay for that annual subscription even in the years that we're not conducting the survey. One reason is benchmarking: take a situation where another university ran a survey in 2017 and we ran ours in 2016. By having that analytics subscription, I can compare those two institutions, whereas if I didn't have that option, I wouldn't be able to pull that data for the other library. Exactly.

Thank you, Michael. To shift gears a little bit: does Texas A&M use incentive contests to promote participation in the survey? Yes, we do, and we actually have to be very careful about that because of new tax laws. What we've been offering over the past few surveys is five Amazon Fire tablets, and we make sure the tablets are under $100, again for tax purposes. There's also a catch: the federal government here requires that even if an incentive is under $100, the recipient has to pay tax on it, so we try to make sure people know that you're getting this $100 Fire tablet but you're going to wind up having to pay $2 in tax on it, or whatever the tax rate is. But we do offer incentives. I did one year where I did not offer an incentive, and the one question I kept getting in the comments from people who had taken the survey before was, "What's your incentive this year?" So if you run this survey consistently, say on a three-year cycle or less, and you do offer an incentive, it's something you almost lock yourself into for future surveys. Again, look at your tax codes regarding incentives before offering them, but I would recommend that you use them.

Oh, one other thing with regard to that, and Amy or Angela, correct me if I'm wrong, but I believe there is a resource link on the LibQUAL site that gives a list of the incentives that have been offered by various libraries throughout the years. Is that correct? I'm not sure exactly where that is; I will look for it. There's been a large range of options that I've seen just in the past couple of years that I've been working with the LibQUAL survey, anything from a piece of candy or a few pieces of candy for completing the survey, to $5 gift cards, to larger items like iPads. I'll look for that, and we'll send it around with the slides and the recording.

If I recall correctly, along those same lines, Angela, LibQUAL also has a repository of examples of marketing materials on the website? Oh yes, and definitely spend some time looking at that web page and those links. I'm really proud of the marketing design that we came up with, but there are some genius ideas out there, some very cute and effective ones, that can really spur your creative thought process when coming up with your marketing campaign. While we do provide some examples of what other libraries have done, LibQUAL doesn't provide marketing materials for libraries to customize themselves, although we do have a bank of images where you can download the survey logo. That said, if I recall correctly, the website does identify the institutions that provided those marketing ideas, and it's been my experience in previous years that they're more than willing to
share their graphics with you.

Next question: please explain how you differentiate the Lite and the full version, and what percentages do you recommend, and why? Are you throwing that question to me? I guess so, yeah; Michael, could you talk about what you've done at Texas A&M? Oh yes, and she specified Michael. Gee, thanks. I'll tell you, that's a point of contention this year. In previous years we did 50 and 50: 50% full version and 50% Lite. Our participation rate generally is anywhere from 10 to 25% of the total population; whereas with most surveys you're a success if you're getting over a 50% response rate, you don't look at that with LibQUAL, you look at your representativeness chart to determine survey validity. But this year, what I'm trying to promote is 75% full and 25% Lite, just because I prefer to get that entire 22-core-question perspective as opposed to only the eight-question perspective that the Lite version gives. That said, there are several people here who want to see a higher participation rate versus the full 22, so like I said, it's a bone of contention right now. I know that's not an answer, but at least you know where I'm leaning and why.

It's hard to say; it really depends on what the priorities are for the institution. Like you say, Michael, you get a higher response rate using LibQUAL Lite, at 100% or any percentage, but then you don't have as much data for each question as you would if you had run the full version all along. Most institutions, I would say, do a mix, though I think lately there have been a lot running 100% Lite. It really just depends, but I think a mix is a great way to start out if you haven't run a survey, to see what kind of participation rates you get.

EDS asks whether there is a support community to help with questions, or a listserv; would you like to talk about that? Yes, we have a LibQUAL listserv, and we can definitely get you added to it. I don't remember the link off the top of my head; I think we have one on the LibQUAL website, which I'll have to send around, but we can also manually add anyone who likes. If you send an email to libqual@arl.org, that's me, and I can help you out with that. I'll also send that link around when we send out the recording and the slides.

In addition to that, for anyone else out there, my email address is on my slides, and I really am a LibQUAL geek, so feel free to email me if you have any questions as well. And here's the slide with our emails, so please feel free to contact any one of us on this slide. I have another email address, which is just... you don't have to put the "library" in there. Thanks, Michael.

Well, we've just about come to the end of the hour, and it looks like there are no further questions coming in, so thank you all very much for joining us. Thank you, Michael and Angela, for all the good information, and we will be sending the recording to the consortium within the next week. Thank you.