Good afternoon and thank you for joining the webinar. This webinar is being recorded on Tuesday, September 22, 2015. All lines have been muted to cut down on background noise. You may ask a question at any time using the chat link on the left side of your screen. I would now like to turn the floor over to Martha Kyrillidou of the Association of Research Libraries. Martha, please go ahead.

Thank you, Amy. Welcome, everybody. We're really delighted to have all of you here, interested to learn about the latest pilot experiments we did with the well-known LibQUAL+ survey. As Amy said, everyone is muted to cut down on background noise, so please type your questions in the chat box in the left-hand corner. A recording of the webcast will be posted on ARL's YouTube channel after the event.

Beyond myself, I'd like to welcome to this webcast our two presenters, Rachel Lewellen from the University of Massachusetts Amherst and Sarah Murphy from Ohio State University. They will be the main presenters for today's webcast. We also want to remind people that we have three more webcasts that we captured with colleagues from other institutions implementing and speaking on different aspects of improving library services. We've had colleagues from McGill, from Texas A&M, and, from the UK, from Cranfield University, and those webcasts are available on the ARL YouTube channel. We promote this series of webcasts as one of the offerings that support one of our new strategic directions, known as Libraries That Learn. This webcast series is called the Libraries That Learn webcast series because it features ways for libraries to learn from and use the evidence they have been collecting.

I do want to say a few things about what inspired the pilot we are going to talk about. From time to time we have heard our LibQUAL+ participating libraries bemoan the fact that they have to send reminders to everybody without knowing who has responded and who has not. Historically, the LibQUAL+ survey protocol was developed and promoted as anonymous, and as a result people were burdened with unwanted reminders. There was also no way to really know whether someone had filled in the survey more than once. These were two motivating factors that led us to consider developing the confidential survey approach to doing LibQUAL+, and we are really grateful to both Rachel Lewellen and Sarah Murphy for stepping up to the plate and being willing to test it.

So what have we done with the confidential protocol that makes it interesting and useful to libraries? The key feature is that you are able to send reminders only to those people who either have not viewed the survey or have not submitted it. As a result, you minimize the burden of subsequent reminders by targeting only those who have not responded. You are also able to calculate response rates more accurately, because there are controls behind the scenes that track who has responded, and a person can only respond once: they cannot go to a generic URL and respond multiple times, because every URL is unique and tied to the email address of a respondent.

This also, I think, brings forward a number of issues that we will have to discuss as a community and as a profession. With the anonymous protocol, the ability to link perceptions and behaviors with any other data is simply not there.
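As a rough illustration of how a unique, single-use survey URL per email address might work, here is a minimal Python sketch. ARL's actual implementation is not described in the webcast, so the secret key, base URL, file names, and column headers below are all hypothetical stand-ins:

```python
import csv
import hmac
import hashlib

# Hypothetical server-side secret and base URL; ARL's real token scheme
# is not described in the webcast. This is just one plausible design.
SECRET_KEY = b"replace-with-a-real-secret"
BASE_URL = "https://survey.example.org/libqual"

def survey_url(email: str) -> str:
    """Derive a stable, unguessable token from the respondent's email."""
    token = hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()[:20]
    return f"{BASE_URL}?t={token}"

# Build the email -> unique URL list that would be returned to the library.
# Assumes a recipients.csv with an "Email" column.
with open("recipients.csv", newline="") as src, open("urls.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["Email", "SurveyURL"])
    for row in csv.DictReader(src):
        writer.writerow([row["Email"], survey_url(row["Email"])])
```

Because each token maps to exactly one email address, the server can record when a token has viewed or submitted the survey and refuse a second submission, which is what enables both the targeted reminders and the more accurate response rates Martha describes.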
With the confidential protocol, though, you can link the perceptions and the responses to the behavioral questions the survey includes to other data, and eventually one could even go as far as doing real outcomes assessment, linking these perceptions and expectations of library service quality to other outcomes the users are achieving: graduation, completion time, graduation rates, future plans. But all of that, I think, will have to be discussed, vetted, and understood much more thoroughly as we move forward.

The last bullet point I have there is about the importance of ethical considerations under both the anonymous and the confidential survey protocols. Even with the anonymous protocol, people often come back to us and say, well, the computer's IP address could be unique; that way you could tie a unique computer to a unique user. Theoretically yes, but we do not release IP addresses to the participating institutions. Another ethical consideration concerns the incentive prize winner emails. Libraries come and ask whether you can really tell from that information who has responded. We have actually set up the anonymous survey protocol in such a way that the survey responses are disassociated from the table where we capture the emails for the incentive prize winners. So we have taken steps in building the anonymous protocol that address the ethical considerations of maintaining anonymity.

On the confidential side there are similar ethical considerations to take into account. A key question is whether this is programmatic assessment or more of a research project. The regulations differ, and each institution varies a little in how it approaches programmatic assessment; you will have to talk with the institutional researchers at your institution to get a better sense of how it approaches these ethical guidelines. And of course, with the confidential protocol, now that we know who is responding, we also capture their emails in the incentive prize winner database. Sometimes these two lists of emails do not match, because in the incentive winner database, for example, we eliminate entries that include the same email more than once; there is a cleaning process and a vetting process, so the two lists can differ.

So as you get more sophisticated in implementing LibQUAL+ or other surveys, and you are considering whether to do it in an anonymous or a confidential fashion, know that all these ethical considerations have to be thought through and taken into account. Ultimately, we have a responsibility to do the best we can to respect and protect our subjects' opinions and their identities. I would actually recommend that, as professionals engaged in library assessment, you consider taking on a regular basis the certification course known as CITI training, which is available at most academic institutions. There is also a time limit on that certification; it is good for three years.
So every three years you go through it again, because as our environment changes, some of our understanding of how we need to approach these research protocols and ethics also shifts over time. That was a little bit of an introduction to some aspects of doing anonymous and confidential surveys. Without further ado, I'd like to welcome Rachel Lewellen, who is going to tell us in more detail how the LibQUAL+ confidential pilot was implemented at the University of Massachusetts Amherst Libraries. Rachel.

Thanks, Martha. Hi, everybody. I'm happy to share some of our experiences with the confidential pilot. I can tell you we first did LibQUAL+ in 2004, and we've done it in 2007, 2011, and this past spring in 2015. We were especially interested in being able to customize and personalize messages to our users and to target the follow-up reminder messages only to people who had not yet responded. That was what we were most interested in getting started.

To give you a little background on how we set things up, we used a combination of a population and a sample. We sampled our undergraduates, pulling just over 3,000 names, and we surveyed all library staff, all faculty, and all graduate students, so we did a population survey of those groups. A few notes around that: we were pulling names from our integrated library system, which gets that data from our campus system. For faculty, we were able to limit the pull to active faculty, so we felt we had a pretty good population of the faculty who were actively on campus, and you can see that was where we had a very high response rate. We have a lot of people in the library catalog under the user group of graduate students, because they tend to be graduate students for a long time and sometimes their status changes, so that number of 6,682 is a little higher than what the campus numbers report for actively enrolled students. Our response rate in that group was lower than in previous years, but in previous years we had pulled that sample differently, so we weren't too worried about it; it's simply where we were in terms of the number of responses. So that gives you a little background about how we got our sample and how we got started.

We wanted to send the mail from our Director of Libraries' mailbox, so it would come directly from Jay Schafer, Director of Libraries, and not with a heading that said all-campus mailing or bulk mail, which is a good giveaway that you're getting a message on one of the lists. We used Mozilla Thunderbird and its Mail Merge add-on, a tool you install. We made sure to schedule these mailings with the campus postmasters, so it didn't appear there was some kind of bad-actor activity, spamming, or anything else inappropriate happening on campus; they would know we were going to be producing this volume of mail. The unique survey URLs were provided to us by ARL after we provided the list of emails to them, and ultimately we sent an invitation and two reminders.

This is just a small sample of what the spreadsheet looks like. We blocked out the names, but we pulled the information from our ILS and then separated the first name and the last name. We had an email address, and we had a user group designation for undergraduate, graduate, library staff, and faculty.
The URL was given to us by ARL, as well as the information about the submit date, the start date, and whether the response was valid and complete. As we sorted through the different iterations, we could make a choice about how to treat somebody's submission. If you look down, maybe the third row, it might be a little hard to read online: there was a submit date of March 2, 2015, but it was not valid. So we had a choice about whether to send reminders to people based on the status of being valid or invalid. If they submitted any survey at all, we didn't send them a reminder. So we had that extra level of information to work with.

We also added some additional fields. To the far right is a coupon code. We handled our incentives a little differently, in that in the invitation as well as in each reminder we offered everybody the opportunity to have a free beverage at our cafe by printing the message and redeeming it, whether or not they took the survey. But we were tracking redemption by user group and by whether it came from the invitation or a reminder, so we had a different coupon code for each user group and each mailing. Similarly, if you keep going, there are some other columns that customize the message further; there might be a custom line that says something about undergraduates or about faculty. This gives you a sense of the standard spreadsheet: it had the URLs, it was matched up to the email addresses, and it had our custom information as well.

This is a little bit about how we got it started. We used double curly brackets, which is the placeholder convention of the Thunderbird Mail Merge add-on, and they reference the fields in the spreadsheet. When the message is complete, you select Mail Merge, because you can merge and send in two steps. It connects to the spreadsheet, and you have the option to send later, with the messages compiled and stored in your mailbox, or to send them right away as soon as they compile. Those, briefly, are the steps you go through, and you can see where the curly brackets reference your custom fields.

Here's another version of that. Here's the first name: this first name is pulled from here, and this SurveyURL is coming from this column. And the constituency field says how we should make improvements for undergraduate students, or for users generally, depending on what kind of message we want to send. It could be long, a couple of sentences; we could put whatever we wanted into that spot. And here, as I mentioned earlier with the coupon code, we were pulling from here. So this gives you a little bit of a sense of how it worked.

We didn't know how long it would take or what it would be like while we were in the process of doing it. We started on a Friday afternoon and merged the messages. We did a number of practice runs to make sure the formatting worked, that the links were active, and that things appeared the way we hoped they would for users. It took us about two hours to merge 11,000 email messages. That was great. We merged them, put them together, and held them in the mailbox. Then Monday morning, right around 5:45, we did the send, and it went very quickly; it only took an hour and 40 minutes. Monday morning was apparently a really good time to send mail on campus, and we didn't really have any trouble, though we did see a bit of variation with the later mailings.
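Before moving to the reminder schedule, here is a minimal Python sketch of the two mechanics Rachel has just described: filtering the tracking spreadsheet down to non-respondents, and substituting double-curly-bracket placeholders the way the Thunderbird Mail Merge add-on does. The file name and column headers (SubmitDate, Constituency, and so on) are assumptions based on the slide, not the actual spreadsheet schema:

```python
import csv

# Load the tracking spreadsheet ARL returned, exported as CSV.
with open("tracking.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# UMass's rule: anyone with *any* submit date is skipped, even if the
# response was later flagged invalid; only non-submitters get a reminder.
reminder_list = [r for r in rows if not r["SubmitDate"].strip()]

# Render each message the way the Mail Merge add-on does, replacing
# {{Field}} placeholders with values from the spreadsheet columns.
TEMPLATE = (
    "Dear {{FirstName}},\n\n"
    "Please tell us {{Constituency}} by taking our survey:\n"
    "{{SurveyURL}}\n\n"
    "Print this message and redeem code {{CouponCode}} for a free "
    "beverage at our cafe, whether or not you take the survey.\n"
)

def render(template: str, row: dict) -> str:
    out = template
    for field, value in row.items():
        out = out.replace("{{" + field + "}}", value)
    return out

for row in reminder_list:
    print(render(TEMPLATE, row))  # in practice, Thunderbird compiles and sends
```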
We sent the reminder the following Monday, but we didn't get started until a little later in the morning, so even though we were sending fewer messages, it took a lot longer both to merge and then to mail, and we did those back to back. For the third and final message, we again did a Friday/Monday split; that Friday, though, we merged in the morning rather than on a Friday afternoon, and it took a long time. It's definitely not an exact science around the merging.

One thing we learned is that you have to pay attention, and I'll show you here: if you're in the process of mailing and you get an alert message, it will hold up the mailing until someone clicks OK. So we stayed close by and didn't leave it for long periods of time. That was certainly a factor in how long it took; it could have gone a little more quickly if we had been more attentive to any error messages that came up. That said, we really didn't get very many error messages at all, which was really nice. We get a regular update of the email addresses from our campus, so they're very recently updated and a pretty good source; we don't get a lot of error messages, and that was really helpful for being sure we were sending messages to real users.

The one thing we did learn is that when we merged the messages on a Friday, even though we sent them on a Monday, the timestamp on those messages was from Friday. If that changes the order in which the mail appears in the recipient's inbox, that's something to think about. It could be good or bad, depending on your intention, but we didn't realize it until we received our own Monday-morning messages with a Friday timestamp. So that was something we tried to pay attention to as well. I don't know, Martha, if you wanted to take questions any time here or just wait until the end, but that was a really quick introduction to how we managed the survey here.

Let me unmute here. Thank you, Rachel. Let's take questions at the end, and I'd like to encourage people again to put their questions in the chat box; we're going to read them. So please do use the chat box in the left-hand corner of your computer screen to add your questions. Thank you, Rachel, and let's move on and hear from Sarah Murphy at Ohio State.

Great. Thank you, Martha. Here at Ohio State we've been administering LibQUAL+ since 2002; we've done the survey nine times. In the early years, our main reason for our interest in doing LibQUAL+ was in part to support the renovation of this building here, the Thompson Library. As time has gone by since we finished that project, our interest in LibQUAL+ is now more to serve as one of the benchmarks on the scorecard for our library strategic plan. For instance, this slide shows that we're looking at the Library as Place dimension, and really we're just trying to maintain or improve our score by bringing this gap number closer to zero. We have something here from a teaching and learning scorecard for Library as Place and the Affect of Service dimension, and in our research and innovation scorecard we're looking at Information Control.

Our process for administering LibQUAL+ confidential here at OSU was very similar to Rachel's process at UMass Amherst. We did do mail merges; however, we used Outlook, and ARL sent us a spreadsheet that had unique URLs.
We sent ARL a spreadsheet with all of the names for faculty, undergraduate students, graduate students, and our library staff. We took the unique URLs we received back and plugged them into an Outlook mail merge, and this was the message we sent. In the middle, under "take survey now," we had the URL. One little quirk we found through this experience is that if you do a merge with the URL and the recipient opens the message via an Outlook client on their desktop, the URL link stays live. However, if the recipient opened the message through a web mail application, that link was sometimes live and sometimes not; we spoke with our IT department, and we never really found an answer why. So if we do this again, we will change our language a bit, to say "click here now, or copy and paste this URL into your web browser," because that upset some of our respondents.

This graph shows our respondents in relation to the total population. By doing LibQUAL+ confidential and sending reminder emails only to individuals who had not taken the survey, our response rate was not affected; it did not go up or down. It was about the same as whenever we have administered the survey in the past: we had a 16% response rate, and we distributed a total of 11,981 surveys.

This slide shows one report we were able to make with the raw data from our results. In the past, because of the way we administered LibQUAL+, without any identifying information, we could only parse our data by OSU colleges and schools. For a campus like OSU, which has many diverse populations and unique departments, such as veterinary medicine, landscape architecture, and pharmacy, and where our librarians are assigned to multiple departments that are not always in the same college or school, that makes parsing out the results difficult at times. With LibQUAL+ confidential, as this next slide shows, we were able to take the unique IDs and match them back to local data sources to obtain codes for majors and codes for faculty departments, and then match those codes against a reference data sheet we keep here in the libraries, just an Excel spreadsheet that maps the codes to the assigned librarian. So now, if we have a librarian assigned to veterinary medicine, environmental sciences, and psychology, that individual can review the LibQUAL+ survey results for her user group populations, and for those populations only. We're using Tableau to run this, and it works pretty well. That's basically, I think, all of my comments, and we are at the question slide again.

It's really interesting that you can now actually tell the different subject liaison librarians, here's the data from your own subset of users. Have you had any direct feedback from those liaison librarians about the ability to mine the data this way?

Well, let me go back a slide or two. One thing I do want to mention is that this is set up so that if the N is less than 10, no results will show. That's one way we're respecting the confidentiality of the individuals who took the survey; and also, since we're delivering these results using Tableau and Tableau Reader, the underlying data, any identifying data, is not part of that delivery.
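Sarah's pipeline feeds Tableau, but the join-and-suppress logic she describes can be sketched in a few lines of pandas. The file names, column names, and the InformationControl score column below are hypothetical stand-ins, not OSU's actual schema:

```python
import pandas as pd

# Hypothetical inputs: survey results keyed by unique ID, a campus table
# mapping IDs to major/department codes, and the libraries' local Excel
# reference sheet mapping codes to the assigned liaison librarian.
responses = pd.read_csv("libqual_responses.csv")    # UniqueID + survey scores
codes = pd.read_csv("campus_codes.csv")             # UniqueID -> MajorCode
liaisons = pd.read_excel("liaison_reference.xlsx")  # MajorCode -> Librarian

# Match respondents back to their codes, then map each code to a librarian.
merged = (
    responses
    .merge(codes, on="UniqueID", how="left")
    .merge(liaisons, on="MajorCode", how="left")
)

# Respect confidentiality the way the Tableau workbook does: suppress any
# librarian's population with fewer than 10 respondents.
counts = merged.groupby("Librarian")["UniqueID"].transform("count")
publishable = merged[counts >= 10]

# Per-librarian mean score for an assumed "InformationControl" column.
print(publishable.groupby("Librarian")["InformationControl"].mean())
```

The n-of-10 floor is applied before any aggregation is shared, so a liaison with a very small constituency sees no results at all rather than a mean that could expose individuals.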
So, your question, though, is how is it received? I cannot claim credit for presenting the results this way; I got the idea from Jeremy Buhler over at the University of British Columbia. They were really happy there were no radar charts. They liked the bouncing balls. We call this the bouncing balls because when you change the filters over here on the right, all of this recalculates and the balls move up and down. If a ball is within the gray bar here, that means service is acceptable; if it's red, that's the red you would see on a radar chart, and if it's green, that's the green you would see on a radar chart. These results are limited to our faculty right now, and to all of our subject librarians. That gives our subject librarians some more richness in the results too, because they can see the results just for their user population, color-coded in a way that's meaningful to them.

So have you had any meetings with the subject librarians as a group to discuss how they are interpreting these data?

We've had a faculty meeting, and we've presented this at the departmental meeting for our research and education department.

So you've had some. And Rachel, how are you thinking in terms of how this information can be used by the subject liaison librarians at your institution? You may have muted yourself.

How about now? Is that better? Now we get a voice. We haven't mapped the individual information back to a department level in the same way Sarah has. We are looking at our results at the school and college level according to the LibQUAL+ categories, and we're putting together versions or reports with the ability to look at that data by those broad areas. Sometimes that might be pertinent to one or two selectors or subject liaisons, and sometimes, in the humanities, there might be more people connected to those departments. So we're not looking at the data with that level of specificity at this point, but the nice thing about maintaining the email address with the data is that we can do so easily, because that's part of our ILS data and all the other patron data we have, and we can look at it all together.

And actually one of the participants, Arielle Diordorf, was asking specifically about how you are planning on matching the respondents to other data based on their name or email: what kinds of data are you hoping to compare it to? It sounds like at this point you don't have a specific plan, but the ability is there if the need arises in the future.

Yes, and I expect that as we look more closely at certain pieces, questions will be raised that we'll then be able to answer, to say, oh, what does this mean, and then we can look more closely. But we don't have a particular question in mind at this point.

Thank you. We have another question, from Brian Roberts of Brigham Young University: at Brigham Young University we opted to use Qualtrics to administer our survey, which maintained anonymity while allowing us to personalize the surveys and track those who had gone to the LibQUAL+ survey, though we had no idea if they ever completed it. It did improve our response rate somewhat from our 2013 effort, and it certainly minimized the number of reminders that were sent. So, Rachel, do you want to say something about the reminders and the burden on the respondents? I know that was a big driving force.
Sure. We were really pleased at the potential to not send reminders to people who had already responded, and to not have to craft the explanation, if you've already responded then you can ignore this, or the explanation, again, that we don't know whether or not you responded. It just seemed inelegant, and we felt we would be losing people and burning capital at that point, so we were really pleased to have that. It also cut down on people sending messages saying, "I already returned my survey," introducing confusion about whether or not we had received their survey, so we were really glad to be able to do that.

It's hard for us to know what the impact was. It turns out that we sent fewer reminders: in the past we sent the invitation, here's your survey, and then three reminders; this time, depending on the user group, we stopped after two. That may have just been a result of our experience, knowing which people were not responding and being able to follow that; seeing the numbers we were getting back, we felt okay and said, we don't need to send another reminder. In some ways, just having more information about what was happening allowed us to make a choice not to send another reminder, which I don't think we were anticipating at all.

In closing, and this is going to be my last question, I know both of you work very closely with the institutional research offices on your campuses. Would you like to say a word about how that relationship is helping enhance the implementation of LibQUAL+? Because that good relationship, I think, is important to be able to do a confidential version of the survey. Sarah, do you want to step in?

Well, sure. I have a pretty good relationship with our institutional research office. Over the years, after I sent many, many requests to them, they said, how about we just give you access to this data? For instance, I used to have to ask our institutional research office for lists of names for various surveys, so they coached me through what I needed to do here on campus, taking the appropriate training and getting the appropriate certifications, so that I could start mining some of that data on my own. I can't do all of it; faculty data, for instance, I still have to request from our HR department. But I can generate my own samples of students for these surveys.

Rachel? We have a nice relationship with our folks on campus as well, and much of our work really does fall under programmatic assessment, as you explained in your introduction, since we have direct access to much campus information through our ILS, through the load that's already given to us as part of the work we do. And because there is a lot of programmatic assessment both in the library and on campus, in terms of how we are serving users, what we are doing, and how we are being responsible, it sits less in the research arena and more at the assessment end of things. We had originally done the full IRB proposal back in 2004, and since then we have had exempt status for the survey.
I can say we had always put it forward as confidential rather than anonymous, in that, as you mentioned, the IP information was retained and it was possible for those connections to be made, even though all the appropriate safeguards were in place. So that didn't prevent us from doing the survey, and it really wasn't a big shift in terms of changing the parameters. It certainly was in terms of our ability to have that information, but it was information we already had connected to a lot of other kinds of library use.

Thank you, thank you both. I do want to remind everyone that we have an opportunity to share our emerging and shifting practices and new ways of engaging with assessment questions and issues at the upcoming Library Assessment Conference, which will take place in Arlington, Virginia, at the Crystal Gateway Marriott. So mark these dates on your calendar: October 31 to November 2. I hope we will have opportunities to do a LibQUAL+ share fair there. At this point, from an ARL perspective, we would like to engage a few more libraries for the next phase of the pilot, where we're going to build an online interface: much of the back and forth we did with Rachel and Sarah through emails is going to be done through a web interface, and we would be very interested in having a couple of you test that environment before we launch it as an integral feature of LibQUAL+ for future years. So I hope to see many of you at the Library Assessment Conference, and I hope to see a couple of you in the next pilot. Thank you very much to Rachel Lewellen and Sarah Murphy, and thank you to Amy Jaeger and to all of you who have been with us. Remember, for those who want to listen to any of this information again, it will be available on the ARL YouTube channel. Have a great day.