Good afternoon, everyone, and welcome to Libraries That Learn: Analysis of LibQUAL Comments to Drive Service Improvement at McGill University. This webinar is being recorded on Tuesday, May 5th, 2015. All phone lines have been muted to cut down on background noise. You may ask a question at any time using the chat link in the upper left corner of your screen. I would now like to turn the floor over to Martha Kyrillidou of the Association of Research Libraries. Martha, please go ahead.

Thank you, Amy. Welcome, everybody. This is the third in a series of webcasts we've been recording in the Libraries That Learn series, and they have focused on the use of LibQUAL data. The ones that you see in gray on your slide have already been recorded and are available on the ARL YouTube channel. We have the one today and another one coming up on September 22nd, so I hope many of you will be able to join us then, too. Without further ado, I'm very pleased to have with us today Lorie Kloda, assessment librarian at McGill University. Lorie is a well-known researcher, she is one of the editors of the journal Evidence Based Library and Information Practice, and she has been working with a rich array of LibQUAL data at McGill, as we will see. Lorie, the floor is yours.

Thank you, Martha, and thank you, everyone. It's nice to be able to speak to you today. As Martha explained, I'm going to focus today's presentation on how we at McGill have dealt with analyzing the open-ended comments portion of LibQUAL results in order to drive service improvement. To give you a quick overview of what I'll try to cover in the next 15 to 20 minutes before we open up to questions: I'll give you a little background about LibQUAL at McGill and what we have been able to do over the years; I'll talk about the issue, not so much a problem, that we have with all of the qualitative data, and how we decided to solve it; then I'll go into more detail about the goals and process of how we now analyze the comments in LibQUAL, and the results and recommendations that came out of the last time we ran that process; and finally I'll give a brief overview of the toolkit we developed here at McGill Library for going forward with analyzing LibQUAL comments.

A little background for those of you who might not know: McGill University is located in Montreal, in Quebec, Canada. It is an English-language university in a French-speaking province, with a very diverse student population from all over the world, and we are a government-funded institution. We have over 37,000 full-time students, almost 9,000 of whom are graduate students. We have over 3,000 instructional faculty, about half of them tenure-stream, and in a given year just under 800 doctoral degrees are awarded. For about ten years running now, McGill University has been ranked first in Canada among the medical doctoral universities. That's a nice sunny photo of McGill, and now I have a photo of the winter. Last year McGill ranked 30th in ARL. The library is composed right now of ten branch libraries, most of them on the downtown campus.
We have one library on our satellite campus in the suburbs. The annual budget of the library is equivalent to thirty-two million US dollars, and our collection, print and electronic, is just over six million volumes.

Historically, McGill has a long and fruitful relationship with LibQUAL. This year was the tenth iteration of the LibQUAL survey, and as you can see, since the beginning McGill has been using the survey to inform service improvements. As a result, we have a lot of historical data and a lot of experience receiving the LibQUAL reports and results and trying to use them to make changes, and we've been pretty successful at doing so. Just to let you know, my role as assessment librarian only began in September of 2012, so though I had been at McGill for a while, somebody else was working with the survey up until that point. The 2012 survey had already been run when I came in. So what I'll talk about is how, in coming into that role, I decided to deal, going forward, with the comments, the qualitative data part of the LibQUAL results, and what we did for the 2013 implementation of the survey. We are actually just closing the 2015 survey today.

This is a general overview of the typical sampling method we use. We run the survey quite frequently, so we don't send it out to all of our students; we send it to a sample of undergraduate and graduate students, and we send it to all tenure-stream faculty. In a given year, for each of these groups, we get a response rate of about 10 to 13 percent. What this means is that we send the survey to about 10,000 members of the McGill community; the response rate in 2013 therefore yielded about 1,200 responses, and 45 percent of those respondents left a comment. That means we had over 500 written comments of various lengths to deal with. This year I anticipate a lower response rate, but we will still have hundreds and hundreds of comments available to analyze. This is a great thing, but it means that we need a method to systematically cope with them.
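To make the scale of that concrete, here is a rough sketch of the arithmetic behind those figures. The numbers are the ones quoted in the talk; the function itself is illustrative, not anything McGill actually uses.

```python
# Back-of-the-envelope survey arithmetic using the figures from the talk:
# ~10,000 invitations, a 10-13% response rate, and ~45% of respondents
# leaving a comment. Illustrative only; the function name is invented.

def estimate_comment_volume(invitations: int,
                            response_rate: float,
                            comment_rate: float) -> int:
    """Rough estimate of how many open-ended comments one survey run yields."""
    respondents = invitations * response_rate
    return round(respondents * comment_rate)

# The 2013 run: ~10,000 invitations, ~12% response, 45% commenting
print(estimate_comment_volume(10_000, 0.12, 0.45))  # -> 540, i.e. 500+ comments
```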
For those of you who are familiar with LibQUAL, this logo is a little old, but it might look familiar; I've given you a snapshot of our old website. We have a history of sharing our LibQUAL results with our community. This is from 2002, when the librarian in charge and the library staff supporting LibQUAL would share the results notebook, share interpretation of the results, and also provide details about what actions were being taken to improve services or respond to the needs of users, based on all of the results from LibQUAL. We have a strong track record for communicating this, and we continue to do so. I also have here a snapshot from the McGill Tribune, a newspaper that is no longer in print, where we try to share with our community what we're doing, what the results are, and what actions we're taking and following up on.

One of the benefits of running LibQUAL over more than a decade is the longitudinal trend data available to us. Here I have a ten-year span to show you as an example: we can see changes in responses to particular items. This is Library as Place item 2, quiet space for individual activities. Undergraduate students place a high value on library space and quiet study space, and we can compare their perceptions to the ARL average over time. We can see that over time we are getting closer to the average, which is something we are always trying to do. I think we also have it for graduate students, where we are a little better at meeting their needs.

That being said, that's the quantitative data and the results we get back from the LibQUAL reports. In the past, we have a long history of analysis by user group as well as by branch. McGill employed a data specialist who focused a large proportion of time on conducting analysis and writing detailed reports based on the results notebook we would receive from ARL, and these were used at the branch level. Each branch would get a report, not just for the overall library but also for their branch, so they could see the LibQUAL scores for each of the items for the groups of users who identified themselves as using that particular branch, and they could use that to inform changes. At the system-wide level, the library could also look at larger trends and institute changes. Typically, comments were sorted by branch, and those comments were then divided up and shared with all staff, in particular the heads of branches, who could read through the comments and use them to inform decision-making. There was also some topic analysis done to help decipher the results and scores and answer specific questions, but for the most part the analysis was a little more ad hoc. For instance, we might load the data into qualitative software, but mostly we would just use it to search for particular terms. For example, if we needed more information about how people were reacting to a change in the catalog, we would search for the word "catalog" and look at the comments about the catalog. So the way we were analyzing the data focused much more on the quantitative than the qualitative. This word cloud is an example from a few years ago, when we tried to think more about the qualitative data and started to display and present it to all of the staff in a way that would make it more interesting for them to dig deeper into it.

At that time, as I mentioned, I started as the assessment librarian. I have a background in qualitative methods, so I was interested in doing more in-depth qualitative analysis, and the opportunity arose in the spring of 2013, just as the LibQUAL survey was closing. (Sorry, I'm just checking the chat; someone said they couldn't hear for a moment. Okay, otherwise fine.) So we had the opportunity in the spring of 2013 to work with a student from the School of Information Studies. McGill University has a program for students who want to do a practicum, also known as an internship. It is an unpaid internship for which students get course credit, and they spend a hundred and twenty hours over a semester.
In this case, it was the spring semester, which worked out really well with LibQUAL. What we did was find someone who had an interest in qualitative analysis, a little background in research, and an interest in assessment. The practicum student joined the library for those three months, and the objective of the practicum project was to develop an analysis plan for the open-ended comments, to actually run that analysis for at least a portion of the comments from the most recent LibQUAL survey, that is, the 2013 survey, and, at the end, to create a toolkit so that this wouldn't just be a one-time occasion: we would develop a method that would hopefully be sustainable and that we could continue to use every time we ran LibQUAL. I'm pleased to say that it was a very successful practicum, and the student, Jennifer Kambua, is now a librarian at a college here in Montreal. It was a very fruitful practicum, or internship, so I'll tell you a little bit about it.

It started off with the student doing some readings and meeting with some librarians, including myself, to familiarize herself with library assessment in general and assessment in university settings. She read about LibQUAL, reviewed results notebooks from previous iterations of the survey at McGill, and looked at coding practices that had been published; there are several publications, and I have references at the end to some of the ones she found most useful. In addition, we provided some training so that she would be familiar with a qualitative coding methodology and, of course, with the actual software used to do the coding. The software in this case was ATLAS.ti, a well-known qualitative analysis package and the one we use here at McGill Library. The practicum student was then involved in developing a code book, doing the actual coding and analysis for a significant portion of the comments we received in 2013, and finally developing a toolkit, which included a guide on how to conduct the analysis going forward. I'll talk about each of these in a little more detail now.

First, I'll show you an example, a portion of the code book. The method for analysis employed here is template analysis. Template analysis begins with a code list, which is essentially a list of codes and their definitions, in some cases with examples. Template analysis makes use of mostly deductive coding: you have a list of codes and you try to apply the existing codes to the data you are reading and analyzing. But it also allows for inductive coding, that is, the addition of new codes, the splitting of codes into more than one code or category, and the editing or removal of codes as you see fit. The initial code list was developed from previous comments analysis at McGill and from code lists published by other universities. Codes aren't mutually exclusive, so a segment of text, a segment of a comment, can be coded with more than one code; this is known as parallel coding. In the end, this code book had a total of 57 codes, and some of these were collapsed into larger categories. With a few exceptions, which I'll come to in a moment, these fell within the three dimensions measured by LibQUAL: Library as Place, Affective Service, and Information Control.
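As a rough illustration of what template analysis with parallel coding looks like in data terms, here is a minimal sketch in Python. The code names, definitions, comment text, and data structures are all invented for this example; ATLAS.ti keeps its codings in its own internal format, and this is not McGill's actual code book.

```python
# Minimal sketch of template-analysis coding. A segment of a comment can
# carry more than one code (parallel coding), and each code belongs to a
# LibQUAL dimension or a special category. Everything here is illustrative.

from dataclasses import dataclass, field

@dataclass
class Code:
    name: str        # e.g. "noise"
    dimension: str   # "Library as Place", "Affective Service",
                     # "Information Control", or a special category
    definition: str  # working definition from the code book

@dataclass
class CodedSegment:
    comment_id: str
    user_group: str  # "undergraduate", "graduate", "faculty", ...
    text: str        # the coded span of the comment
    codes: list = field(default_factory=list)  # parallel coding: many codes

codebook = {
    "noise": Code("noise", "Library as Place",
                  "Mentions of noise levels or quiet study space"),
    "hours": Code("hours", "Library as Place",
                  "Mentions of library opening hours"),
}

# Deductive coding: existing codes applied to a segment of a comment.
segment = CodedSegment(
    comment_id="2013-0417",
    user_group="undergraduate",
    text="The library is too noisy and closes too early during exams.",
    codes=["noise", "hours"],  # one segment, two codes in parallel
)

# Inductive coding: a new code is added to the template as it emerges.
codebook["exam_hours"] = Code("exam_hours", "Library as Place",
                              "Requests for extended hours during exams")
```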
The few exceptions to those dimensions were special codes. One was comments on behalf of students, the case where faculty would advocate for students; that was a particular type of comment that emerged. Another set, for example, were comments about the LibQUAL survey administration itself. So there were a few extra codes, but for the most part the codes fell into the three broad dimensions.

There were three iterations of the coding, and the practicum student managed to code about two-thirds of all the comments; specifically, she coded the comments by faculty and by graduate students. The first round involved initial coding and the creation and refinement of new and existing codes. The second iteration allowed for a complete coding of the comments. The final iteration served as a review and confirmation of the coding process, and was also a way to finalize the code book. As I mentioned, these codes mostly aligned with the LibQUAL dimensions.

The code book actually has two parts. One is a hierarchical list: the dimensions, in some cases subcategories, and then the actual list of codes by category. The practicum student, or anyone doing the coding, can print that and see all the categories and codes laid out hierarchically, much like subject headings; it's a quick overview, though after coding for a few hours you start to memorize what the codes are. The other document is alphabetical, with a list of all the codes, their definitions, and exemplars to help with coding if, for example, the coder is not sure which code or codes to apply in a particular context.

Once the coding is complete, using the software you can generate reports in ATLAS.ti by code, by user group, or by a combination of the two: for example, all the codes about noise for undergraduates, and pull those up. Reading through those, code by code, one can then do further analysis and interpretation and write up the report. A draft of that was started by the student, but the full report from the comment analysis was created by the assessment librarian, who is me. The findings were organized by dimension and then by subcategory and code, so, for example, there would be a set of findings, a summary of the comments analysis, on the library website, on databases, and on noise. If there were notable differences between user groups, these would be remarked on in those sections, and in some cases direct quotes were included to provide evidence for the findings. These summaries of the findings were shared with all employees along with the original cleaned comments, the 500-plus comments, but the findings document was much shorter, a tighter summary.

After that, the Library Assessment Advisory Committee, of which the assessment librarian is chair, used that findings document to develop recommendations. In total, for 2013, there were eight recommendations that emerged from the findings, and these were presented to the library leadership. In 2014, during a strategic planning exercise, those recommendations were incorporated in different ways into strategic intentions that were part of a larger exercise that included input from all staff. As part of that, outcomes and targets were identified in order to address the recommendations, and those recommendations came directly from the analysis of the comments from LibQUAL.
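Purely to illustrate the query logic behind those reports, all segments carrying a given code, optionally restricted to a user group, then grouped by dimension and code for the write-up, here is a sketch that reuses the shapes from the previous example. It is not ATLAS.ti's API, just a picture of what those reports do.

```python
# Illustration of a "report by code and user group" query and of grouping
# findings by LibQUAL dimension and code. `segments` would be a list of
# CodedSegment objects as sketched above; none of this is ATLAS.ti code.

from collections import defaultdict

def report(segments, code, user_groups=None):
    """Every coded segment for one code, optionally limited to user groups."""
    return [s for s in segments
            if code in s.codes
            and (user_groups is None or s.user_group in user_groups)]

def findings_skeleton(segments, codebook):
    """Group segments by dimension, then code, mirroring the findings report."""
    grouped = defaultdict(lambda: defaultdict(list))
    for s in segments:
        for c in s.codes:
            grouped[codebook[c].dimension][c].append(s)
    return grouped

# e.g. all undergraduate comments about noise:
# noise_segments = report(segments, "noise", {"undergraduate"})
```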
At McGill Library we continue to share the results of the LibQUAL survey, and we continue to share the service improvements and changes we've made. For 2013 we are still in the process of making improvements, but we keep communicating with our community about that.

Finally, I just want to mention the toolkit. The practicum project had as one of its goals to develop a method that was sustainable and that we could continue to use here at McGill University. The toolkit consists of a step-by-step guide for those conducting analysis of open-ended comments in LibQUAL. It includes recommended readings, with additional readings to prepare someone for the process depending on their familiarity with qualitative analysis. For coding LibQUAL, it includes step-by-step instructions for loading the comments into the software, the automatic coding that happens for demographics, and the steps for conducting the coding itself; the code book and the guide for applying the codes; how to create reports in the software; and an appendix with the code book and the definitions. The guide, the toolkit itself, is about 20 pages. It is specific to McGill, but I believe it's transferable to other contexts. It also includes information about the number of hours needed for coding, estimated from the number of comments, and about the number of hours needed for training, divided into learning the software and familiarization with the code book. The idea is that in any given year we run the LibQUAL survey, we have enough information to decide: if one person is to undertake the coding, and that person is completely new to qualitative analysis and to the software, then knowing the number of comments we can estimate how many hours are actually required to conduct the coding and finish the analysis, as opposed to somebody who may already be familiar with qualitative analysis and the software. So no matter who is responsible for LibQUAL, that information is available, and somebody can undertake qualitative analysis of the open-ended comments and make use of them.

This is not a solo activity, so I just want to acknowledge the huge effort made by the practicum student in 2013, Jennifer Kambua, and by the advisory committee on assessment, which was instrumental in turning the findings into actionable recommendations for the library. And as promised, here are some references to articles specifically about how to conduct analysis of open-ended comments. I know there are more than this, but I just wanted to give a few that we found particularly inspiring. I have another slide, and Martha has another slide, so thanks very much. If you are interested, we also have more information about LibQUAL at McGill on our assessment website.

Lorie, tell us a little bit about the number of hours?

You know, I don't have that information in front of me, I forgot it, and that's a good point. The practicum student estimated that it takes a full 30 hours of training for somebody who is brand new to the software, the qualitative analysis software, in this case ATLAS.ti, and to learn about coding. That means reading a few articles or a chapter about how to do qualitative coding, perhaps speaking with the assessment librarian, and learning about qualitative analysis, deductive and inductive coding, what's generally known as template analysis. The actual coding then depends on the number of comments; obviously comments vary in length, but if memory serves, it's about another 50 hours for about 500 comments.
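Using the ballpark figures quoted above, about 30 hours of training for someone completely new and roughly 50 hours of coding per 500 comments, the toolkit's planning idea can be sketched as follows. The function is ours for illustration; the real toolkit presents this information as guidance, not code.

```python
# Planning sketch based on the hours quoted in the talk: ~30 hours of
# training for a novice (software plus coding methodology) and ~50 hours
# of coding per ~500 comments. Numbers and function are illustrative.

HOURS_TRAINING_NOVICE = 30      # learn ATLAS.ti + qualitative coding
HOURS_PER_COMMENT = 50 / 500    # ~0.1 hours of coding per comment

def estimate_effort(n_comments, experienced=False):
    """Rough total hours to train (if needed) and code n_comments."""
    training = 0 if experienced else HOURS_TRAINING_NOVICE
    return training + n_comments * HOURS_PER_COMMENT

print(estimate_effort(540))                    # novice: ~84 hours
print(estimate_effort(540, experienced=True))  # experienced: ~54 hours
```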
Yes, that's a good ballpark. At ARL we have offered training on ATLAS.ti in the past, typically over a two-and-a-half-day training, so 30 hours sounds right. And then, of course, you have to practice; like everything with analysis, the more you do, the more of an expert you become and the more insight you gain into different aspects of this methodology. And I would recommend that people go and read your full toolkit.

I did say that I have some more references, and here are some additional references. It's interesting to note that qualitative analysis is a fundamental part of the LibQUAL protocol, because the original items were developed and standardized based on extensive interviews that were coded with ATLAS.ti and, through an iterative approach, reduced into questions that were tested to see whether they were valid and reliable from a quantitative perspective. That was an iterative approach over the first three years of the development of the protocol, and Colleen Cook's dissertation captures the mixed-methods approach. Beyond that, the Guidry article here, the 2002 article, took feedback on the actual instrument, analyzed that feedback, and there was refinement of the instrument based on it. Then we gradually started seeing more people approaching those comments in a more systematic way. You can see the 2004 article from Begay and colleagues. In 2009, Margaret Friesen, who is at UBC, the University of British Columbia in Canada, did an analysis. In 2008, Jones and Kayongo from the University of Notre Dame published a piece in College & Research Libraries. The 2011 article by Greenwood and co-authors captures the University of Mississippi analysis of the qualitative and the quantitative. And the most recent work is actually not yet officially published, but it is accessible as a pre-print through the College & Research Libraries website; the official publication date is going to be September 1st, 2015, and I have put the URL there. It is a piece by Detlor and Ball from McMaster University, who have also approached the comments in a very interesting way. So our understanding of how to approach this kind of information is evolving, and actually I would be very interested to see whether your effort, your toolkit, will be adopted by others. I know there was an effort by Brown University, Dan O'Mahony and others, who presented at the Library Assessment Conference, where they tried to standardize a code book, so I often refer people to that effort.

There are a couple of questions from the audience. Charlie Goss is asking: do you always conduct one wholesale analysis of all responses, or do you ever split results by user group and then analyze each set separately?
If I understand that question correctly, it's a good question, because in the past, before we developed this method, there was no strict method, but the comments were typically divided by branch. As you know, in LibQUAL the respondent can select which branch they use most frequently, and we would divide them up that way. That is not how we did it this time. For 2013, we divided them by the large user groups: undergraduate, graduate, faculty, and then all the staff and other groups in different categories. That was partly because the practicum student had a restricted number of hours; she started with faculty and then went on to graduate students when she realized how much comment analysis she could get done. I think that's how we would do it going forward, and it also allows us to identify differences between these types of user groups. I don't know if that answers the question, though. We don't break it down, for instance, by sex or by age group or by other types of groups, but we do break it down by those basic user groups.

There's another question, from Sherry, on whether you are engaging in member checking.

I saw that question. Well, no, we can't, because the comments are anonymized. However, what we do plan on doing in the future is this: some questions may arise from the findings, and we're hoping to conduct focus groups with a new advisory group to the library so that we can follow up on questions that arise from the comments, just to make sure our assumptions are correct. Those wouldn't necessarily be with people who responded to LibQUAL, but it would be a way of going back to our users to verify our findings.

Another attendee is asking: what is the process to clean the comments?

I can answer that if you'd like. At McGill, when we say clean, it's very simple. The raw comments are provided by ARL immediately when the survey closes, I believe in an Excel spreadsheet. All we do to clean those is have one or two people read over the comments and remove any identifying information, either of the user who responded, so that they remain anonymous, or of library staff, any library personnel who might be identifiable, to protect their privacy. In some cases a comment is removed if it is extremely egregious, but for the most part cleaning just means removing identifiable information.

Thank you, that's very helpful.
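As just described, McGill's cleaning step is a manual read-through by one or two people. Purely as a sketch of how a script might assist, not replace, that human review, here is an illustrative flagging function; the staff-name list and the schedule heuristic are invented for the example.

```python
# Sketch of a helper that flags spans a human reviewer should check before
# the comments are shared. Illustrative only; McGill's cleaning is manual.

import re

STAFF_NAMES = ["Jane Doe", "John Smith"]  # hypothetical staff list

def flag_identifying_info(comment):
    """Return spans that might identify a respondent or a staff member."""
    flags = [name for name in STAFF_NAMES if name in comment]
    # Crude heuristic for indirect identification, e.g. work schedules
    if re.search(r"\bon (Mon|Tues|Wednes|Thurs|Fri|Satur|Sun)days?\b",
                 comment, re.IGNORECASE):
        flags.append("possible schedule detail")
    return flags

print(flag_identifying_info(
    "The librarian who works there on Saturdays was very helpful."))
# ['possible schedule detail'] -> a reviewer decides whether to redact
```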
And an item we were chatting about before: is a longitudinal analysis of the comments a good approach?

I think that's a question we are starting to ask. In fact, Martha and I were just chatting about this earlier. We now have ten different sets of comments at McGill, spanning some fifteen years. There's a lot of qualitative data, and it would be interesting to look at the data set as a whole, to look at differences, and at themes or codes that emerge over time and how things have changed. I think there's a lot of opportunity there, but I think that also points to the reason why it's good to have a consistent, rigorous method for qualitative analysis in place, in order to be able, in the long term, to make comparisons more efficiently and effectively. Right now, what that would mean would be going back to the earlier data and applying the current method that we designed in 2013.

Yes. The other advantage of doing analysis with the comments, and telling people what analysis you do and showing how you are improving, is that it will help, I think, with marketing the survey next time around. There's a question about ideas to get higher response rates: part of the communication could include elements like "this is how we improved, these are the two or three key improvements we made last year based on your feedback; help guide us for next year's improvements," or something like that.

Certainly, and we've done that in the past, and I think that is a very effective way to improve the response rate.

Back to the cleaning of the data: Emily Thornton was inquiring whether, when you remove identifiable information, it's in relation only to negative comments, or do you do that when the comment is positive too?

That's a good question. I remove all of the names and identifying information from all comments, whether they're positive, negative, or neutral. There's a lot of focus on positive and negative when people talk about the comments, and I find that to be very black and white; sometimes it's a little more nuanced. I feel it would be biased to only remove identifying information when the comments are negative. What I do do, when a comment is positive, is share it with that staff member: if someone is named, I will copy them and let them know that someone said something nice about them. But the comments as a whole, that document, is shared with all library staff, and certain people who do a lot of public-facing work may be mentioned several times. So we remove all the names so that there is no preferential bias and no negative bias toward people. We remove names, but we do not remove the comment, and I will sometimes remove information if it identifies the person, for instance "the individual with brown hair who works at this particular branch on Saturdays." I will remove that information so that nobody can be identified.

Yes, that's good sense. And one last question before we close: has McGill adopted the LibQUAL Lite protocol?

I just keep referring to LibQUAL, but of course we've been using LibQUAL Lite for the past few iterations, and in fact this year we used one hundred percent LibQUAL Lite.

Wonderful, thank you, Lorie, and thank you to everybody who's been attending. You will get the slides and a link to the recorded webcast on the ARL YouTube channel. Stay in touch with us. Thank you.