Hi, I'm Leanne George, coordinator of the SPEC survey program at the Association of Research Libraries, and I'd like to thank you for joining us for this SPEC survey webcast. Today we'll hear about the results of the survey on collection assessment. These results have been published in SPEC Kit 352, which is freely available on the ARL Digital Publications website, publications.arl.org. Before we begin, there are just a few announcements. First, everyone but the presenters has been muted to cut down on background noise, so if you are part of a group today, feel free to speak among yourselves; you won't disturb other participants. We do want you to join the conversation by typing questions in the chat box in the lower left corner of your screen. We will answer as many questions as possible at the end of the presentation; I'll read the questions aloud and the presenters will answer them for you. This webcast is being recorded, and we will send registrants the slides and a link to the recording in the next week.

Now let me introduce our presenters. Karen Harker is Collection Assessment Librarian at the University of North Texas Libraries in Denton. Jeanette Klein is an Interdisciplinary Information Science PhD student at the University of North Texas. You can use the hashtag ARL SPEC Kit 352 to continue the conversation with us on Twitter. Now let me turn the presentation over to Karen.

Thank you, Leanne, and thank you all for coming. I'm Karen Harker, and with me is Jeanette Klein. In this presentation, we will review our reasons for undertaking this project, the overall response to the survey, and the details of selected aspects of the results. We will then discuss how these results can help all of us in our libraries, point you to the SPEC Kit itself, and finally answer any questions you may have.

Like any good work, there is a preface to set the stage. In our case, we wanted to clarify our use of the terms "assessment" and "evaluation." For this, I will first defer to the classic text on collection management by Peggy Johnson. In chapter seven of Fundamentals of Collection Management, on the "analysis of the library's collection, its use, and ultimately its impact," she distinguishes these terms in this manner: she considers assessment to be an examination of "how well the collection supports the goals, needs, and mission of the library or parent organization." Conversely, she considers evaluation to be more of a comparison of the collection with some internal or external criteria. However, we did not make that distinction, a distinction that can be difficult to communicate effectively, so for the purposes of this survey, we used these terms interchangeably.

Karen and I, during the course of our own work together, realized that while a significant amount of literature exists on the topic of collection assessment, specifically in the form of case studies, very little literature existed on what methods and techniques other institutions were actually using in the field. As a result of these observations, we realized that a unique opportunity was at our fingertips to investigate what practices are being employed, above and beyond the traditional "should" and "could" approaches historically discussed within the collection assessment literature.
We then developed the SPEC Kit proposal with the key objective of investigating the methods, measures, and practices used at ARL libraries, and also determining what forms the results of those assessments took, both for use and for dissemination at other academic institutions. But, as with all projects, the underlying objectives were far broader than our stated objective. Especially important to us as we began developing the survey was our desire to see how other libraries analyzed their collections; the time commitment within the assessment process for both data gathering and analysis; who was involved in the different aspects of assessment when data gathering and analysis were being performed; what the results were used for; to whom they were disseminated; and, lastly, whether other ARL libraries perceived their collection assessment processes as being successful and what areas of improvement, if any, they were willing to identify. We readily recognized that we were using the traditional structure of interrogatories to develop the survey, but we also hoped that by doing so we could establish a baseline from which additional inquiries could be developed in the future.

Additionally, we had several aspirations for what we wished to gain by conducting the survey. Of course, we wanted to gain additional knowledge of the methods and measures used at other institutions beyond what we currently use at our own, but we were also hoping to identify potential collaborators for future research projects, as well as approaches and ideas for improving our own existing collection assessment process.

So what, then, was the structure of the final survey, and to whom was it distributed? The survey consisted of a total of 60 questions, many of which were multi-level and included branching logic, creating a fairly high level of overall complexity. The survey was distributed to 124 ARL member libraries, and we were fortunate to receive a total of 71 responses, yielding a 57% response rate. This response rate is slightly higher than the average SPEC Kit survey response rate. As we expected, all responding institutions indicated that they were involved in data gathering and analysis, but what we were surprised to see was the degree of diversity expressed throughout the survey in how they approached and engaged in data gathering and analysis.

Exploring the survey responses, we turn once again to the original questions of who, why, and when, as these interrogatories pertain to the identification of the purposes and outcomes of collection assessments, the locus of control of data gathering and analysis, and the human resource element related to the time spent on these processes and the number of people engaged in them. The next few slides will break down each of these three key areas of focus in more detail while highlighting overall findings and some unique features that appeared within the responses.

One of the principal goals of the survey was to ascertain exactly who is doing what, and what data is being used, within collection assessment data gathering and analysis. Of the institutions responding to questions on the gathering of data, 97% indicated that they gather collections data above and beyond the requirements of the ARL and IPEDS statistics surveys.
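For those following along with the numbers, the response-rate figure quoted above is simple arithmetic; here is a minimal Python sketch using exactly the counts reported in the survey:

```python
# Response-rate arithmetic for the figures quoted above.
surveyed = 124   # ARL member libraries that received the survey
responded = 71   # completed responses

rate = responded / surveyed
print(f"{rate:.0%}")  # -> 57%
```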
Delving into these responses more deeply, 49% noted the presence of both formal and informal elements in the process used for regularly assessing their library collections, while almost 20% indicated that either a formal or an informal process was in place. Interestingly, 30% indicated that at the time of the survey a process was not in place, but that they were working toward instituting one.

Sixty-five institutions reported their frequency of assessment. This particular question was open-ended, and from the 96 responses we developed the seven levels of assessment frequency shown in the chart. Somewhat surprising to us was that almost 42% of responses indicated that assessments were conducted on an as-needed basis, surpassing even the number of institutions performing assessments on an annual basis. Several institutions also noted that they conduct assessment on a monthly or ongoing basis. This was interesting, as it leads to further questions in our minds about what types of collections are being assessed on such a frequent basis.

Knowing a little more about how frequently institutions were conducting assessment, the survey questions then transitioned to gathering insight on the scope of collection evaluations. Within this series of questions, 67 institutions responded, indicating the formats and disciplines included in the evaluation. With just over 52% of the responses, it is clear that the majority of institutions are conducting evaluations on all formats and all disciplines, including their digital collections. At the other end of the spectrum, only 4% of institutions indicated that they were evaluating all formats for selected disciplines, or all formats and all disciplines but not including digital collections.

As collection evaluations are being conducted, a point of interest is what formats are included in the evaluation process. Sixty-seven institutions chose one or more of the eight provided format options, for a total of 340 responses; an average of just over five formats was selected by each institution. As expected, the most frequently evaluated formats are electronic and online, and print. However, it is worth noting that between 63% and 67% of institutions also perform evaluations on the remaining five formats.

Similarly, the survey explored the types of collections included in the collection assessment process. Interestingly, 33% indicated that they assessed three collection types, with most indicating monographs; journals and serials; and DDA. Delving even further into the types of collections assessed, the 67 responding institutions made a total of 287 selections from the seven available collection type options and one "other collections" category. As shown, the most common types of collections assessed were journals and serials, receiving 23% of the total selections, and monographs, receiving 22%. DDA received 16%, and open access and archives each received 8%. On average, institutions selected about four collection types in this section. In reviewing the survey responses, 11 institutions indicated that they assessed all eight of the collection types, while six institutions indicated that they only evaluate books and journals. Interestingly, four institutions selected only one collection type.
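The per-institution averages quoted above (just over five formats, about four collection types) come from dividing total selections by responding institutions. For anyone tabulating a multi-select question like these themselves, a minimal pandas sketch, using small hypothetical data rather than the actual survey responses:

```python
import pandas as pd

# Hypothetical multi-select data: one row per institution, one
# boolean column per option (the real survey offered eight formats).
df = pd.DataFrame({
    "print":      [1, 1, 1, 0, 1],
    "electronic": [1, 1, 1, 1, 1],
    "microform":  [0, 1, 0, 1, 1],
    "media":      [1, 0, 1, 1, 0],
})

# Share of institutions selecting each option, as a percentage
print((df.mean() * 100).round(0))

# Average number of options selected per institution
# (in the survey: 340 selections / 67 institutions = about 5.1 formats)
print(df.sum(axis=1).mean())
```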
Now that we have a clearer picture of what formats and types of collections are being assessed, we will look at institutional responses as to the purpose of the assessments conducted. As mentioned in our purposes for conducting the survey, we were very interested in ascertaining why collection assessments are initiated and how they are used. Based upon selections made by the 65 responding institutions across the nine provided options and one "other" category, for a total of 373 responses, nearly all respondents indicated that collection assessments were initiated for reasons associated with collection development, as well as for library administration or other library purposes. Accreditation and new program reviews were also very common, although university-level accreditation was indicated by just over half the respondents. In this chart, the number of options selected corresponds with both the size of the box and the color. On average, 5.7 categories were selected for this question. Shared collections received a fair number of responses, with nearly 50% indicating assessment for the purpose of initiating a shared collection and 37% indicating assessment for evaluating a shared collection. Within the open-ended responses to the "other" category, comments indicated reasons related to collection movement and space, external reporting, budgets, and weeding and deselection. A few unique comments included "understanding user behavior," "maximizing our utility," and answering questions from departments about library funding and acquisitions. To see the detailed comments provided for the open-ended "other" categories, we encourage you to review the section on the purpose of assessments within the SPEC Kit.

Now that we understand a little more about why assessments are being initiated, we turn our attention to how the completed assessments are being used. This survey question also had 65 responding institutions, who were able to select as many options as were applicable from the 21 answer options, one of which was an open-ended "other" category, resulting in a total of 737 responses. As shown, institutions selected an average of 11 categories, and two-thirds of the respondents indicated that assessments are used for demonstrating value and/or justifying funding, evaluating collection strengths and weaknesses, and adjusting funding allocations.

Understanding the purpose of assessments and how they are used provided valuable insight into what is happening at research libraries, but how are the collection evaluation and assessment processes being coordinated? Understanding the locus of control within the data gathering and analysis processes for collection assessment is our next topical area. The levels of data gathering and analysis were divided into three broad categories to determine where data gathering and analysis occurred. A total of 67 institutions responded, and as we reviewed the data, we noticed that of those institutions that perform data gathering and analysis together, 80% indicated that data gathering and analysis occurred at the local level, 40% at the consortial level, and just over a third at both the local and library system levels. Interestingly, 10 institutions indicated engagement with shared collection partners other than consortia, and five institutions indicated gathering and analysis on multiple levels, including the local system, consortium, and shared collection partners.
At a more granular level, we proceeded to determine whether a centralized or decentralized process was in place for data gathering and analysis. Of the 67 total respondents, 39% indicated a centralized process, while 61% indicated a decentralized process for data gathering and analysis. Of those that indicated a decentralized process, about 40% engaged separate committees for data gathering, with committee sizes ranging from fewer than five to more than 40 members; on average, committees had five to ten members. For data analysis, the decentralized institutions indicated committee sizes ranging from four to 40 members, with an average committee size of around ten members. This is two to three times larger than the reported numbers for data gathering.

Determining who was performing which function, whether the gathering or the analysis of collections data, was both one of the most challenging parts of the survey to develop and one of the most challenging to analyze during the tabulation of the data. As shown in the chart, institutions identifying as decentralized (the committee/group segments shown in dark blue) had the highest number of responses for data gathering, analysis, and gathering and analysis together. Surprisingly, "other structure" (in green) also received a high level of responses, while within the centralized single department/position category, the responses were fairly evenly distributed.

The number of individuals involved in the collection assessment process was an area we hoped to be able to delve into within the survey. Certain insights emerged: at institutions where data gathering and analysis are performed by a single position, that person dedicates an average of 59% of their time to those duties, while at institutions where the same duties are performed within a single department, the department allocates 45% of its time to the gathering and analysis of data. It was also noted that an average of 1.4 FTEs are being dedicated to collection assessment. Determining trends in the amount of time spent on these activities was more challenging, as only eight respondents provided input to this question, and the responses received varied widely, with data collection estimates from less than 50 hours to more than 2,000 hours per year and data analysis estimates ranging from 20 to 200 hours per year. This did not allow for any conclusive themes or trends to be drawn from the survey. With this information at hand, I will now turn it over to Karen.

So we've covered the who, the why, and the when. Now we'll discuss the how: the methods and tools used to collect and analyze data and disseminate the results. There were two dimensions that we measured regarding specific data tools: actual use and current interest in using. Here, the size of the rectangle indicates use, current or past, with Excel being the largest because, well, everybody uses Excel. Color indicates interest in using, with visualization having the most interest, at 39% of the respondents. Indeed, data visualization as a tool figures prominently in the responses, with a moderate level of use and a strong interest in using. We were surprised that databases also figured heavily, with nearly two-thirds having used MS Access and nearly half having used Microsoft SQL Server. Of the four commercial collection analysis tools in our survey that compare holdings with other libraries, YBP's GOBI peer groups had the greatest positive response, with over 60% having used it, either currently or in the past, and another 20% interested in using it.
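The chart Karen describes here is a treemap, with rectangle size keyed to use and color keyed to interest. As a minimal sketch of how such a chart could be built, assuming the third-party squarify library and illustrative numbers (only visualization's 39% interest figure is taken from the survey):

```python
import matplotlib.pyplot as plt
from matplotlib import cm
import squarify  # treemap helper: pip install squarify

# Illustrative values: size = share having used the tool,
# color intensity = share interested in using it.
tools    = ["Excel", "MS Access", "SQL Server", "Visualization"]
used     = [95, 65, 48, 50]          # hypothetical percentages
interest = [0.05, 0.12, 0.20, 0.39]  # visualization's 39% is from the survey

colors = [cm.Blues(0.25 + 0.75 * x / max(interest)) for x in interest]
squarify.plot(sizes=used, label=tools, color=colors, pad=True)
plt.axis("off")
plt.title("Tool use (size) vs. interest in using (color)")
plt.show()
```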
Over half of the institutions reported having used OCLC's collection evaluation system previously, but few are currently using it. ProQuest's Intota had the most institutions interested in using it, and because Bowker's Book Analysis System is no longer offered, it had no current use, though a small set had previously used it. Now, we understand that these are not equivalent tools, but they use the same approach: peer comparisons of collections. Other tools mentioned in this question could be grouped into these categories: holdings analysis, such as GreenGlass, the most recent addition to the toolbox, and serials overlap tools like Serials Solutions' Ulrich's and Colorado's Gold Rush; usage data management, specifically ProQuest's 360 Counter, EBSCO's Usage Consolidation, and Ex Libris's UStat; ILS data analytics services, notably Alma, Innovative, and SirsiDynix; data storage, such as LibAnalytics and LibPAS; and, finally, citation analysis tools, including altmetrics.

We also wanted to know what librarians were dreaming of. What tools were missing? What did they want done that they couldn't get done? Generally, they proposed either improvements to existing tools, like ILSes or ERMs, or pie-in-the-sky tools that just do not exist yet. The solutions they wanted centered largely on data aggregation and integration, both between and within systems. They also wanted tools to evaluate specific resources more easily, based largely on cost per use. Other desired solutions included ways to automate the collection of data; more effective and easier-to-use reporting and visualization tools; and ways to make holdings assessments easier to generate and more useful in reporting.

In addition to the tools, we wanted to know what methods librarians have been using in assessing their collections. The options provided in this survey were selected and organized based largely on the matrix that Peggy Johnson provides in chapter seven of her text, Fundamentals of Collection Management. This matrix has two dimensions: quantitative to qualitative, and use- or user-based versus collections-based. Here, the rates are the percentages of ARL libraries that have used each method at least once in the last ten years, and the color varies with the response rate. Three of the four quantitative collections-based methods in the upper right quadrant had been used by at least three-fourths of the libraries, while the qualitative collections-based methods, just below, had been used the least, on average by half. Methods that were quantitative use- or user-based had the widest variation in use. Most institutions looked at the usage of e-resources, but only 14% used MINES for Libraries. We were a bit surprised that few institutions used open data sources in their collection assessments. These sources include the national surveys, as well as data gathered on the impact of journals that are independent of the more traditional and costly Journal Citation Reports.

As noted by Megan Oakleaf and others who are deep in the assessment of academic libraries, gathering data and analyzing that data are only half the work. The information generated from that work must be disseminated to those who will use it to make decisions. So we asked how, and to whom, these collection evaluations were reported. Generally, the most common audiences were the internal stakeholders: library administration, collection development, subject librarians, and other library staff.
Those in the broader parent organization were far less likely to receive this information, and certainly not the general public. We were also interested in learning if and to what extent librarians share their data with their stakeholders and with the world, whether summarized, as presented in reports, or raw, that is, at the more detailed level, like expenditures at the item level. About a quarter make their data accessible to stakeholders directly, no intervention required. A third make their data available upon request, while another quarter make very little data available at all.

Generally, the format of the results of collection assessments depended largely on the audience for the results. Most commonly, reports were delivered as print or PDF or as a presentation, and these were made accessible via the library's intranet for internal stakeholders, or delivered directly, by mail or email, to the institutional stakeholders. The libraries' own institutional repositories were disappointingly underutilized for such dissemination.

We were particularly interested in learning what effect, if any, these collection evaluations or assessments had on the libraries themselves. Over a fifth reported that the librarians gained a better understanding of their collections, and slightly fewer reported that the evaluations resulted in a change in collection development priorities. Improved funding for either targeted collections or overall collections was reported by 15% and 9%, respectively, and another 13% reported improved understanding of collections by the faculty themselves.

Now, collection assessment requires skills in a lot of areas. We wanted to know what these librarians thought were the most important skills, so we asked them to rank the skills from first (most important) to tenth (least important). In this chart, the color indicates rank, green highest, red lowest, gray in the middle, while the size of the square indicates the number of responses, from one to 26, the most any one skill was given at a single rank. We noticed these skills can be grouped into three distinct categories: broad principles, critical thinking, and technical. Merging the ratings of these groups, it appears that analytical skills were considered the most important by the most librarians, followed by broad principles, while technical skills were considered less important.

All that we have been discussing so far, the purposes, the outcomes, the human resources, et cetera, is dependent upon the climate, or attitudes, of the librarians and their administration. Generally, librarians indicated that they worked in a climate that was positive and supportive of collection evaluation and assessment. In particular, they reported that the internal stakeholders were interested, but few reported that external stakeholders had any interest. It should be noted that the first item listed here, "data difficult to gather," is itself a negative statement; thus, agreement with this statement is more negative than positive, so we inverted the color scale to match the context of the remaining statements. Nobody disagreed with this statement, and about half strongly or very strongly believed that yes, data is difficult to gather. While institutional climate was important, we also wanted to better understand the attitudes of those who were most closely associated with collection assessment. Most of these statements were positive, but one was negative, the second one, on interpreting data, and in this case we did not invert the scale.
Generally, attitudes were positive, with most agreeing moderately or more strongly with the statements. Even for the negative statement, that interpreting data is difficult, the responses leaned positive, in that a sizable portion disagreed with the statement. We're hopeful, because the librarians are very interested in sharing results of collection evaluations, and they believe that collection evaluation is supported by theoretical foundations in collection development. Most interestingly, most respondents only moderately agreed that quantitative data trumps qualitative data, thus providing more opportunities for qualitative data to be used.

Finally, we asked about the successes and challenges that they have faced. First, the good news. Nearly a third reported that the collection of usage statistics has been very helpful for the selection or deselection of specific resources, as well as for demonstrating longitudinal trends. Others reported that collection evaluations provide the foundation of evidence-based decision-making, and a fifth reported increased collaborations with subject librarians and/or faculty. Based on the results we have presented so far, it's not surprising that the key challenges identified related to data (its quality, integration, and sharing) and to improving the processes, notably in training and in allocating resources.

As Jeanette mentioned before, our real reason for doing this was to find out what other libraries were doing and to learn from them. We, that is, you and us, can use this information to explore new ideas, including tools that enable us to integrate our data and visualize it to tell our story, as well as new methods that compare and contrast our collections with our institution's needs and with those of other libraries. We can use this survey to focus on developing the most important skills, as well as on reaching that audience which we are not reaching, the external stakeholders. ARL has joined the open access revolution by making the PDFs of the SPEC Kits freely available. You're encouraged to download our SPEC Kit, number 352, from the ARL Digital Publications website, as well as to purchase your own hard copy. We can be reached via email at unt.edu. And I'll turn it back over to Leanne.

Thank you, Karen and Jeanette. We do welcome your questions, participants, so please join the conversation by typing questions in the chat box in the lower left corner of your screen. As they come in, I'll read the questions and Karen and Jeanette will answer them for you. While we give our participants a chance to enter some questions: besides the fact that survey respondents aren't using much open data, were there any other really surprising results from the survey?

Well, just the overall fact that we got as big a response as we did, and the fact that we now know that a lot more libraries are doing collection assessment in this manner. My position was brand new, and I had never heard of a position like this before, collection assessment. There are assessment librarians, but they tend to focus mostly on instruction and reference, whereas collection assessment is both old and new: old in that we've been analyzing our collections and comparing them to others for a long time, but rarely in such a systematic way.

And you've gathered quite a rich amount of information here, but you did mention that there are some unanswered questions. Where do you think future research might go from here?
I'm most interested in benchmarking, because many of us compare our collections in what's called gap analysis, that is, the amount of overlap that two or more libraries may have. But then what? That may be, say, 20% overlap, but is that good? Is that bad? Do libraries with broad collections tend to have smaller overlap with other libraries, and that sort of thing? So that's where I'd like to go next: determining benchmarks and trends against which we can compare ourselves.

Do you have any suggestions for which data might be shared with external stakeholders, and what methods might be best for that constituency?

That's a very interesting question, because that's sort of what I'm hoping to start generating: more interest in actually posting data. It's now easier than ever to post raw data in data-sharing resources like GitHub, other blogs, and things like that, or Google Docs or Google Drive, where you can share data files. If we can share raw data, and by raw I mean things like circulation statistics, then other libraries can compare their circulation statistics. Of course we do have IPEDS, so that's getting better, but I think sharing detailed data on holdings, detailed statistics of our holdings, not just overall holdings, would be very helpful.

Thank you. And audience participants, we do want to hear from you, so please do type your questions here. I have one coming in from Richard. He asks: do you have any insight into the heavy use but low interest in databases as an analysis tool, and their low ranking in importance as a skill?

Yeah, I think in general technical skills were rated as less important perhaps because, and of course I don't have any insight into the respondents' actual thinking, but my guess would be that these are skills that are more easily developed in people. You can send people to training on databases a lot more easily than you can find somebody who already has those analytical, critical thinking skills. So there is interest in using the technology, the data visualization and that sort of thing, interest in using those tools, but I think it's considered easier to train people to do that than it is to train them in the more critical, higher-order thinking.

Samuel asks: they were wondering why the range of time required to gather data was so large, 50 to 2,000 hours a year. Does this reflect manual gathering versus using automated processes? Can you tell that from the survey data?

That is an excellent question, and as we were reviewing the data, it is something that we started to question ourselves. Unfortunately, the survey did not specify manual versus automated processes, so that is an area we would be interested in exploring further. As I mentioned, this is something from which we had really hoped to get some rich data. What we received was very broad data from a small group of people, so future opportunities exist in exploring this aspect of collection assessment. I'm hoping that vendors of these products, ranging from the ILSes to the ERMs, the Serials Solutions, the ProQuests, the EBSCOs, will take note of how important data integration and data aggregation are, and start making tools that make it easier to integrate the data from various systems. Few libraries are going to be one-system shops, and I've been struggling with this problem for decades, trying to be able to easily integrate data from different systems and put it in a single place.
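The gap analysis Karen describes at the top of this exchange reduces to set arithmetic over holdings identifiers. A minimal sketch, with hypothetical OCLC-number-style identifiers standing in for two libraries' holdings:

```python
# Hypothetical holdings: one set of identifiers per library.
library_a = {"ocn001", "ocn002", "ocn003", "ocn004", "ocn005"}
library_b = {"ocn004", "ocn005", "ocn006", "ocn007"}

shared = library_a & library_b   # titles both libraries hold
gap_b = library_a - library_b    # titles library B lacks relative to A
overlap = len(shared) / len(library_a | library_b)

print(f"Shared titles: {len(shared)}")
print(f"Overlap: {overlap:.0%} of the combined collections")
```

The open benchmarking question is exactly the one raised above: once the overlap percentage is computed, there is as yet no shared baseline to say whether that figure is high or low for libraries of a given size or breadth.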
We have time for some more questions, so please do enter your questions in the chat box on your screen.

While anybody may be typing in questions, I want to go back to, again, our main purposes in this whole survey: to find out who's doing what and who's using what. We use OCLC's collection evaluation system, and what I'm really interested in now is contacting those libraries that say they currently use it, and maybe working with them to develop those trends and benchmarks using the same basic system and the same set of data. Knowing who's interested in data visualization, we'll be able to collaborate, hopefully, and come up with training for using Tableau. This is sort of a shameless plug, but if anybody's going to LITA in Fort Worth in November, we are doing a pre-conference on collection development dashboards in Tableau.

So, Karen's and Jeanette's emails are back up on the screen. If you can provide them some information about your use of OCLC tools or Tableau, you can contact them there. We have time for about one more question. I'm just curious, Leanne, if any of y'all were surprised by these skills ratings.

I think the subject expertise surprised me a little bit as being less important, because of the changing roles of library liaisons and how they're being asked to help evaluate collections.

Yeah, that's true. That was a little surprising too. And knowledge of publishing, that has gone way down too, and I would think you'd need to know the outputs and the kinds of publications, but then, you know, maybe that's the emphasis on quantitative kinds of data. I don't know.

Well, I'd like to thank all of our participants for joining us today. And I remind you that we will send you the slides and a link to the recording in the next week. And please join me in thanking Karen and Jeanette for their presentation today.

Well, thank you very much. We really enjoyed the whole experience, and I would highly recommend that other librarians submit proposals for SPEC surveys. I think the whole experience has been very useful.