Hello, I'm Leigh Ann George. I'm coordinator of the SPEC Survey Program at the Association of Research Libraries, and I'd like to thank you for joining us for the first in a series of SPEC Survey webcasts. Today, we'll hear from the authors of the survey on scholarly output assessment activities. The results of the survey have been published as SPEC Kit 346, which is available now.

Before we begin the presentation, there are just a few announcements. First, I want everyone to understand that everyone but the presenters has been muted to cut down on background noise, so if you're part of a group today, feel free to speak among yourselves. But we do want you to join the conversation by typing questions in the chat box in the lower left corner of your screen. The presenters will read the questions aloud before answering them. This webcast is being recorded, and we'll send registrants the slides and a link to the recording within the next week.

Now, let me introduce our survey authors. Ruth Lewis is the Scholarly Communications Coordinator and Science Librarian at the Washington University Libraries in St. Louis. Cathy Sarli is the Senior Librarian for Evaluation and Assessment Services at the Becker Medical Library at Washington University School of Medicine. Amy Suiter, also at Becker Medical Library, is the Scholarly Publishing Librarian. Amy, I'll turn it over to you.

Thank you, Leigh Ann. Today we're going to provide an overview of the results of the SPEC Kit survey on scholarly output assessment activities. We'll highlight advice from respondents and future trends, and we should have time at the end to answer any questions that we don't get to during the presentation.

This is a visual definition of scholarly output. Here we have our scholar and some of her outputs, and we attempt with this drawing to illustrate that while traditionally the focus has been on peer-reviewed articles and books, there is growing acceptance, especially within the ARL community, that scholarly output includes much more than published articles. To think about assessment of our scholar's output, we consider how we measure both productivity and performance. And listed here are just a few of the quantitative metrics we can use as possible measures of what everyone wants to report: impact. These quantitative measures, in combination with qualitative assessment approaches, are often linked to evaluations of scholars for reputation, funding, and promotion and tenure decisions. They are also used to evaluate research groups, departments, and institutions.

Our scholars are experts in their fields and experts at creating research outputs, but they are not always experts in assessment, and you can see here our scholar has questions. We have answers. Research libraries have the resources, tools, and expertise to help scholars navigate assessment of their output. With this survey, we wanted to learn what activities, services, programs, and training libraries are providing to assist their scholars and researchers with output assessment.

Output assessment is a hot topic these days, and many organizations are interested in the landscape. We felt the topic fit well with ARL's focus on scholarly communications and scholarly impact. ARL is also a key member of the SHARE initiative, which is creating infrastructure to make research widely accessible, discoverable, and reusable in order to fulfill SHARE's mission to maximize research impact. For us, it was a great fit.
The SPEC survey was an opportunity for us to capture a snapshot of the scholarly output assessment activities of ARL libraries, and today we're highlighting what we've learned. The survey ran early this year, and 79 of the 125 member libraries from across North America responded.

We asked libraries if they were providing services to researchers related to scholarly output assessment, and 96% reported that they are providing these services. We then asked those 76 libraries to indicate which services they're providing. The top response, reported by 70 libraries, was consultation or guidance on bibliometric measures. The majority of respondents are also providing consultations or guidance in other areas, such as alternative metrics, the use of databases for tracking outputs, and author name issues. Some libraries are providing their scholars with publication and citation reports and with reports based on the usage of their works in the institutional repository. And although it was less common, over a quarter of libraries indicated they're providing graphs, charts, or social network maps to their scholars.

We wanted to know if these services were limited to specific user groups, and we found that the majority of libraries, 71%, provide the services to all their users. However, some libraries do limit services to specific groups, for example, only to faculty members or administrators. Comments from respondents to this set of questions also reflected the different service models that exist or are being developed in libraries today: services may be provided by designated specialists, be handled primarily by liaison librarians, or be provided on an ad hoc basis with no systematic program.

We saw additional evidence of these models when we asked directly about staffing. We requested the job titles of up to three library staff members who provide scholarly output assessment services or training. From the 62 libraries that completed this question, there were 152 titles reported. We categorized them, and titles that fit the liaison or subject librarian category were the most common. Positions related to scholarly communication were the next most frequent, and there was a fairly diverse long tail of other position titles that provide these services or training.

Libraries indicated that they have already hired or plan to hire staff to provide assessment activities, with a slightly greater number reporting that they have reallocated or plan to reallocate staff to meet these needs. Comments from the respondents again reflect that these activities will, in some cases, be the primary focus for a new hire, or more frequently will be just one part of their role.

Staff who provide output assessment services are frequently providing training to their users, too. When we asked specifically about training for scholars, researchers, staff, or students, 64% reported they are already offering training, with another 27% currently developing or considering developing training. Several different formats were mentioned, including workshops, drop-in sessions, brown bags, and ad hoc classes. Based on the titles provided, we can see that training sessions are often focused on specific resources like Scopus or Web of Science, but libraries are also offering broad-based sessions like building your academic profile and determining your scholarly impact. So we wanted to know a bit more about the tools libraries are recommending to their users.
What you see here are the top six resources responding libraries are already recommending. They include large subscription-based citation databases such as Web of Science and Scopus, along with the freely available Google Scholar. There were also two resources more specific to journal metrics in the top six, and a relatively new resource, Impact Story.

We see Impact Story again as we look at the top six resources that libraries are considering acquiring or using, shown here in orange. I should point out that we did not list these strictly by rank; instead we clustered them to better visualize the trends. The top cluster, with Altmetric.com, Impact Story, and Plum Analytics, provides evidence of libraries' interest in alternative metrics. The second cluster, Symplectic, VIVO, and SciVal, is all tools that reflect libraries' interest and involvement in research information management and research networking platforms. Libraries reported a number of other resources and software, both paid and free, for use in scholarly output assessment, and that extensive list begins on page 23 of the SPEC Kit.

We wanted to know if libraries were sharing costs with other units in their institution. Only 29 libraries, or 39%, reported that they were sharing costs with another unit. For those sharing costs, the top five resources included Academic Analytics, SciVal, Symplectic, Web of Science, and Scopus, sort of your high-dollar resources. The majority aren't sharing the cost of these resources with other institutional units. However, they are still partnering with other units on assessment activities: 53% reported that they have partnered, and another 27% are planning a partnership with another unit in their institution. Only 3% stated that they tried a partnership but it was not successful. Libraries listed partnerships with several different units, including the Office of Research, the Office of the Provost, and IT units, as well as individual departments and programs. Many reported partnerships focused on specific projects, such as implementation of VIVO or integration of ORCID, while others focus on providing service or training to scholars and administrators.

These partnerships are one way libraries can promote their scholarly output assessment activities. We specifically asked libraries to identify what methods they use to market and promote their output assessment activities. The top three methods were word of mouth, LibGuides, and library websites. Respondents provided a lot of other methods as well; 47% of them listed additional methods, including targeted emails, presentations at departmental meetings, and one library even held a wine and cheese event to promote their activities to new faculty.

Respondents had great, thoughtful advice for their colleagues. We chose to highlight just a few key points here; the full list is on page 52 of the SPEC Kit. One respondent advised the need to integrate this work into existing relationships with faculty to support their work across the research life cycle. Another noted that providing these services can help build faculty and librarian relationships. Several comments emphasized the importance of understanding the needs of faculty and administrators, building partnerships, and coordinating these efforts on campus. Libraries also mentioned the difficulty of keeping up with the tools and trends in scholarly output assessment.
Some recommended an expert, a point person, or a team who can coordinate these services and help keep staff up to date. In addition to keeping up with the tools, respondents advised that librarians really need to understand and communicate the strengths and weaknesses of available tools and measures. One library commented that tools for scholarly output assessment have limitations, and advised being mindful and explicit about this as you introduce, discuss, and utilize them. Another respondent advised honesty about the limitations of the tools and to always make caveats explicit.

We also asked respondents to think about future trends for scholarly output assessment. Their insightful responses begin on page 55, and we highlight just a few here. Libraries expect growth in the use of alternative metrics, a trend we saw in other questions on the survey. There are concerns about the increase in the number of tools available, the cost of these tools, and the resources necessary to keep up. Libraries are predicting that these new tools and metrics may affect promotion and tenure decisions, and they repeated their concerns about the limitations of tools and the accuracy of the data. But there is optimism that new systems like ORCID will help with author name issues and that new tools might better serve the needs of scholars in disciplines outside the sciences, specifically the arts and humanities.

The comments reflect the challenge we face in this area. There is no one-size-fits-all solution for our scholars or our libraries. It will be challenging to keep up with the trends and resources, and we do need to be proactive. Many pointed out that librarians possess skill sets well suited for future roles in scholarly assessment activities and see this work as an important opportunity for us.

We had some great responses to the survey, and it was difficult to limit ourselves to just the materials provided in the SPEC Kit. The SPEC Kit includes examples of reports provided to scholars, training materials, job descriptions, LibGuides, and library websites used to promote library activities. It also includes recommended readings to learn more and resources used by librarians to stay current in this area. And we had one other resource that came out after we did the SPEC Kit that we wanted to draw your attention to; I think one of my colleagues is going to copy and paste it into the chat box. It's a new book that's out.

So now we should have time for any remaining questions or discussion that you have for us.

Okay, we'll go ahead and jump in. We have a question: we were wondering, for the presenters today, what was your most surprising finding from the survey? Was there anything of particular interest?

I'll take a stab at answering that question. This is Cathy Sarli. And the question is, was there any kind of output most people were interested in seeing? Was it reports of their scholarly work or was it visualizations? Based on the survey responses, I think the combination of both types of reports, anything that helped the scholars better understand their output and impact, was found to be useful.

The next question I see is, did any of this work look at how libraries are planning to respond to the federal public access grantee requirements? I don't recall that coming up in the responses. Cathy is going to answer, I guess. No, I don't recall any either, but perhaps that's a question we should have included in the survey.
It's interesting that after we saw all the responses, which were wonderful by the way, we thought maybe we should have asked additional questions or maybe we should have phrased our questions a little bit differently. So a question we have for the audience is, what questions would you have liked to have seen on the survey?

What strategies did respondents find effective to get campus buy-in for making research available beyond researchers in their own field? The survey didn't really deal with making research available; it was more about measuring the research that was put out by a particular individual or campus, so I can't say that respondents really addressed that. I agree. That's a question we should have asked on the survey.

The next question is, I would be interested in what librarians are doing on their campuses with respect to ResearchGate and academia.edu. That is a really good question. I know this is a very common question at our university, and I think based on the responses, it appears that librarians are making that information available to their users on campus. Yes, people did mention recommending ResearchGate, some of the respondents did anyway. I don't see academia.edu. Academia.edu was mentioned maybe one or two times, compared to Web of Science, which was reported 70 times. ResearchGate, I think, was mentioned by three or four libraries. We didn't look at other ways they might be mentioning these, but this is what we could tell from how they responded.

And yes, we didn't realize that you aren't all seeing the questions. We will repost them so you can see them. I'll take care of that. Thanks, Karen. And thank you, William, for posting your guide; that's very helpful.

To the other question, from Nicole: to what level are faculty interested in metrics? That's a very good question. I don't recall any specific responses on the survey to that question. Well, we didn't ask a specific question about that. We saw comments and advice from respondents that providing metrics to faculty for their promotion and tenure packets is very helpful and can help build relationships, but we didn't quantify it. We didn't survey faculty members about their interests, and we didn't specifically ask librarians to assess their faculty members' interest in metrics. Does that answer your question, Nicole?

And I'll go back to the question that William asked about academia.edu and ResearchGate. I know for our university, our position is to remain neutral; we don't advocate one over the other. We just present the information so that the faculty can make their own decision as to what they want to do. But we do help them with any resource they may be interested in.

If you haven't had a chance to read the SPEC Kit responses, I highly recommend that you do so. There were so many interesting projects reported by the respondents, and we found that very encouraging; it gave us some ideas for how to move forward with our own program. And what I thought was especially interesting is that only 3% said an attempt to form a partnership failed; most said that their attempts to start a partnership were successful. I found that very encouraging.

Another comment just arrived: it would have been good to know about workshop attendance rates. And yes, we could have asked about that. People did report on events that they hosted.
There was a wine and cheese party, I believe for new faculty, and classes and ongoing events and one-time events and workshops and things, but they didn't really talk about attendance rates. So yeah, that would be a question to ask.

Hi, this is Ruth again. I'm interested to know if seeing this input from other ARL libraries has encouraged or inspired any library to pursue additional activities.

And Peter, I just want to clarify, and Sean, we'll get right to your question in just a second. The question about differentiating between academia.edu and the institutional repository, did you mean for the purposes of the survey or just in general? Well, I think, and feel free to jump in here, we tend to have some copyright concerns about academia.edu, and so we tend to recommend the institutional repository to our scholars because we help them manage those issues. However, we don't judge our faculty or make recommendations either way about academia.edu.

Another comment or question that came in was, are we planning, or is anyone on the call planning, further investigation into disciplinary differences among researchers? No, but I have seen articles; in fact, our bibliography has a few things. In the discussion about job titles, a lot of the responses were specific subject or liaison librarians, like the Education Librarian or the Biology Librarian, and so they would be focused on disciplinary tools. I know in that Meaningful Metrics book whose URL we sent, there is a section on disciplinary differences in scholarly output assessment. But we're not planning another SPEC Kit on this, obviously; someone else could propose that for next year.

Great, thanks for sharing that, Megan. If there are libraries considering this type of program support, the advice section and the additional comments section of the report have many ideas and recommendations from the respondents, so that may be a place to start. It sounds like Megan got people who were not in the humanities to come to the workshop as well.

We'll give it another minute to see if any other comments want to come through. We hope that there is a follow-up survey with a similar topic. We find this to be pretty interesting, and we're looking forward to anyone who might want to take on that challenge.

Did we get a sense of libraries that are leading the way in providing these services? Good question. I guess I will have to look in my SPEC Kit. We can give you kind of a list here of who's in there, but I'm not sure; we just want you to keep in mind that we could only include so many. I would definitely say that Emory, Northwestern, University of Pittsburgh, and University of California, Irvine are all in the SPEC Kit. Again, just off the top of my head, I know Iowa State, University of Iowa, University of Kansas. I'm trying to cover as many as I can. Minnesota, did you say that? There were so many we couldn't pick and choose, and one library may have an area of strength as opposed to another, so they were all interesting.

And I think we only had 79 out of 125 ARL libraries respond, so that means 46 did not respond, and we don't know what they're doing. We did get some sample reports from Florida State and MIT that were great. So there's a lot in there, but as Candy pointed out, there's a lot being done by other universities that we just couldn't put in.

Okay, somebody asked how much buy-in respondents got for altmetrics, incorporating, for example, new audiences, versus traditional citation metrics.
The survey really didn't address one over the other, just that libraries were dealing with the issue. Right, right. So we didn't try to get that type of detail; we had to draw a line somewhere with the questions, and in hindsight, we would have added or taken away some questions. I also thought it was interesting that word of mouth was the number one reported way of marketing these services.

Okay. Well, we want to thank you all for coming. We appreciate everyone at ARL, and we appreciate your attendance. Have a good afternoon. Ladies and gentlemen, that concludes the webinar for today. We thank you for your participation and ask that you please disconnect your lines.