All right, hi everyone. I'm Liz Brinnell, the Library Assessment Officer at Case Western Reserve University, and with me is Lauren DeMonte, Director of Research Initiatives at the University of Rochester. Today we're going to be talking about leveraging library expertise for university rankings.

So why do rankings matter? International and national rankings help students, parents, researchers, universities, and governments worldwide determine which universities are the top ones in the world. These lists focus on a variety of different criteria and methodologies. The big three international rankers that we all should know are QS, THE (Times Higher Education), and ARWU, also known as the Shanghai Ranking. Because national and international rankings help students, faculty, researchers, and governments decide which universities to work with, if your institution is not sitting in the top 100 or 150 schools, you might lose out on prospective students from other countries. Researchers may not want to collaborate with you because you do not have a strong international academic reputation, or you might lose out on potential grants or funding from other countries. So if you're falling below the 100 or 150 mark, you may wind up losing revenue. Your academic reputation may not be stellar enough for students from other countries to choose your institution. It's important to realize that international students often have to pay out of pocket to come to school in the US; if they're unable to get funding, or have to work harder for it, and your university falls below that mark, they may pick another school with a stronger academic reputation. International collaboration is also always important: which institutions your faculty are working with, and which researchers are working with you. When you're looking at rankings, you want to make sure that international collaboration is strong, and if that area is a weak point under the criteria or methodology the international rankers are using, that's something you definitely want to be looking at as well. We're going to go into more detail on that, and then Lauren is going to talk about the core expertise that libraries can provide.

So when you hear about international rankings in particular, you might wonder, what does the library have to do with this? It feels like there's not an obvious connection. But the thing that I think is really important to remember is that bibliometric indicators are a core part of how rankings are determined. Bibliometrics are used as a proxy for research productivity. And because of that, there's a lot of opportunity for libraries to participate in rankings initiatives and, ultimately, grow their own reputation on campus. A lot of the work that happens around rankings relates back to very traditional library work: authority control. Making sure you have the right names for your authors, and that they're affiliated correctly in the various citation and publication databases. This is super traditional, old-school library work, and half the time we are the only ones who know how to do it. We'll get into our collaborators later, but when I'm working with other folks on campus, they go, oh my God, how do we do this? This is so hard. How can anyone know anything about this?
And we can always say, hey, the library knows how to do this. Because rankings are also based on bibliometric indicators, and we have that kind of expertise. We've been doing this work for a long time, and we can draw on lots of different skills within the library to help us understand citation, and to help us understand how to actually measure productivity in a way that makes sense. We have those skills, and it's not something we necessarily even have to hire for. You probably have folks in your collections departments, or other librarians, who know how to do this work. So that's a really powerful thing for us to leverage.

Another piece of this puzzle is that we often already have relationships with the vendors of the databases used to do this work; we are already paying for these things. So there's an interesting collections opportunity here where we can, again, demonstrate that we're ahead of the curve: we already have our subscription to Scopus and SciVal, let's say, or our subscription to Web of Science and InCites. But the other thing that's really interesting is that we can work at a higher level to coordinate resources and talk about the kinds of things we need to be purchasing. In our case, for example, the university has a subscription to Academic Analytics, which is very expensive, and the university went off and bought it without any consultation with the library. We ended up talking about that within this broader discussion: should we purchase Scopus, should we do this, should we do that. We're at the table now, talking about these resources and these collections in a way that is much more holistic and, I think, really powerful.

On top of that, if we're doing all this work around bibliometrics and around understanding the research impact of our scholars, it actually helps us do better collection development as well. The fringe benefit is a richer understanding of what is actually being produced at our university, so we can tailor collection development in ways that make a lot of sense. And finally, that deeper knowledge improves our opportunities to do outreach to faculty, as well as to our campus partners. So there's this nice set of skills we already bring to the table, and a nice set of benefits we can draw from participating in these kinds of initiatives. I just wanted to get that on the table and say: you probably have the capacity to do this work already at your institution.

For the next part of the presentation, we're going to go a little deeper into what's happening at our institutions and talk about some projects that have spun out of rankings work. For the University of Rochester, I wanted to start by talking about the values we bring to the table around rankings. It's really important to note that we do not want to change who we are to chase rankings. We're not trying to make changes to what we're doing explicitly so that we rise in the rankings. We want to continue to be the university that we are, with the strengths and strategic priorities we are already pursuing.
Having said that, we don't want our position in the global research environment to change because we're not paying attention to these things. So this is the push and pull that we're always balancing. And I think it's important that we set these values from the very beginning, because when we're having these conversations, you can see it's a slippery slope. You might say, oh, if we make this change, we might be able to do this, or this might happen; we can rise here, rise there. We don't want to get into that frame of mind, where we're telling people where to publish or trying to play around with any of these figures. We really just want to make sure we're representing ourselves as well as possible. So I just want to get that on the table.

When we're doing this work at the university, the library is involved, but we have a core team of collaborators driving these projects. We work very, very closely with the Vice Provost for Global Engagement. The international rankings piece is a strong driver, as Liz was saying, for international students, international faculty, and international graduate students, so there's strong interest from that office in understanding where we are in the rankings, what we can do to improve, and how we are being represented. Another important collaborator is the Office of Institutional Research: a lot of the data that gets submitted to rankings agencies is institutional data coming from that office. Another key collaborator is the Assistant Dean for Data Analytics at our School of Arts, Sciences and Engineering. This is a really interesting role, focused exclusively on data analytics, that sits between institutional research and the school. And then me, which is kind of nice. So we're part of a team that represents a lot of different interests, with a lot of different kinds of skills. We meet weekly at this point, trying to understand where we are and what we need to do next. On the one hand, the work is about cleaning data and understanding where we sit with the data, but we also strategize around outreach: how do we reach out to academic departments, and to academic leadership, to make sure they understand what we're doing and why it matters? And so we end up spinning out lots of projects together, to improve rankings but also to help with our own local projects.

For that reason, beyond the core team, we've had multiple collaborations with lots of different people around campus. The library is a key one here: our collections department, metadata, outreach and liaison librarians are always involved in aspects of projects related to rankings. Our Chief Data Officer, whom we just hired, who is also a senior associate vice provost or some similar title, owns data governance. And the data governance piece becomes really, really important, because one of the things that was really challenging, and I'll talk about this a little later, was just defining who a faculty member is. If you can't define who a faculty member is, how do you count how many faculty you have? How do you know what to look at when you're building metrics?
So there's this interlocking set of questions and issues that begin to arise. Campus IT is another collaborator: we have an enterprise application governance process, so for buying systems, software, and tools to help us do this work, we have to involve them, which is interesting too. Another key collaborator that has emerged is our Associate Vice Provost for Career Education Initiatives. That might not seem obvious, but there are reputational pieces of the rankings around how academics perceive you and how employers perceive you, so seeding those survey names and understanding who to talk to has become a collaborative effort. And of course, the Director of Academic Affairs is really important when we're thinking about faculty data. Making sure you're involving all the right people streamlines the process, but it also means you get more buy-in for these projects.

We do try to follow a process when we're engaging in any rankings initiative. We always start with the data: we gather it and try to understand the current trends across the various ranking bodies. The international rankings Liz mentioned, QS, ARWU, and THE, are the ones we focus on the most. Then we really do try to spend some time unpacking the methodologies. They change a little bit every year, so you have to pay attention, which is challenging. Then we try to understand what contributed to any shift in our ranking year to year. That can be tricky, and a lot of it is best guess, but we try to work from the data we have to make educated assumptions about why we are where we are. Then we launch collaborative projects: with all the people on the previous slide, we decide who needs to be involved and what we can do to make marginal gains over time. And then we try to operationalize, because it's all fine and good to do a big data cleaning project one year, but you can't necessarily do that every year. How do we leverage what we've learned to turn this into something sustainable over time? For example, we're doing a big data cleaning project in Scopus now, and we're looking at how our outreach librarians, subject liaisons basically, can build into their regular work the job of checking affiliations and how author names appear in the database, so it becomes a group effort rather than me and a student trying our best to make the data cleaning happen.

As for a timeline, I will say that we consider ourselves fairly new to this, but the work has been going on since 2016. That's when the Office of Global Engagement really began to pay attention to our rank, and I'll be honest, it's because we were slipping in the rankings and it was becoming an issue. So Jane Gatewood grabbed that and took it on herself, and they began to look at what institutional data could be cleaned and involved the Office of Institutional Research. The library became involved around mid-2017, and we are just getting going with our core team now. So it does take time. I don't know how long you all have been at this.
2017. 2017, okay. So we consider ourselves new, but Case is way ahead of us, I'm just going to say that.

We've recently been focusing a lot on the QS rankings in particular, for a number of reasons. First, it's the ranking most used by international students and their parents; international faculty and international graduate students also look at it. The other reason we focus on QS is that it's one of the few rankings we can actually submit data to, which means we can intervene in what they're looking at. From the library perspective, what we focus on is the 20% of the score that is the citations per faculty metric. We've spent a lot of time over the past year and a half unpacking the QS methodology. It looks like it should be pretty simple: citations per faculty takes a normalized citation count and divides it by the number of faculty. The trick is how that normalization takes place. I'm not going to get into too many details, but I want to show you what we had to look at. QS divides citations into five faculty areas; you can see them here: arts and humanities, natural sciences, and so on and so forth. If you look at the distribution of citations in the Scopus database, which is the database the QS rankers use, it looks like this: about 1% are arts and humanities, let's say, and about 49% are life sciences and medicine. What the QS citations per faculty methodology does, though, is rebalance those shares; they have this formula that makes it look like science, and I'm just going to flash it up for you. They weight each of the five faculty areas so that it contributes 20%. So one citation in an arts and humanities journal ends up counting for roughly 20 times what it would unweighted. This has implications for how your institution is represented in these databases.

So we spent a lot of time looking at how this methodology affects how we are represented, and we found a lot of really interesting issues. The first thing that came to the forefront is that there is some bad data in there, really bad data. I've been working with a computer science student who developed an algorithm to help us check and understand where we are in terms of author data and author affiliations, and 60% of our faculty have bad data of some kind. That's a huge amount. We have incorrect affiliation information, name disambiguation issues, and incomplete or missing author profiles. If we're trying to get a good citations per faculty number but 60% of our people are not represented well, that's not going to help us improve that score. We also, interestingly, found some misalignment between what QS considers, say, engineering and what we do. We think of ourselves as a fairly strong engineering school, but the kind of engineering we do gets lumped into the natural sciences bucket, not the engineering bucket. What does that mean for us, if we're losing out on that whole bucket because we work a lot with lasers and nuclear science and we think of that as our engineering? We were always wondering why our engineering numbers were so low, and this analysis helped us understand that. To make the faculty-area reweighting concrete, here's a small worked sketch.
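This is purely a toy model with made-up numbers; the real QS computation involves additional normalization we're not reproducing here. It just shows the 20%-per-area rebalancing idea, and why which bucket your citations land in matters so much.

```python
# Toy model of QS-style faculty-area reweighting (illustrative numbers only).
# The raw Scopus citation pool is skewed across the five QS faculty areas,
# so QS rescales each area's contribution to 20% of the total.

# Hypothetical share of all Scopus citations falling in each faculty area.
global_share = {
    "arts_humanities": 0.01,
    "engineering_technology": 0.15,
    "life_sciences_medicine": 0.49,
    "natural_sciences": 0.25,
    "social_sciences_management": 0.10,
}

# Each area is rebalanced to contribute equally (20%), so a citation in a
# small area like arts & humanities carries ~20x its raw weight.
weight = {area: 0.20 / share for area, share in global_share.items()}

# Hypothetical citation counts for one institution.
institution_citations = {
    "arts_humanities": 500,
    "engineering_technology": 4_000,
    "life_sciences_medicine": 30_000,
    "natural_sciences": 12_000,
    "social_sciences_management": 2_000,
}

faculty_count = 1_200  # hypothetical FTE faculty number

weighted_total = sum(
    count * weight[area] for area, count in institution_citations.items()
)
print(f"weighted citations per faculty: {weighted_total / faculty_count:,.1f}")

# Note how moving engineering output into the natural-sciences bucket, as
# happened to us, changes which weight those citations receive.
```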
We also really surfaced a need for data governance: a need to be able to define what we mean by a faculty member, and what we mean by a student. We also need to think about actually having systems of record, because if you need data, you need to know where to get it. Right now we have to ask a lot of different people for a lot of different data sets that we then have to put together ourselves, which really increases the time it takes to do this work. And we had to establish data sharing processes. People weren't used to, let's say, the library asking, hey, can you give me a list of faculty members, or a list of students? You never asked for this before; why are you asking me for this? So we had to establish those relationships just to be able to do this work.

The strategies we have at this point are, obviously, to clean the data in Scopus, which is what we're doing now: a mixture of algorithms we've developed and manual data cleanup, which is laborious, absolutely. We've also launched an ORCID project, one of those projects that spun out of this rankings initiative. We've been able to do a huge outreach project through the library, and we've embedded ORCID into the faculty annual reporting system. We have buy-in from the School of Arts, Sciences and Engineering to get folks ORCID iDs, essentially. And that came about because we could talk about the ways in which Scopus uses ORCID as part of its disambiguation algorithms (I'll flash up a small sketch of that ORCID record access in a moment). We had to have that technical knowledge to make the case for what ended up being a really interesting and robust outreach project for our librarians. For the misalignment piece, we're not going to do anything different. We're not going to change how we do our engineering, but we can talk to QS about the issues we see in how they define what engineering means, for example. Whether that will be successful remains to be seen, but this is one of those areas where our values come into play again. We're not going to say, everybody, stop with the lasers and do civil engineering; that's not what we're going to be doing here. All we can do is describe what we see when we do our own analysis. From the data governance perspective, again, we're collaborating very closely with the Chief Data Officer. The library is actually involved in helping her develop a conceptual data model for the university, so we have a sense of how we're beginning to define these things and how the systems of record can employ that model for the work we need to do around faculty and student information. And the library is really at the table now for lots of different projects, around the faculty information system and around faculty job codes. This is a huge project that spun out of this work, because again, if you can't define who a faculty member is, where does that definition start? It starts from the contracts people sign. And so our head of metadata is actually at the table developing the job code piece for the university. So again, the fringe benefits of this work for the library have been tremendous, and again, it draws on really traditional expertise in many ways.
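Here is that ORCID sketch. Part of why ORCID helps with disambiguation is that a researcher's works list is publicly retrievable by iD, which is what lets systems like Scopus tie records together. This is a minimal sketch against ORCID's public API using its well-known demo record; paging and error handling are omitted, and it is meant only to show the shape of the data, not our production workflow.

```python
import requests

# ORCID's public demo record (Josiah Carberry); swap in a real researcher's iD.
ORCID_ID = "0000-0002-1825-0097"

resp = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# Works come back grouped; each group holds one or more summaries of the
# same work as asserted by different sources (publisher, Scopus, the author).
for group in resp.json().get("group", []):
    summary = group["work-summary"][0]
    title = summary["title"]["title"]["value"]
    year = (summary.get("publication-date") or {}).get("year") or {}
    print(year.get("value", "????"), "-", title)
```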
So, the results at this point. This is our sad waterfall chart; it shows where we are in terms of rankings. The one I want to pay attention to is the QS rank, the red line here. You can see that we've been able to stabilize our position in QS because of the kinds of interventions we've been doing. We have a better understanding of the issues affecting rankings, and we have increased collaboration across units, which I think is really the good news story here. And finally, the library is a key player in the strategy around this kind of work, as well as in data analysis and outreach. From a reputational perspective for the library, the rankings piece has been very, very helpful and effective: we are sitting at tables we were not sitting at before because of the expertise we bring to this work.

Thanks, Lauren. When we talked about who should go first, I said: you have a really good overview, and then I can take the next step and talk about the process Case Western has actually been working through, so you'll be able to see some of the actual work we've been doing. Case Western has been actively working through international rankings issues since March 2017, when my university librarian began talking with our vice provost for international affairs, and they determined that the library would be a really important player in trying to correct some of the issues. So he came to me and said, what are we going to do to fix our international rankings? And the first thing I said was, what are international rankings? From there, I had to figure out what I needed to do. I began by trying to break down the components that make up citation impact, the bibliometrics, as they relate to international rankers, and it was determined that the variations in institution and faculty names were absolutely critical. So that's where I started: institutional variations and affiliations. Does the university count the professional schools, in our case the hospitals, the labs, and the subject-specific institutes, as part of Case Western, or are they considered their own institutions? We have several professional schools on campus, including medicine, dental, and nursing, the school of social work, and the school of law, and we needed to determine whether they should be separated out or considered part of Case Western. We also needed to determine how faculty were naming themselves: full name, nickname, initials. And that really started to spiral, since everything seemed to impact how the university was being evaluated. Faculty impact, once disambiguated from naming variations and affiliation connections, would affect not only the citations area of an international ranking but also the faculty impact area of the methodology and criteria. That's when I realized that even though we may only directly affect that 20% for QS, the same underlying database feeds everything else that goes into the ranking, which meant the project was going to be much, much larger than I anticipated. I had to keep track of everything, and I certainly spiraled down into data and task overload. So I had to pull out of that spiral.
As an assessment officer, I needed a project plan. This project plan has eight phases. It's overwhelming, but I'm going to go through it, and don't worry, I'm not really that good at getting all eight phases done. Each phase touches on an aspect of the faculty profile and an aspect of what we needed to review, change, or edit. Each phase includes a review of each school, starting with the ones Kelvin Smith Library directly supports: the school of management, our engineering school, and our College of Arts and Sciences. I followed that with our professional schools, and then we could target our institutional variations, faculty variations, and affiliations, which would include reviewing faculty impact before and after the changes. I then added phases for integrating ORCID and for submitting the institutional list to the ranking organizations, to ensure that they have that information. That's important. I think a lot of schools don't realize you should really be reaching out to these international rankers, or even national rankers, to make sure they have the appropriate institutional list if that's what they're reviewing. Even if they don't review it, I send it off and say, hey, just in case you need this, here are our institutional affiliations. From there, I can tell you that we created even more offshoots to this project, so not all eight phases are done. Really, the first two are the most important: the institution and faculty affiliation variations.

On the project cycle: last December, we had five weeks to put together all of our international collaborators, a minimum of 1,000 over the past five years, to submit to QS. This year, we started in March 2018, reviewed another 600 collaborators in painstaking detail, and completed the work over a six-month period instead of five weeks, which meant we could report up to leadership in a much more timely fashion with minimal stress. We did all this using Clarivate's Web of Science and InCites, as well as Elsevier's Scopus and SciVal. Those are the two systems that feed into the ranking agencies, so it's important not only to have access to these tools but to understand how they work.

The first thing we did was start with institutional variations. I figured this was the low-hanging fruit: we knew what our variations were, and we couldn't have more than a couple hundred. According to the first review of Web of Science's Organization-Enhanced list, we had about 215 variations. I couldn't imagine we had more than that. So I took this list, split it out into a spreadsheet, sent it off to the professional schools, and emailed the respective library directors. After a few weeks, I received another 200 variations from the schools, due to the institutes, the clinics, and the affiliations with the university hospitals and foundations. Then I went to the university archives to see if there were other names, because I knew we had been called other names previously: we used to be two schools, Western Reserve University and the Case School of Applied Science, before they federated in the late 1960s, so we had another 100 years of variations to review. Almost two years later, I now have a running list of 651 different variations, and it's not just because of the name changes or the professional schools. You also have to watch out for typos, not only in the name but in the location of the school. I actually had to check and confirm that no, there is no Cleveland, Sweden, and that at no point were we "Case Western Reserved University." To give you a flavor of what checking these variants looks like in practice, here's a small sketch.
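This is not our actual cleanup process, just a minimal illustration of the kind of normalization and fuzzy matching involved, using only Python's standard library; the variant strings are hypothetical examples of the sort of thing that turns up.

```python
import difflib
import re

CANONICAL = "Case Western Reserve University"

# Hypothetical affiliation strings of the kind found in citation databases:
# historical names, abbreviations, typos, and embedded department text.
candidates = [
    "Case Western Reserve Univ, Cleveland, OH",
    "Case Western Reserved University",              # typo variant
    "CWRU School of Medicine",
    "Western Reserve University",                    # pre-federation name
    "Case Sch Appl Sci",                             # pre-federation, abbreviated
    "Case Western Reserve Univ, Cleveland, Sweden",  # wrong country!
]

def normalize(affil: str) -> str:
    """Lowercase, expand common abbreviations, strip punctuation."""
    s = affil.lower()
    s = re.sub(r"\buniv\b", "university", s)
    s = re.sub(r"\bsch\b", "school", s)
    s = re.sub(r"[^a-z ]", " ", s)
    return " ".join(s.split())

for affil in candidates:
    score = difflib.SequenceMatcher(
        None, normalize(affil), normalize(CANONICAL)
    ).ratio()
    flag = "REVIEW" if score < 0.75 else "likely match"
    # The location needs its own check: the institution name can match
    # perfectly while the city or country is garbage ("Cleveland, Sweden").
    print(f"{score:.2f}  {flag:12s}  {affil}")
```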
These are the things that make this project a continuous journey. Over the past year, with the adjustments we made just in Web of Science, we saw our document count increase from 89,000 to over 110,000. Yes, some of that was due to new publications, but that would mean over 21,000 publications in a single year, and I can tell you, after doing a lot of assessment on this, the institution has never published that many documents in a year. Using the list I created in Web of Science, I am now able to send these updates to Scopus as well, and I update both Web of Science and Scopus quarterly with any new versions of the school names that I find. It's important to keep track of this information because you are always going to find more versions of the name. One thing we are trying to implement now is to curb this at the source, by making sure that faculty and researchers use one set, common form of the name, Case Western Reserve University, as their primary affiliation, rather than putting their institute or the Cleveland Clinic first and adding extra information.

So now that I had the school variations, I decided it was time for the faculty. What were we going to find there? It became even more complicated. An author can appear as J. Smith, J. A. Smith, John Andrew Smith, just John Smith, John A. Smith, and so on and so forth, and you need to make sure all of that is correct. You also have to ensure that the name actually belongs to your faculty member: there are a lot of John Smiths out there, and you want to count only your John Smith. For example, here is one of Case Western's most prolific authors, Liming Dai, and for the longest time he was labeled as such. However, because of how Scopus handles requests to merge authors, and I don't know if you can see this because it's pretty much white on white on the slide, Scopus allows changes to faculty names and affiliations without much oversight. Any institution can claim a faculty member's papers. So for Liming Dai, whose primary affiliation is Case Western but who has also worked at other institutions, in the past or over the summer when he travels and does summer sessions, those institutions can legitimately claim his papers, even though he is affiliated with Case Western. The reason is that it's not Scopus's or Elsevier's job to ensure that papers are affiliated with you; all they see is, yes, he did work for this other school. It's our job to make sure we're accurate and that those papers are appropriately associated with our author. You also want to make sure that when you're cleaning up a profile, you're cleaning up only your papers and not someone else's. I was able to get Liming Dai's profile corrected after several months of working with Elsevier, and as of two weeks ago, he was correctly attributed back to Case Western, with his other publications attributed to his other institutions. You're going to see the same thing with your own prolific authors: if someone has published a lot and has had a full, robust career at other institutions, he's going to show up under multiple author records, and that's something you need to work through and correct. To show what the name-variant side of that looks like, here's a small sketch.
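Again, this is an illustrative toy, not our actual process: given a roster name, it generates the common citation-style variants you'd search for in a database like Scopus or Web of Science. The roster entry is hypothetical.

```python
# Toy generator for the citation-style name variants an author might appear
# under in Scopus or Web of Science. Real disambiguation also needs
# affiliation, co-author, and subject signals: a surname plus initials is
# never enough on its own to claim a John Smith as *your* John Smith.

def name_variants(first: str, middle: str, last: str) -> set[str]:
    fi, mi = first[0], middle[0] if middle else ""
    variants = {
        f"{last}, {first} {middle}".strip(),    # Smith, John Andrew
        f"{last}, {first}",                     # Smith, John
        f"{last}, {fi}.",                       # Smith, J.
        f"{first} {last}",                      # John Smith
    }
    if middle:
        variants |= {
            f"{last}, {fi}. {mi}.",             # Smith, J. A.
            f"{last}, {first} {mi}.",           # Smith, John A.
        }
    return variants

# Hypothetical roster entry.
for v in sorted(name_variants("John", "Andrew", "Smith")):
    print(v)
```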
This is where I cannot stress ORCID iDs enough. Having that integration for your faculty matters because, if you don't know, an ORCID iD follows a faculty member wherever they go. I've heard it called the Social Security number for their publications: it sticks with them. But the faculty member is the one who has to keep it updated and cleaned up, which is another hurdle to clear, and I don't have time to go into all of that today, maybe another day if we really want to get into it. The big thing is cleaning this information up as much as you can.

So I'm going to go through a few of the results we've seen, so you can really see that the library can make an impact by doing cleanup like this and keeping up with it. This is our QS ranking from 2014 to 2019. As you can see, there was definitely a dip, but I can tell you that in the past year, with just data cleanup, we moved up 27 spots, which is actually pretty significant, at least for the work we've been doing in the past year. Next is the THE World University Rankings: we were 158 last year and we are now 132, so we moved up 26 spots. And this next one I'm especially proud of, for the simple fact that the Leiden Ranking is strictly bibliometrics. We were sitting fairly low on that list at 143 last year; now we are at 57, so we moved up 86 spots. That is strictly bibliometric data, which means that if you clean up your institutional profile and start cleaning up your author variations, you will see a major jump. This is how we know the work we've been doing is actually significant: we moved this much in one year, and that's just bibliometric cleanup. It's a lot of work and it takes a lot of time, but if you actually sit down and do it, even for a couple hours a day or a couple hours a week, you'll see significant changes within one year. And trust me, it was just me doing the bibliometric cleanup for that full year.

Now I can say I actually have a team behind me, and our next steps really have changed. We have more support at the institutional level. We have an international rankings steering committee that my university librarian is part of, and I'm part of the working group at the institutional level, where we now have a full team of support from international affairs, institutional research, and a variety of other departments. We're working to make sure Case Western has a strategic plan moving forward, so that we're not only making an impact at the small scale of bibliometrics but targeting this in a much more robust way. It's really important for the library to be involved, because, trust me, Web of Science and Scopus are really expensive, especially both together, and the library cannot sustain that alone, especially since we saw a slight budget cut this past year. So it was really important for the library to be part of the steering committee and the working group, to say: the university needs to help invest in these tools.
If you want to see these changes continue, we need to be able to work in these tools, and we need good relationships with these vendors, so that we can keep making these changes and you can see our rankings increase. And at the library level, I now have the support of the research services librarians and our liaisons, who work directly with the faculty, so they can start pushing the ORCID integration as well as continuous process improvement. Now, I'm not saying everything was fantastic and everything moved up. In ARWU we moved up one spot, and that's great, but we need to go back, look at the methodology, and see what else we need to adjust. We actually saw our rankings drop slightly in the NTU Ranking as well as in U.S. News & World Report, so it's just a matter of figuring out the methodology and what we need to be doing to make things better. Like I said, we're only about a year into this, and now that there's a team behind me to move things forward, we're hoping for more success in the future. The process is hard and time-consuming, but anybody can do it, they really can. We'd like to open the floor for discussion now; you can reach out to us right now and we can talk more about this. Thank you for your time. Thank you. Thank you.