Hello, and welcome to Revisiting the Library Impact Map. I'm Holt Zaugg, the Assessment Librarian at Brigham Young University. The Library Impact Map was created by Megan Oakleaf in 2012 as a way for libraries to collect, use, and disseminate data to show how library services align with university areas of focus and contribute to fulfilling university aims and goals. It also promotes the use of data to improve library services and resources. Our re-administration helps libraries identify movement toward a culture of assessment, in which assessment is used to improve the library's service delivery and demonstrate its value.

For those of you who have previous experience with or knowledge of library impact maps, this is a brief review. For those who have not heard of them before, this is a brief introduction. The Library Impact Map uses a grid system in which library services are the column headers and university areas of focus are the row headers. The intersection between each library service and university area of focus is populated with one of the five codes shown on this slide, indicating whether data is collected and the degree to which it is collected, used, and shared with stakeholders.

This is an example of part of a library impact map as described. As you can see, the university areas of focus are the row headers and library services are the column headers. The intersection between the two is where the codes are placed, indicating the degree of data collection, use, and sharing.

In 2013, as part of our first library impact map assessment, we modified the original library impact map by connecting library goals to each university area of focus and designating departments, subject librarians, and divisions for each library service to allow further disaggregation. Now I'd like to discuss some of the changes we made to the library impact map between 2013 and 2019.
We examined the university areas of focus and eliminated one because it was no longer relevant. Library services were also examined, and they were increased from 46 to 58. In some instances, a previous library service that was designated as a single service, such as library IT services, was subdivided into separate services, so that what was one library service in 2013 became four services for library IT in 2019. This resulted in an increase from 1,334 points of intersection to 1,624 points of intersection. We also introduced another code for 2019, CBW, indicating that data could be collected and the individual wanted to start collecting it. Finally, in 2013, we had 15 employees populate the LIM. In 2019, we asked 34 employees who had direct responsibility over or experience with each library service to populate the library impact map. In some instances, there was more than one rater for a single library service.

We began our methods by, as previously stated, examining the university areas of focus and library services and updating them to what was current practice in 2019. We approached employees with responsibility over services and asked them to rate each service according to each university area of focus; they were only sent the services for which they had responsibility. We did not set a deadline, but we sent reminders encouraging them to complete the ratings, and all were completed within a month.

For our analysis, we examined patterns of change. Negative shifts were instances where the intersection of a library service and university area of focus decreased in the amount of data collected, used, or shared. A positive shift was just the opposite, where the intersection indicated an increase in data collection, use, and sharing. No shift is where the same rating was given in 2019 as was given in 2013. We did this for all points of intersection, for each code, each library service, and each university area of focus. I will now start to review the results with all points of intersection.
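The shift classification just described can be sketched in code. This is a minimal sketch in Python, assuming the codes map onto an ordinal scale from no data (N) up to data collected, used, and shared (Y++); the numeric ranks are our assumption, not part of the original instrument:

```python
# Ordinal ranking of the codes (assumed for illustration).
RANK = {"N": 0, "CB": 1, "CBW": 1, "Y": 2, "Y+": 3, "Y++": 4}

def classify_shift(code_2013: str, code_2019: str) -> str:
    """Return 'positive', 'negative', or 'no shift' for one intersection."""
    diff = RANK[code_2019] - RANK[code_2013]
    if diff > 0:
        return "positive"
    if diff < 0:
        return "negative"
    return "no shift"

# Example: an intersection rated N in 2013 and Y++ in 2019 is a positive shift.
print(classify_shift("N", "Y++"))  # positive
```

Each point of intersection would be run through this comparison to produce the shift counts reported below.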
As you can see from the table, the change between 2013 and 2019 indicates that the CB, Y, and N codes had respective decreases of 8%, 1%, and 1%, whereas the Y+ and Y++ codes had respective increases of 4% and 1%. These changes indicate a slight shift towards collecting, using, and sharing data. We also see that the CBW code, meaning data could be collected and the rater wanted to start collecting it, represented 2% of all codings, or 25 instances where librarians wanted to start collecting data. Hereafter, note that the CBW and CB codes are combined for analysis and comparison because there were no CBW codes in 2013.

Next, the changes in ratings were examined. To illustrate the changes, if you look in the left-hand column at the very bottom, you can see that there were three instances rated as no data being collected in 2013 that are now rated as Y++ in 2019. Thus, all green squares indicate positive shifts, where more data is collected, used, and shared. The blue squares represent negative instances, where less data was collected, used, and shared. The white squares indicate where there was no change, and as you can see, this represents 51% of all possible data points. I need to point out that for this analysis, any library services that were split in 2019 were rejoined so that they could be compared to the 2013 results.

We also see that there were simultaneous increases and decreases, as designated in the previous analysis. Overall, 23% of intersections decreased, with the most decreases occurring with the Y code. Another 27% showed positive changes, with the most increases occurring with the N code. It is important to note that the could-be code had the second-highest counts of both decreases and increases, so it was constantly moving up and down. The next analysis examined each library service and the changes in each code for each service.
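The table behind these figures is a cross-tabulation of 2013-to-2019 code pairs across all points of intersection. A minimal sketch of that tally, using hypothetical pairs rather than the study's actual ratings:

```python
from collections import Counter

# Hypothetical 2013 -> 2019 code pairs, one per point of intersection.
pairs = [("N", "Y++"), ("Y", "Y"), ("Y", "CB"), ("N", "Y"), ("Y+", "Y+")]

transitions = Counter(pairs)  # the cross-tab behind the table
unchanged = sum(n for (a, b), n in transitions.items() if a == b)
print(f"no-shift share: {unchanged / len(pairs):.0%}")  # 40% for this sample
```

In the actual study, the same tally over all 2013 intersections yielded the 51% no-change figure.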
The change in code was determined by calculating what percent of university areas of focus for each library service belonged in each code. Once we had those totals calculated for both 2013 and 2019, we subtracted the 2013 percent totals from the 2019 percent totals to determine the percent rate of change. In the table, this is illustrated where an equal sign means that there was no change in the percent between 2013 and 2019. A single plus sign indicates a positive change of 0 to 10%, and a double plus sign indicates a positive change of over 10%. A single negative sign indicates a decrease of between 0 and 10%, and a double negative sign indicates a decrease of greater than 10%.

The changes depended on the library service. For example, in the first row, acquisitions had three ratings that remained the same from 2013 to 2019, one that decreased by more than 10%, and one that increased by more than 10%. Conversely, if you go to the fourth row, for circulation, you will see that only one code remained the same from 2013 to 2019, whereas three codes decreased by either 0 to 10% or greater than 10%, and one code increased by more than 10%. Looking at all increases and decreases, there was a moderate increase for most services, but there were, as I pointed out with circulation, some exceptions.

We did the same thing for the university areas of focus: for each university area of focus, we examined the rate of change in each code across all library services. We found a similar pattern of simultaneous increases and decreases, and where these did and did not occur depended on the university area of focus. So, as a bit of a summary, the purposes of the library impact map that were highlighted in 2013 remain the same and are still useful.
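The per-service calculation described above (percent of areas of focus in each code, then 2019 minus 2013, mapped to the =, +, ++, -, and -- symbols) can be sketched as follows; the example ratings are hypothetical, not the study's data:

```python
from collections import Counter

def code_percents(ratings):
    """Percent of a service's areas of focus assigned each code."""
    counts = Counter(ratings)
    return {code: 100.0 * n / len(ratings) for code, n in counts.items()}

def change_symbol(pct_2013, pct_2019):
    """Map the 2019-minus-2013 percent change to the table's symbols."""
    delta = pct_2019 - pct_2013
    if delta == 0:
        return "="
    if delta > 10:
        return "++"
    if delta > 0:
        return "+"
    if delta < -10:
        return "--"
    return "-"

p13 = code_percents(["Y", "Y", "N", "CB", "N"])   # 2013 ratings (hypothetical)
p19 = code_percents(["Y+", "Y", "Y", "CB", "N"])  # 2019 ratings (hypothetical)
print(change_symbol(p13.get("Y", 0), p19.get("Y", 0)))  # =
```

Repeating `change_symbol` for every code within a service produces one row of the table; swapping services for areas of focus gives the companion analysis.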
It is a tool that lets you look at the big, broad picture of all library services and how they relate to the university areas of focus, but it also lets you zoom in on a specific division, department, or single service to see how they are collecting, using, and sharing data. The addition of the CBW (could be, and want to start collecting data) code helped facilitate new data collection because we were able to work with those people to start collecting data. This process of developing a culture of assessment is an ebb and flow of data collection, use, and dissemination, as we saw here. The shifts are slow, but if you are able to start collecting, using, and sharing data and maintain those practices, it will increase the rate of shift towards a culture of assessment. As a final point, we would also like to note that the N code, meaning data is not collected, will always be present. The reason for this is that no one university area of focus needs all library services, and no one library service provides information for all university areas of focus, so there will always be some intersections where there is no need or purpose for collecting, using, or sharing any data.

Now, some limitations to our assessment. Because we used a larger group of raters, more than double the initial one, there was a greater possibility of variance among the raters; they could be quite different in their ratings. On the other hand, each rater had a stronger connection to their library service. Second, some raters had a poor understanding of what the university areas of focus were. We need to define these better and clarify which areas of focus we should use. Next, there were 15 services that had two or more raters. In most cases, the ratings were consistent among the raters, but where they differed, we used the higher rating. Finally, all raters were rating for the first time.
So, this being the first time through, it is not always easy to define and understand how to go about this process. In future administrations of library impact maps, the re-administration should occur more frequently than every six years. This will allow the use of the same raters in many instances. There also needs to be better identification and definition of university areas of focus. For services that had multiple raters, we need to show both the high and low ratings to create a range instead of just using the high rating. It is also recommended that the library impact map be combined with other assessments, as it is only one indicator of a culture of assessment. Thank you for your time and for listening. I believe now we'll open it up for any questions that you might have.