Let's give a minute for everybody to come in. I see the number jumping from 60 to 80 to 90, 126, going up. All right, I see it is slowing down. We're a 178-strong class of panelists here. Good afternoon. Thank you for joining today's virtual offering of the Library Assessment Conference. Today's presentations are on the theme of assessment programs and organizational issues. My name is Martha Kyrillidou, and I am the moderator for today's session. My colleague Elizabeth Edwards is my co-moderator.

Before we begin, we have a few logistics to share. We have nine presentations, and one will follow after another. Each paper presentation will be introduced separately, with up to five minutes of Q&A for each paper. There will be a few minutes for general questions at the end of the session. We would like you to use the chat feature for technical questions and issues, and an ARL staff member will respond to you. Please use the Q&A feature to ask questions of the speakers. Note that we may not be able to address all questions during the time allowed, and if there is time at the very end, we might be able to come back to any final questions. The PowerPoint slides, session recordings, and the final edited conference papers will be made available by ARL on the conference website. The session is being recorded and will be posted to the conference website. Live automated captioning is enabled for the webinar; please hit the CC button at the bottom of your screen to turn on captions. A full transcript will accompany the recording a week or two after the event.

So our first paper is Revisiting a Library Impact Map, presented by Holt Zaugg, assessment librarian at Brigham Young University. Holt, the floor is yours.

Hello, and welcome to Revisiting the Library Impact Map. I'm Holt Zaugg, the assessment librarian at Brigham Young University. The library impact map was created by Megan Oakleaf in 2012 as a way for libraries to collect data, use it, and disseminate it to show how library services align to university areas of focus and contribute to fulfilling university aims and goals. It also promotes the use of data to improve library services and resources. A re-administration helps libraries identify movement towards a culture of assessment, where assessment is used to improve the library's service delivery and show its value.

For those of you who have previous experience with or knowledge of library impact maps, this is a brief review. For those who have not heard of them before, this is a brief introduction. The library impact map uses a grid system where library services are the headers for columns and university areas of focus are the headers for rows. The intersection between each library service and university area of focus is populated with one of the five codes shown on this slide, which indicates whether data is collected and the degree to which it is collected, used, and shared with stakeholders. This is an example of part of a library impact map as described. As you can see, the university areas of focus are the headers for rows and library services are the headers for columns. The intersection between the two is where the codes are placed, indicating the degree of data collection, use, and sharing. In 2013, as part of our first library impact map assessment, we modified the original library impact map such that we connected library goals to each university area of focus and designated departments, subject librarians, and divisions for each library service to allow further disaggregation.
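For readers following along, the grid just described lends itself to a small sketch. This is a minimal illustration, not the presenters' actual tooling: the code meanings follow the five-code scheme discussed in the talk and its Q&A, but the exact wording of the descriptions, the example row and column labels, and the dictionary layout are assumptions.

```python
# Minimal sketch of an impact map grid: one code per
# (area of focus, library service) intersection.

CODES = {
    "N":   "data is not collected",
    "CB":  "data is not collected, but could be",
    "Y":   "data is collected",
    "Y+":  "data is collected and used",
    "Y++": "data is collected, used, and shared with stakeholders",
}

# Hypothetical labels, for illustration only.
impact_map = {
    ("Student success", "Circulation"): "Y+",
    ("Student success", "Library instruction"): "Y++",
    ("Faculty research", "Circulation"): "N",
    ("Faculty research", "Library instruction"): "CB",
}

for (area, service), code in impact_map.items():
    print(f"{area} x {service}: {code} ({CODES[code]})")
```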
Now I'd like to discuss some of the changes we made to the library impact map between 2013 and 2019. We examined the university areas of focus and eliminated one because it was no longer relevant. The library services were also examined, and they were increased from 46 to 58. In some instances, previous library services that were designated as a single service, such as library IT services, were subdivided into different services, so that what was one library service in 2013 was now four for library IT in 2019. This resulted in an increase from 1,334 points of intersection to 1,624 points of intersection. We also introduced another code for 2019, CBW, indicating that data could be collected and that the individual wanted to start collecting it. Finally, in 2013, we had 15 employees populate the LIM. In 2019, we asked 34 employees who had direct responsibility over or experience with each library service to populate the library impact map. In some instances, there was more than one rater for one library service.

We began our methods by, as previously stated, examining the university areas of focus and library services and updating them to what was current practice in 2019. We approached employees with responsibilities over services and asked them to rate each service according to each university area of focus; they were only sent the services for which they had responsibility. We did not set a deadline, but sent reminders encouraging them to complete the ratings, and all were completed within a month. For the analysis, we examined patterns of change: negative shifts were instances where the intersection of a library service and university area of focus decreased in the amount of data collected, used, or shared. A positive shift was just the opposite, where the intersection indicated an increase in data collection, use, and sharing. No shift is where the same rating was given in 2019 as was given in 2013. We did this for all points of intersection, for each code, each library service, and each university area of focus.

I will now start to review the results, beginning with all points of intersection. As you can see from the table, the change between 2013 and 2019 indicates that the CB, Y, and N codes had respective decreases of 8%, 1%, and 1%, whereas the Y+ and Y++ codes had respective increases of 4% and 1%. These changes indicate a slight shift towards collecting, using, and sharing data. In this instance, we also see that the "could be, and I want to start collecting data" code represented 2% of all codings, or 25 instances where librarians wanted to start collecting data. Note that hereafter the CBW and CB codes are combined for analysis and comparison, because there were no CBW codes in 2013.

Next, the changes in ratings were examined. To illustrate the changes: if you look in the left-hand column at the very bottom, you can see that there were three instances rated as no data being collected in 2013 that are now rated as Y++ in 2019. Thus, all green squares indicate positive shifts, where more data is collected, analyzed, and shared. The blue squares represent negative instances, where less data was collected, used, and shared. The white squares indicate where there was no change, and as you can see, this represents 51% of all possible data points. I need to point out at this time that for this analysis, any library services that were split in 2019 were rejoined so that they could be compared to the 2013 results.
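The shift classification just described can be sketched in a few lines. This is a minimal illustration, assuming the codes can be ordered by degree of data collection, use, and sharing (N lowest, Y++ highest); the ordinal ranking, the function name, and the fold of CBW into CB are assumptions drawn from the talk, not the presenters' actual code.

```python
# Minimal sketch of classifying one intersection's shift between the
# 2013 and 2019 administrations. The ordinal ranking of codes is an
# assumption inferred from the talk; CBW is folded into CB because
# CBW did not exist in 2013.
RANK = {"N": 0, "CB": 1, "CBW": 1, "Y": 2, "Y+": 3, "Y++": 4}

def classify_shift(code_2013: str, code_2019: str) -> str:
    delta = RANK[code_2019] - RANK[code_2013]
    if delta > 0:
        return "positive"   # more data collected, used, or shared
    if delta < 0:
        return "negative"   # less data collected, used, or shared
    return "no shift"

# The intersection counts quoted above are consistent with
# 46 services x 29 areas = 1,334 (2013) and 58 x 28 = 1,624 (2019).
assert 46 * 29 == 1334 and 58 * 28 == 1624
print(classify_shift("N", "Y++"))  # -> "positive"
```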
We also saw simultaneous increases and decreases, as designated in the previous analysis. Overall, 23% of intersections decreased, with the most decreases occurring with the Y code. Another 27% showed positive changes, most occurring with the N code. It is important to note that the could-be code had the second-highest rates of both decrease and increase, so it was constantly moving up and down.

The next analysis examined each library service and the changes in each code for each service. The change in code was determined by calculating what percent of university areas of focus for each library service belonged in each code. Once we had those totals calculated for both 2019 and 2013, we subtracted the 2013 percent totals from the 2019 percent totals to determine the percent rate of change. In the table, this is illustrated where an equal sign means that there was no change in the percent between 2019 and 2013. A single plus sign indicates a positive change of zero to 10%, and a double plus sign indicates a positive change of over 10%. A negative sign indicates a decrease of between zero and 10%, and a double negative sign indicates a decrease of greater than 10%. The changes depended on the library service. For example, in the first row, acquisitions had three ratings that remained the same from 2013 to 2019, one that decreased by more than 10%, and one that increased by more than 10%. Conversely, if you go to the fourth row for circulation, you will see that only one code remained the same from 2013 to 2019, whereas three codes decreased by either zero to 10% or greater than 10%, and one code increased by more than 10%. Taking a look at all increases and decreases, there was a moderate increase for most services, but there were some exceptions, such as circulation, as I pointed out. We did the same thing for the university areas of focus: for each university area of focus, we examined the rate of change in each code across all library services, and we found similar simultaneous increases and decreases; where these were and weren't occurring depended on the university area of focus.

So, as a bit of a summary: the purposes of the library impact map that were highlighted in 2013 remain the same and are still useful. It is a tool that lets you look at the big, broad picture of all library services and how they relate to the university areas of focus, but it also lets you zoom down into a specific division, department, or single service to see how they're collecting, using, and sharing data. The addition of the "could be, and want to start collecting data" code helped facilitate new data collection, because we were able to work with those people to start collecting data. This process of developing a culture of assessment is an ebb and flow of data collection, use, and dissemination, as we saw here, and the shifts are slow; but if you're able to start collecting, using, and sharing data and maintain those practices, it will increase the rate of shift towards a culture of assessment. As a final point, we'd also like to note that the N code, where data is not collected, will always be present. The reason for this is that no one university area of focus needs all library services, and no one library service provides information for all university areas of focus; there will always be some intersections where there isn't a need or purpose for collecting, sharing, or using any data.
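The per-service rate-of-change computation described above can also be sketched briefly. This is a minimal illustration under assumed helper names and made-up ratings; the talk describes the method (per-code percentages compared across years, mapped onto =, +, ++, -, and -- symbols) but not the presenters' actual code.

```python
# Minimal sketch: for one service, compute what percent of its
# university areas of focus fall in each code, take the 2019 minus
# 2013 difference, and map it onto the symbols from the table.
from collections import Counter

def code_percents(ratings: list[str]) -> dict[str, float]:
    counts = Counter(ratings)
    return {code: 100 * n / len(ratings) for code, n in counts.items()}

def symbol(change: float) -> str:
    if change == 0:
        return "="
    if change > 10:
        return "++"
    if change > 0:
        return "+"
    if change < -10:
        return "--"
    return "-"

def rate_of_change(r2013: list[str], r2019: list[str]) -> dict[str, str]:
    p13, p19 = code_percents(r2013), code_percents(r2019)
    return {c: symbol(p19.get(c, 0) - p13.get(c, 0)) for c in set(p13) | set(p19)}

# Hypothetical ratings for one service across five areas of focus.
print(rate_of_change(["N", "N", "Y", "Y", "Y+"],
                     ["N", "Y", "Y", "Y+", "Y++"]))
```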
Now, some limitations to our assessment. Because we used a larger group of raters, more than double the initial one, there was a greater possibility for variance among the raters. They could be quite different in their ratings, but each rater also had a stronger connection to their library service. The second part was that, for some, there was a poor understanding of what the university areas of focus were. We need to have these defined better, and clarity on which areas of focus we should use. The next thing to look at is that there were 15 services that had two or more raters. In most cases, the ratings were consistent among the raters, but where they differed, we used the higher rating. And all raters were first-timers; this being the first time through, it isn't always easy to define and understand how to go about this process.

In future administrations of library impact maps, the re-administration should occur more frequently than every six years. This will allow the use of the same raters in many instances. There also needs to be better identification and definition of university areas of focus. For services that had multiple raters, we need to show both the high and low ratings to create a range, instead of just using the high. It is also recommended that the library impact map be combined with other assessments, as it is only one indicator of a culture of assessment. Thank you for your time and listening. I believe now we'll open that up for any questions that you might have.

Thank you, Holt. So if you have... Oops, let me start my video. Thank you, Holt. If you have questions, please use the Q&A function, and I will read the question for the speaker and the audience. And Holt, I do have a question: you said in the areas of focus, there was one you let go from 2013 to 2019. Which one was that? Can you tell us a little bit more about that?

I can't... I don't remember which one it was; I'd have to look that up. It was something that the university no longer... a service the university no longer deals with. And so it just became out of date.

Good. There is... Okay: could you define CBW, the new code added? I think that question came in the chat too, but let's answer it.

The CB code meant that the data could be collected. The CBW added the value indicating that the person who did the rating wanted to start collecting data. So it's saying the data wasn't collected, but it could be, and they wanted to. We added this one in to get a sense of how many wanted to start collecting data, and we considered it a kind of low-hanging fruit: we could go to these people and say, okay, how can we work with you to start collecting this data?

Great. How did you identify the university areas of focus?

We used the ones that were originally identified by Megan Oakleaf in her original library impact map procedure. And then we took a look at our university. We had some administrators in the library take a look at it and say, yes, these are present here at BYU. In some instances, there were some that BYU had that weren't included in Megan's. There were some that Megan had that we didn't have at our university. And so we adjusted them in that way, but we started with the basis of the ones that she initially shared.

Great. Another question: did you have any situations where collection, sharing, and use didn't all change in the same direction? How was an item rated if collection increased but sharing decreased?

We didn't consider those parts separately.
We took a look at collection, use, and sharing all as one whole. We didn't try to separate or differentiate between them. That would have added a level of complexity that really would have frustrated the people who were doing the rating.

Great. Another question, regarding your mention that the library impact map is one indicator and that a culture of assessment should be combined with other assessments: was there a follow-up assessment or evaluation to explain why a service like circulation would have a significant decrease?

We haven't looked at that specifically ourselves, but we've sent the data on to those who are in charge of that department for them to take a look at. In some cases, there have been structural shifts in the library that consolidated where data was collected and used. In other cases, we just weren't quite certain, so we passed that on to the people who were directly responsible, so they could identify why it was decreasing, or whether there was an explanation for the decrease.

Great. A couple more questions. Was there any significant change in software (I don't know if software is the right word, or if it's the framework) being used by the library between 2013 and 2019 that would facilitate data collection?

We didn't look at that, so I can't comment on it.

And are you using a controlled vocabulary for your services?

Pardon? Could you say that again?

Are you using a controlled vocabulary for your services? Is there some standardization?

No, we have our own names for them, and sometimes the names change. For example, we used to have access services, and that switched to patron services. It essentially had the same functions, but it had a name change. And with that, there were some tweaks and adjustments. Where those occurred, we tried to note them and treat them as the same in the comparison from 2013 to 2019.

Great. And I was looking, and I did miss an earlier question here. Do you have a rubric definition document for the raters?

We didn't have a rubric per se of what each of the university areas of focus were. As you saw, that was one thing we realized: most people knew what each area meant, but there were some who were confused. In those cases, they contacted us and we described what we felt it was, to help them with their understanding. But we didn't have a rubric per se, other than the descriptions that were provided. And if you see the document that we put with the conference proceedings, it will describe that a little bit more.

Wonderful. I think we have them all answered, and we are right on time, 1:23. My introduction didn't take as much time, so we are a little bit ahead of schedule, but as long as the presenters are with us, we can continue the flow of the presentations. Thank you, Holt. Thank you for answering all the questions so thoroughly. Thank you.

Now we'll turn it over to our next presentation, and the title is Are You Ready to Assess? Extending Lewin's Model to Build Readiness into Your Planning. This is going to be presented by Aaron Noland, Director of Planning and Assessment at James Madison University Libraries. Aaron, if you are with us, you'll share your screen... and you are with us, and you're sharing your screen. Great.

Hello, everyone. Thank you for that introduction.
My name is Aaron Noland, I'm the Assistant Dean of Libraries for Strategy and Development at James Madison University, and I'm excited to be with you here today to talk about readiness. So, a bit of a different approach from the process- and data-driven approach that Holt just talked about. This will be a little bit more about framing and sketching out how to engage in an assessment plan, and things to consider. It will probably ring some bells.

So if we can just start to think about some questions. I know you can't necessarily answer them, but as a way of engaging: does the thought of implementing an assessment plan make you feel excited, like running away, apathetic, any number of other options, and maybe all of these at once? So that's one question. And the next one is: what's the valence that assessment has for people in your organization? Is it positive, or negative, or a mix? I think oftentimes when assessment professionals get together and start talking about assessment, we sort of skip over these considerations of everyone else who's connected with assessment. One of my colleagues often talks about how assessment is an add-on, as far as work goes, for our colleagues. And so it's really important to center that.

So we're gonna start here with just an assessment cycle. This is from James Madison University's Center for Assessment and Research Studies. They are widely known for their great assessment work, and this is probably a cycle that many of you are familiar with. But the reason that we do assessment, the piece we care about, is this change piece, where we get to use the data that we gather in a well-constructed assessment process to make good decisions, to make programmatic changes, to sharpen up a focus, or to continue doing what we're doing if it's working well. So as we think about this, this is the mental map that we operate with. And if we compare that to how we might struggle to get assessment plans or assessment initiatives off the ground institutionally, it doesn't map onto this really pretty process.

So I think there's a really useful way of thinking about the stages of how institutions work through using assessment. The first stage is denial, and the second is resistance, and this is where assessment plans and initiatives go to die. If you've been doing assessment work for any amount of time, you are familiar with this difficulty of getting things up and running at an institutional level or at an initiative level. We can sometimes have strong success with one-offs, but it is really hard to get large-scale initiatives at the institutional level up and running. So our first two are denial and resistance, and the rest of these, which are great, are understanding, campaign, collaboration, and institutionalization. We ideally want to move through these stages, but it's really challenging to get past denial and resistance, particularly if denial and resistance are strong in leadership, or strong in the people that need to be collaborating and partnering with us to make these things go.

So then the question obviously is: how do we do this? Well, the first thing that we need to figure out is why. Why do we have denial and resistance? Why is that the major reaction to assessment in general? There are some pitfalls in getting assessment initiatives started. The first is a lack of a vision for why. We don't orient assessment activities and initiatives at the change level.
We orient them often at the data method, data collection, data management level. And so there's this huge focus on the how that leaves out the North Star that orients us, that gets us through the laborious work or the learning curve that we might experience. Not to say the how and the process and the data and the data management aren't important; they surely are. But those things are our means to get to the why of assessment. And so when we focus on the how, we sort of necessarily diminish the why. I actually have a paper in the December session about this very thing, so if you're there for that, you'll hear more if you're interested.

Another one is assessment as performance evaluation. Assessment can be scary; it can be threatening if it's seen as a way to evaluate people, and evaluate people in a zero-sum sort of way. And that's not the purpose of assessment.

Next: assessment activities that result in data but don't result in improvement. I think this is the biggest thing that I hear from colleagues: we'll gather all this data, we'll invest all this institutional and organizational energy, and things will stay the same. We won't make any of the changes. So that's a big one.

Then, failure to connect assessment to our organizational culture. I'm not necessarily talking about assessing our organizational culture, but: are we a culture that interrogates our practices, our activities, our ways of being, our outcomes? Are we interested in understanding how we're progressing on those things, or are we making assumptions about what's working and what's not working, and guessing?

And then there's this notion of reactance, which I think is where the denial and resistance stages really come in. That's the notion of a perceived decrease in freedom: when we start to do assessment, or we start to change something, people perceive their freedom to be reduced, and as a result, they will act in ways to wrestle that freedom back. And so it creates active resistance against the assessment or change initiatives.

So for us as assessment professionals, how do we skip denial and resistance, or how do we work through them in healthy ways? How do we avoid these pitfalls and institutionalize continuous improvement and a culture of assessment, which is what we're after, because we know the tremendous value that it brings to our activities and our organizational outcomes? I submit to you that if we define assessment as change or maintenance, then we're in a better spot to be able to overcome these things.

So we go with Allen's definition. I know there are as many assessment definitions as there are grains of sand on a beach, but I like this one; I think it maps well. Assessment involves empirical data on student learning, or any data; you could broaden that out. The point here is that you're doing assessment to refine programs and make improvements. This definition is contextualized in student learning, and in libraries it may be service levels that we're interested in providing, it may be student learning, it may be a number of other things; you can plug and play that definition a little bit. And if we think about change as intentionally shifting our organizational strategies, structures, and activities, then the relationship between assessment and change becomes really evident. If we're gonna make a change, we need to do so based on a strong foundation that's driven by data, whatever that data might look like.
And we need to make sure that our changes are moving in the direction of refinement and improvement. So I think we can look to Lewin's change model, which many of you may be familiar with, for how to structure an assessment initiative. We'll walk through Lewin's change model, which has three stages, and what I've done as I talk about these three stages is map them onto assessment, rather than onto organizational strategy or specific activities that we might pick.

The unfreeze stage is about disrupting the status quo. Traditionally, when we start an assessment initiative, it has the effect of inherently disrupting the status quo. We start to call into question whether what we're doing is effective, and this might stir up a great deal of anxiety that has to be managed before we can move forward. And this is where Lewin's model starts to fall apart a little bit: it assumes linearity, that we just disrupt the status quo and then we move on to the next stage. I'll submit to you in a few minutes that that's not a great way to go.

But once we unfreeze things, we move on to the change stage and whatever changes we're going to introduce. This is when we're planning and implementing things, and this is when we're actually working through the stages of the assessment cycle that we started by looking at. So we're collecting data, we're having our interventions, we're running our programs, we're structuring our activities. These sorts of things are happening; we're gathering data about that, we analyze that data, and then we understand what change we might need to make. So we're implementing new practices based on the information that we gained. Now here, Lewin's model doesn't really make a lot of space for the cyclical nature of the assessment cycle. But you can imagine that we're going to continue to evaluate and adjust and tinker. We are constant tinkerers in academia. We want to continue to make what we're doing with our students and faculty better and continue to improve it.

But again, something's missing here, and I think what's missing is on that front end. We have this "disrupt the status quo"; that's where we start. But what if we're not ready? What if the organization, or the people, are not ready for that status quo to be disrupted? And that's where I'm introducing readiness, both as a construct and as an assessment.

First, as a construct: readiness is a combination of individual attributes of people and contextual factors of the organization or the situation that we're in. So right now, if you were to launch an assessment plan, you would have a whole lot of contextual factors (given the pandemic, given a virtual environment, given a lot of uncertainty about budgets and everything else) that would act against readiness, despite the fact that people may be ready, to varying degrees, for change or for an assessment initiative. So we combine organizational and contextual factors with people's willingness to be ready for a change. And these things create efficacy beliefs: how well do we think we're suited, or how well do we think we'll do, in this assessment plan, assessment initiative, or broader change initiative? We can predict, obviously, that the higher the degree of readiness an organization or an individual has, the more likely they are to be successful with whatever it is that they're actually trying to implement.
So some of the things that you'd want to consider in your organization around context would be organizational capacity. What's the leadership's perspective on assessment? How does it align with their vision? What's the level of trust in the organization? What's the history with change, or the history with an assessment plan? Has it been a strength of the organization in times before, or has it been something that the organization has really struggled with? You want to know all of those things. And once you have a sense for your level of readiness as an organization, then you can begin to do this pre-work before you start to disrupt the status quo. And we do that by readiness assessment. Many of you may have done needs assessments; some of you may have done readiness assessments. They're a little bit different, and I don't have the time to get into them now, but in the paper I talk about a number of options for readiness assessments. The nonprofit leadership literature is rich with this stuff, and it borrows from the evaluation literature, so it's sort of a partner to ours, but a little bit different. This helps us to take a human-centered approach to assessment initiatives and processes, so that we're not just starting these things without really considering: are people ready for them? Do they feel like they have what they need to be successful, to work through it, to navigate it, to understand it, to be on the same page? So we add that readiness stage at the beginning. It sets us up for success as we go through the other three stages.

So how do we do that? How do we actually increase readiness? I submit self-efficacy, which is one of the most widely studied social psychology constructs, as all of you I'm sure know. How do we do that organizationally? Well, there are four primary ways (there are a few more) that Bandura outlined to increase our efficacy beliefs, and efficacy is connected to success.

Vicarious experience is looking at peers. Conferences like this oftentimes help, because we see colleagues and peers that we perceive as similar to us engaging in these sorts of things and navigating them successfully. And that raises our self-efficacy about what we can do as an organization, because we see people similar to us, with similar resources and orientations and expertise, succeeding.

Verbal persuasion also helps us: motivation from leadership, motivation from experts, and guidance from experts all help increase our readiness.

Improving our organizational culture, particularly around trust, transparency, clarity of purpose, and the vision for the why of an assessment, goes a long way to helping people feel safe, understand how assessment data and assessment initiatives will actually be used, and see how they can slide in and participate.

But the gold standard for increasing efficacy is mastery experience, and this, I submit to you, is the best way to go about it in your organization.
Instead of implementing a huge organization-wide assessment plan, start small: pilot assessment initiatives and assessment processes in pockets of your organization that feel really ready, and try to do that across the organization, cutting across divisions, departments, or different employment classifications. Shepherding those things through and allowing people to see it at work, experience it, and participate in it helps them understand: this is how this is going to go. I've done this before; we did this; I had support; it went well; we had some hiccups, but we worked through them. There were no negative consequences on the back end for my job or for the organizational culture. And that increases our beliefs about how we're set up to be successful in an assessment initiative moving forward. At JMU Libraries, we're dipping our toes into this right now, and we're excited to see how it goes, and really looking forward to the mastery experience work that one of our colleagues, Jody Fagan, is leading right now. So stay tuned; hopefully at a conference next year we'll be able to talk more and report out about how this actually goes. But with that, I'll conclude and leave time for questions. Thank you.

Martha, I believe you're muted.

Thank you, thank you. The first question is: how well does constant tinkering and challenging the status quo work in a year that has already brought overwhelming change?

Yeah, I think maybe I talked about that and then circled back to the contextual factors: this year, you might not actually want to implement a larger change. So hopefully that got corrected over the course of the presentation. We at JMU Libraries actually put off the next stage of our strategic planning process until after the pandemic, because we don't have capacity for it, psychologically.

One might argue that actually this year, because of all the change we're experiencing, we're all building our mastery experience.

Sure.

So another question: are you willing or able to share your readiness assessment?

I don't actually have a readiness assessment that I created. There are a number of them out there, and the evaluation literature actually is the best place to find them. I can share citations with folks if they're interested; just shoot me an email or contact me. I think our contact information is available; if it's not, I'll just drop it in the chat right now.

Yeah, there is interest. Another comment came in that says, I'm interested in hearing more about the readiness change assessment as well. Let me see, there are some coming through the chat. Did you encounter other useful frameworks, theories, or models, other than Lewin's change model?

Yeah, I think that's a great question. I think there are as many change models and frameworks as there are change initiatives. The reason that I picked Lewin's to talk about in this context is because it's a really useful heuristic, and it was one of the early ones; as such, you don't have a hard time mapping Lewin's model onto many of the newer or more sophisticated models or theoretical frameworks. They all involve some disruption of the status quo, some articulation of either vision or strategy or whatever activity change, and then some attempt at solidifying those and evaluating them again.
So in writing something like this, where I'm trying to add readiness on the front of a framework, I wanted to go with the framework that fits best onto all the rest of them. But I think for change theories across the board, readiness is a weakness. They don't often spend a lot of time on how we get ready for a change initiative; they sort of assume a level of readiness that maybe isn't present. Yeah, that's a really great question.

Yeah, a couple more questions. I think we'll tackle them at the end, but let me see if you can handle this one before we move on. How might you go about measuring trust within the organization in the context of the readiness assessment?

Yeah, another good question. I think the same thing as with readiness assessments: there are a lot of really strong quantitative surveys on trust in the organizational development literature that I could point to, but I don't think measuring trust internally with just a quantitative measure is a great idea. I think some consultation helps; I think you can assume that there are at least pockets of trust issues in your organization, if your organization is comprised of people. So maybe provide some opportunities for focus groups, or even for some externally facilitated conversations about how to build trust, and think about trust not as something we have or don't have, but as something that we're constantly trying to build and work towards. Wherever our level of trust is today, let's continue to get better with that.

Thank you, Aaron. Yes, let's continue to get better. We'll try to come back to the questions we have not answered at the end. Now, we will turn it over to Selina Killick, Associate Director at the Open University, with her presentation, Putting a Library Assessment Culture into Practice.

Thanks, Martha. Thanks, everyone. I chickened out and decided to pre-record. So I'm gonna share my screen.

Hello, everyone. My name is Selina Killick, and I'm delighted to be able to present today at the Library Assessment Conference 2020, even if it is from a very socially distanced location here in the United Kingdom. My presentation today is about how I've been putting a library assessment culture into practice to inform our strategy at the Open University Library. For those of you who are following us on Zoom live, you can ask questions in the Q&A now or throughout the presentation, and I can pick them up at the end. For those of you watching this as a recording, I'm available on Twitter at Selina Killick if you want to ask me anything. Alternatively, if social media is not your thing, I've got an email address coming up at the end of the presentation.

For those of you who haven't heard about us: the Open University is the UK's largest higher education provider. We were formed over 50 years ago now, with a mission to open up education for everybody. In that time, we've had over two million people choose to study with us. At the moment, we've got about 170,000 students, predominantly in the UK and Ireland, all over the country, studying with us online, and that's always been our model. As my Vice-Chancellor put it, we were the world's first online university, waiting for the World Wide Web to be invented. 27,000 of our students have a declared disability. To be clear, we don't insist that they declare a disability to us, but if we need to make learning adjustments, we hold that information for those purposes.
That makes us the largest provider of higher education for people with disabilities, and accessibility of information and learning is business critical. Students can choose to study a range of qualifications (180 at last count), made up of 600 modules of study. And when I say we're open, we really mean it, and we're really passionate about it. OpenLearn is our free platform of learning materials, available to anybody on a Creative Commons license. When lockdown, the first edition, happened in the UK, we were quick to maneuver onto that platform and move courses into that space to help the whole world. We launched courses on how to teach online, how to cope with mental health, and some learning materials aimed at schoolchildren, as the UK schools were closed. In that period, we had close to 800,000 courses completed.

The library, then, as you can imagine, is very digital. 80% of our books are electronic, and 100% of our journals are too. For those of you who like your gate-counter statistics, my equivalent is unique visitors to my website: half a million every year come to our website to use our content and resources and skills materials. We make all of our content available open if we can within the terms of our license, and that results in normally about 10 million page views per annum on the library website. We are 24/7, 365. And for those of you who take part in the Springshare web chat, my team and I are very grateful, as are our students, for your help and support for our students when our team are asleep.

We've had a very strong strategy for digital capabilities at the Open University Library, and we've worked well with our faculties to embed library content and skills into the curriculum. 91% of our students now study on a module where there are library content and skills embedded into it. And the value in that is known: students who use library resources and attend library tutorials get a better class of degree at the end of the day. That work is part of our library learning analytics research, which I've presented on at this conference before, so I'm not gonna dwell on it too much today. We are multi-award-winning and very passionate about what we do. And we also practice what we preach: on OpenLearn, for example, this year we relaunched our digital skills badged open course, Succeeding in a Digital World. Feel free to take a look, and feel free to use it if you want to as well.

My presentation today is actually about how we pull together the library assessment culture to inform library strategy at the Open University Library. That combines three key elements: the overall university strategy; insight, which is an umbrella term I use internally to talk about library assessment, because library assessment isn't as common a term in the UK as it is in the US; and expertise. And I'm gonna cover all three of those elements, university strategy, insight, and expertise, in this presentation.

Talking first about the university strategy: I'm not gonna dwell too much on this, because I know this audience well enough to know that the insight and assessment side is obviously the interest of the day. But it's vital that we also recognize that we don't exist in a vacuum. We are always accountable to our university stakeholders, and so it's our role to understand the university's strategic aims and missions and interpret that into designing our services in line with what the university is aiming to achieve.
However, conversely, it's also our role to convey what the library does and how that supports the university endeavor, so that senior stakeholders can see a clear, tangible link between what they want to deliver and what we are doing. And it's a really interesting model; I call it strategy and interpretation. That's a work in progress; I'm not quite there yet with that one.

Moving swiftly on to insight, because I know this is the theme you're here for. Insight, for me, is an umbrella term that I use to talk about all the various different components of insight into my users, to understand the service and design it as a service to support student success. So, for example, the library help desk we offer uses a customer relationship management system. We know who's asking for what information, on what topic, when, and how frequently. Is it an isolated request for help? Is there a consistent problem with one module, where students are consistently asking for the same thing at the same time, so that a training session would be more useful? Is there a consistent problem with one publisher, where there's something we want to talk to them about changing on their platform? It gives us a variety of different insights into our customers, when they're getting stuck and when they're contacting us for help, so we can understand what we need to change to make things better to help students succeed.

We also, obviously, look closely at our resource usage, COUNTER statistics and the like; I'm sure you do similar. And we have data on our internal systems about where content is being embedded and where skills have been embedded into the curriculum, so we know who's using what, when, and how. We look closely at our statistics on training attendance and combine that with student attainment to understand whether our training is providing a successful learning experience for our students. Within the UK, we also have the National Student Survey. This is a compulsory survey that goes to all undergraduate final-year students at every university in the UK, and it is benchmarked across institutions, so we know if we are improving or doing better than somebody else. I'm not a fan. The thing with it, though, is that it is business critical and it is strategically aligned, so we have to look into it and make sure that we're trying to improve our scores in that space, which we are, which is nice.

Underlying all of that, though, is a good team doing in-depth user experience research, understanding what our customers want from us and what our services need to offer next. For example, we did a large library needs project, a couple of years ago now, finding out what our students want from us and what we can do, through a directed storytelling approach.
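As a concrete illustration of the kind of help-desk analysis described a moment ago, here is a minimal sketch. The record fields, the data, and the threshold are hypothetical assumptions for illustration; the talk does not describe the OU's actual CRM export or any code.

```python
# Minimal sketch: count help-desk requests by module and topic to spot
# recurring problems that might call for a training session or a
# conversation with a publisher. Field names and the threshold are
# illustrative assumptions, not the OU's actual CRM schema.
from collections import Counter

tickets = [
    {"module": "A111", "topic": "e-journal access", "week": 3},
    {"module": "A111", "topic": "e-journal access", "week": 3},
    {"module": "B202", "topic": "referencing", "week": 5},
]

by_module_topic = Counter((t["module"], t["topic"]) for t in tickets)
for (module, topic), n in by_module_topic.most_common():
    if n >= 2:  # arbitrary threshold for a "consistent problem"
        print(f"Recurring issue: module {module}, topic {topic} ({n} requests)")
```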
Core to making all of this work is our award-winning library student panel. This was something we developed in 2012, and it was lovely to see Lara Miller's presentation at the previous session of the Library Assessment Conference talk about just this, and to hear that the University of Arizona Libraries has similar. Back in 2012, we were concerned we were hearing only from a very small minority of students. They were either very satisfied with us or very unsatisfied with us. For those of you who are library assessment professionals, you know that those tend to be the people that want to talk to you. We were hearing from the same, quite vocal, minority of students, and we were concerned about the people in the middle. We wanted to hear from them: how are we doing, and what could we be doing better?

Now, obviously we don't have the same fortune as you, of people walking in the door to offer pizza to. So we set up the panel. How this works is we send an invitation to take part to about four and a half thousand students per annum, in two waves, and that normally results in about 500 people agreeing to be on the books with us at any one time. We commit to them that they will never have to do more than four studies in a year; it's only four a year, they'll get dropped off after a year, they don't stay on the panel forever, and they can leave at any point they want to. We send them a welcome pack, and then when they take part in any research with us, we tell them afterwards what we found out, as a summary of the results, and also what we're going to do with that information and how it's going to be used to improve our services. For those of them who don't take part in a specific piece of research, we send regular newsletters anyway to all the panel members, so they know what's going on. We don't always sample all 500 for every piece of work. We don't tend to incentivize too much; we don't feel we need to. We've worked with the panel to understand what works for them and what doesn't. If they're doing something long, half an hour-ish, we will send them an Amazon voucher, but generally it's on goodwill.

Representativeness is an interesting one; how we started initially was worrying about that vocal minority. What we did was make sure that the panel was composed of a population similar to the university's. In the last year or two, we've moved our thinking on this, and we've started to weight the sampling to specific criteria as well, to increase students from a BAME community and also those with a declared disability. BAME is a term that's very commonly used in our cultural setting in the UK university sector. It's not necessarily one that I'm always comfortable with, but it is what my institution and a lot of the UK universities are talking about at the moment. Equality and diversity is a huge strategic priority for us, and we want to do more in this area and we want to be better. So we've weighted the panel slightly to try and understand what more we could do.

The methods that we use with our panel vary: everything from the classics of surveys and focus groups through to directed storytelling, touchstone tours, observational studies, a variety of different creative techniques to try and understand what's going on with them (love letters and breakup letters was quite fun). Now, all of that, understandably, is at a distance, and that's led to some creative thinking for us about how we can do some of these things, and we're always going to be innovative in this space. For example, last year we had a project looking into the information architecture of the library website. To do this, we set the panel members a number of different challenges and tasks, to go and find where they would look for this information and what the more successful routes would be. We used an online package called OptimalSort for tree testing for this. That's just one of the providers in this space; it's not necessarily the only one, so feel free to find one that works for you if you want to do similar. It helped us to refine the library website structure, but also to work out which terms were more successful, which words resonated more (and less) with students, and which phrases were tripping them up. So it was good from that point of view.
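Returning to the weighted panel sampling mentioned above, here is a minimal sketch of one way such weighting can be done. The population, group field, share values, and function are all hypothetical assumptions; the OU's actual recruitment runs through its student surveys and ethics team, not through code like this.

```python
# Minimal sketch of weighted/stratified invitation sampling: draw an
# invitation list whose composition deliberately over-weights target
# groups relative to their base rate in the population.
import random

def stratified_sample(population, key, shares, n):
    """Sample ~n people, allocating invitations by each group's share."""
    sample = []
    for group, share in shares.items():
        pool = [p for p in population if key(p) == group]
        k = min(len(pool), round(n * share))
        sample.extend(random.sample(pool, k))
    return sample

# Hypothetical population: 20% have a declared disability.
population = [{"id": i, "declared_disability": i % 5 == 0} for i in range(1000)]
shares = {True: 0.3, False: 0.7}  # over-weight the smaller group
invites = stratified_sample(population, lambda p: p["declared_disability"], shares, 100)
print(len(invites), sum(p["declared_disability"] for p in invites))
```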
From a more qualitative point of view, we also do diary studies, and this has been a lovely little piece of work into understanding how our live training sessions could be strengthened, and what works well, obviously. We sent panel members just a very simple sheet of A4 paper (A4 paper size in the UK; letter size in the US, sorry) to print out and have alongside them when they were taking part in a live training session with one of our librarians. Throughout the session, at various intervals, they just made a little pencil note of what they were thinking and how they were feeling about what was going on, took a photo of it, and sent it back to us, giving us a really rich idea of what's going on in the training sessions and how they can be improved.

All of this insight comes together, a variety of different sources and methods, to help us improve our services and improve student success, and it's important that you never look at any one thing in isolation. It is the whole collective: what are your users saying to you? What are you hearing from them?

So going back to my model: we had university strategy, we've had the insight. The most important thing, though, and the one thing we're not loud enough about in our sector, I feel, is our expertise. We are professionals. We are library professionals, and I've been advocating for loud librarianship for years now. We're not book-stackers. We're not working in an oversized Amazon warehouse. We are the ones that take that insight and that university strategy and turn it into a library strategy. Without us, there's nothing: we are the most valuable resource the library will ever have.

So that brings me round to our model. How I build the library strategy is by working with the university strategy, insight, and our expertise. If you have just the university strategy and our expertise, but no insight from our users that this is something that they want and need, you're not gonna have any interest in the service and it's not gonna get any take-up. You have to have all three elements. Similarly, if you have the expertise and the insight but it doesn't align to the university mission, you're not gonna get the resources, whether direct or indirect, that you need to drive that strategy forward. Everything costs, and you need to be aligned to what the university wants you to do. Finally, if you don't have the expertise but you have the university strategy and the insight, you don't have the capability to deliver on what your university wants from you, and you'll need to build that capability within the team to drive that strategy forward. We are professionals. We are continually improving ourselves and our services, and that capability is something I will always seek to grow within every university library I've ever worked for.

I have a lovely team at the university, and I'm very proud to be one of them, but I would like to just acknowledge a few of them that have helped me with this presentation. So thank you very much, team. And thank you very much, participants. If you have any questions, Selina Killick at open.ac.uk is my email address. The slides of my presentation are also available on my website, selinakillick.com. Thanks very much.

Wonderful, Selina. Is it possible, I wonder, to show that slide with the Venn diagram?
And yeah, a comment did come in that people liked it. So if you have questions, please use the Q&A function and I will read the question for the speaker and the audience. And if we can show that Venn diagram, I can formulate the question I think I have for you a little bit better. There it is. Can you see that now?

It is up, yeah.

So, you know, it was intriguing the way you labeled the intersections in dark blue, like the "no interest" and "no capability" ones.

Let me flip to that one, because that's towards the end; that was the first one. I'm still working on this model, and I'd be interested to have some conversations with people about it.

Yeah, I'm hoping we can have at least the beginning of a conversation here. So there are situations, right, where the university strategy is there and the expertise is there. And oftentimes one might say there are brilliant ideas to be implemented that users may not be quite aware of; the market, the users, however you want to address the people who will be consuming this idea, they have no interest yet. Any thoughts on how to address that situation? Also, I think you mentioned there are many situations these days where we have no capability, right? We have insight into what the users need, we also have a strategy, but, you know, we all want more data management people, and there are not enough data management people in the world these days, so we need to be increasing capability. So again, any insights on how to address that no-capability quadrant? And in terms of expertise and insight, again, it's an issue of: there is a need, and maybe a recognition that something is needed, but whether it's fading away or whether it's a far-reaching idea that's coming from the future, how do you address this no-resources issue? Are you letting go, or are you missing an opportunity?

Yeah, I mean, there are various strategies and different approaches you can take. I think, from my point of view, the expertise we lend into this space flows both to the university stakeholders, to shape that strategy, and also to our users, to understand: you think you want this, but actually, wouldn't it be nice if you had that? You know, I've had many a case where our users come to me going, well, what I want is this one particular thing and that will fix my problem, and they've already formed an opinion on what the answer to their situation is. What we've tried to do with our research, and how I use the research, is actually just to explore how they like to use information and what they like to do with their research and studies, to work with them in partnership to come up with a solution, rather than going to them with that solution, if that makes sense.

The other side of things, the capability thing, is something we're always building on. Yep, definitely, we always want more people involved in data and that side of things. In the libraries I've worked with, there's always been a strong culture of what we call staff development in the UK; I don't know if that translates well in the US. And the fact that we're even here at this conference shows that we're growing and learning all the time about how we can do things differently and better, keeping an eye on the future and what the future needs to be.
I think part of my role as a leader for the team is to actually advocate to the university for what our team do, how we do it, and what we do. You know, a lot of my academic community, and yours might be the same, have come to me with a preconceived idea of what a librarian is, based on when they were an undergraduate student themselves. And actually, our role as knowledge professionals, information professionals, is where I come into that loud librarianship thing, getting shouty about what we actually do. The fact that we are, you know, educated to a high level, that we are researching, that we are publishing, is a very vital thing to get that moving on. I hope that... I kind of talked around your question without answering it, but I hope you didn't notice that.

This is good. It's the beginning of a conversation, as I said. There is one more question in the Q&A; I noticed the previous speakers answered in the Q&A as we were running out of time, so if you can answer this one: it asks if you can share protocols and practices for the student panels, or point us to citations where you already have other studies.

Yeah, so I've presented on this; actually, I've co-presented with Sam Dick on this panel and some of the things we've done in the past. And I can see another one that's come in on the chat as well, so I'll try to wrap the two together: the incentivizing. How we do the selection is, actually, we don't advertise that they can take part in this; we email directly. We work with our student recruitment and student surveys ethics team to get a representative sample of students sent to us to send the invitations to, and they manage that process for us. We work with them to get a sample of our population, and we send the invites out. That's been quite a strategic tactic on our part, mainly because some people really wanted to be part of the panel. That vocal minority I spoke about, they felt they were the ones that should be shaping our strategy. And we were like, well, we are listening to you, but what we'd like to do is get a more inclusive view. And so we work to have a representative sample, and it's not an open invitation. We don't market that they can join the panel, but we do email them directly and advertise it that way. And that results in about a 10% take-up from the invites that go out.

Great, thank you, Selina. So we can move to our next presentation. Our next presentation is Designing Outcomes for Multifaceted Organizational Planning and Assessment: A Case Study of the Logic Model Framework for the UC San Diego Library Organizational Renewal, presented by Eric Mitchell, Audrey Geisel University Librarian at UC San Diego Library, and Jeffrey Lu, Clinical Librarian at the same institution.

Hi, my name is Eric Mitchell, the Audrey Geisel University Librarian at UC San Diego. I'm joined today by Jeffrey Lu, the Clinical Librarian at UC San Diego. We are excited to speak with you today regarding our work in using logic models as a design and assessment framework in organizational and strategic planning. Using a case study approach, this presentation begins by defining the context and goals of our planning process, continues by exploring the logic model framework used in our planning and assessment, and concludes with an analysis of how the logic model framework informed and supported us. Let me start today by setting up the context of our assessment and planning process.
I joined UC San Diego in 2018 as university librarian and, like many new leaders, focused on understanding our organization and our strategic priorities. Our library was at a unique point, with many leadership retirements and an opportunity to look back on an organizational change that had occurred about five years prior. These factors led us to a planning process that included parallel efforts to renew our strategic priorities, recruit new senior leadership, revise the leadership portfolios, and evaluate the impact of previous organizational changes. To bring structure to this process, we sought out frameworks that would align our three assessment areas and help us create a renewal plan grounded in our organizational assessment, strategic priorities, and plan for organizational development. We embraced the logic model planning and assessment framework for a number of reasons that we will explore in this presentation. But at a high level, this model proved to be a good fit because of its focus on aligning activities with outcomes, its support for short- and long-term assessment approaches, and its ability to represent high-level goals in connection to more detailed structures and activities. Let me turn the presentation over to Jeff to define the logic model in more detail. Logic models are visual diagrams that depict an organization's work and the intended results. By connecting library activities to outcomes, the model helps facilitate planning and assessment. It's especially useful for studying the library's impact on the broader community. Government agencies, philanthropic organizations, universities, and libraries have adopted this approach. The logic model is composed of three elements. The first is the work of an organization, depicted in blue, and it consists of resources and activities. Resources are what's needed to do the work, and activities are what's done with the resources to implement a program. The second element is the intended results, depicted in gold, and is divided into outputs and outcomes. Outputs are the products and services that arise from work activities, and outcomes are the changes that occur due to the program work. Outcomes are defined as three types of changes: user learning, user action, and user condition changes. A condition is the fundamental functioning of the community, including its environmental, social, economic, or civic wellbeing. User changes are interrelated: if users experience positive changes in learning and action, then the community may improve its condition. Additionally, program outcomes can be differentiated by time period: short, intermediate, or long-term. The last element is the relationships between the work and the intended results. We can use an if-then chain of reasoning to draw relationships as follows. If we have the requisite resources, then we expect to accomplish the planned activities. If we achieve the activities, then we hope to produce the intended outputs. If we deliver the outputs, then we may realize the outcomes of positive change in user learning and action. And finally, if users benefit, then the community may improve its condition. Logic models vary in design and are structurally flexible. Models come in different sizes and formats, suiting various organizational parameters, needs, and contexts. They can be detailed or simple and scoped at the project or organizational level. We can customize the logic model for maximum relevance.
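To make the if-then chain concrete, here is a minimal sketch of a logic model as a simple data structure, written in Python. The element names and example content are hypothetical illustrations of the general idea described above, not a reproduction of UC San Diego's actual model.

```python
# A minimal sketch of a logic model: resources -> activities -> outputs -> outcomes.
# All example content is hypothetical.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    resources: list[str] = field(default_factory=list)   # what's needed to do the work
    activities: list[str] = field(default_factory=list)  # what's done with the resources
    outputs: list[str] = field(default_factory=list)     # products and services produced
    outcomes: dict[str, list[str]] = field(default_factory=dict)  # keyed by time horizon

    def if_then_chain(self) -> str:
        """Render the if-then reasoning that links the work to the intended results."""
        return (
            f"IF we have {', '.join(self.resources)} "
            f"THEN we can carry out {', '.join(self.activities)}; "
            f"IF we achieve those activities THEN we produce {', '.join(self.outputs)}; "
            f"IF we deliver those outputs THEN we hope to see {self.outcomes}."
        )

model = LogicModel(
    resources=["staff expertise", "collections budget"],
    activities=["instruction sessions", "research consultations"],
    outputs=["workshops delivered", "guides published"],
    outcomes={"short-term": ["user learning"], "long-term": ["improved community condition"]},
)
print(model.if_then_chain())
```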
Here's an example of a customization: a generic logic model for library outreach services. It defines specific outcomes for outreach activities and their potential impact. Here's an example of a detailed logic model. This diagram is from the Rochester Public Library. It clearly outlines the service outputs and shows how short-term outcomes combine to build long-term impact. Logic models are used in health settings as well and contribute to program evaluation. For example, the CDC has developed an evaluation guide for its WISEWOMAN program for promoting heart-healthy lifestyles. This logic model frames the metrics and indicators for process and outcome evaluations. Logic models are appealing for their explicit, systematic, and user-centered approach. Both the articulated outcomes and the articulated relationships are beneficial to organizational strategy and focus. The clear visual communicates the strategy and contributes to a shared understanding of program goals and work. Furthermore, the outcome definitions support assessment, serving as indicators of positive change. Lastly, the user focus and the holistic exploration of different stakeholder perspectives are beneficial. Here are some limitations of logic models. First, the if-then chain of reasoning may be too reductive. We could address this shortcoming by identifying dynamic relationships in our organization and expanding our perspective to a broad range of stakeholders, activities, and contexts. Secondly, the logic model focuses on expected outcomes, with risks of optimistic bias and unrealistic expectations. To address these tendencies, we need to investigate our assumptions and look for unexpected outcomes as well as negative and neutral changes. Thirdly, a logic model represents a moment in time. As our work and community change, the logic model requires updating. We need to align closely with user outcomes to move in step with broader academic change. In learning how to apply logic model frameworks at UC San Diego, we relied on the W.K. Kellogg Foundation's guide to logic models. You might also find the resources and online course by the University of Wisconsin-Madison Division of Extension useful as supplemental training. Relying on the Kellogg guide, we developed the model you see on your screen. In building our own framework, we implemented several modifications to help us stay focused at the right level in terms of planning. We adopted a high-level orientation and focused on core library roles rather than on specific services and activities, so that we could capture the whole organization through our five frames for annual goal setting. We also consolidated several model elements to focus on essential activities and services. For instance, we summarized inputs, activities, and outputs in the second column. In another simplification tactic, we omitted user learning and action outcomes from the table and presented user condition changes instead. To help us communicate with a broad range of stakeholders, including staff, users, and campus administrators, our logic model relies on text narrative for straightforward explanation and includes UC San Diego goals to demonstrate the library's support of campus priorities. Now that we have shown our high-level model, I would like to talk about how we designed and validated this model. In our case, the data collection, analysis, and validation for our logic model occurred through our library's organizational renewal process, following three phases that built sequentially. In phase one, we conducted a leadership advisory and program structure analysis using library and university-wide input.
The outcome of phase one informed planning in phase two, where we focused on leadership renewal planning and strategic priority setting. And by the end of this phase, we had largely populated our logic model. In phase three, we implemented our plan and recruited new senior leaders for the re-envisioned administrative portfolios. Throughout our process, the logic model framework served as a way to bring together our organizational assessment and strategic planning activities. As we will observe in our findings section, the model served as a map for focusing on user-centered outcomes while also connecting our strategic priorities and organizational structures. In our findings section, we are going to explore three contribution areas with regard to the logic model: contributions to strategic planning, leadership planning, and organizational planning. In our paper, we summarize these three essential functions of logic models and describe the model's contribution to our process. Along the Y axis in this table, you should recognize three features we've already touched on: a focus on user change outcomes, connection to broad impact, and support for holistic, structured planning processes. Although we do not focus on it in detail in this presentation, this structured approach was highly informative in helping us think through how we should approach direct and indirect decision-making structures. In our next three slides, we will explore the positive impact of logic models in each of the three planning areas represented along the X axis of this table. Strategic plans are simultaneously roadmaps for action, marketing documents, and tools for alignment with stakeholders, users, and library staff. It can be difficult to find the right balance for a strategic plan given these three roles, especially as many plans are formed with broad user and stakeholder input and often feature tactics for implementation along with strategic priorities. The logic model framework helped us balance these multiple roles. First, by using user-change-focused outcomes as the primary impact statements, the logic model frames priorities in a way that connects with user and stakeholder communities. With outcomes defined at a high level, the validation process can be highly inclusive, contributing to sustained buy-in and understanding. Second, the logic model's resources, activities, outputs, and outcomes framework can structure strategic plans to support detailed implementation and assessment procedures without losing focus on the why. And finally, the logic model serves as a relatively simple and transparent information tool. It supports communication with users by exploring user outcomes, and it supports communication within the library and the university using specific work activities and outputs. Serving as a bridge between the high-level intended outcomes of an organization and the more detailed daily work needed to achieve those outcomes, the logic model provides strategic plans with a way to balance the call to action without a risk of overwhelming detail. Logic models can also provide a framework for leadership decision-making in direct and indirect reporting lines. This was useful at UC San Diego Library given that leadership planning occurred in the context of multiple leadership vacancies. In our instance, the logic model was a useful tool for keeping strategic priorities and organizational needs in mind during the leadership planning.
For instance, it provided a way for staff at all organizational levels to understand and discuss how their work connects to leadership decisions and actions, and it clarified where work should be done across the organization as opposed to within a specific program. And finally, it provided a logic-based tool that supported delegated decision-making. Organizations often identify changes to their internal structure and operations after creating a strategic plan, posing potential barriers to action. This can happen because the strengths and needs of an organization do not always surface in strategic planning activities. In the context of UC San Diego, the library had undertaken a significant restructuring approximately five years before this most recent planning process. We needed to assess the impact of these changes to ensure that the library had adapted to the previous restructuring and that it was responsive to emerging needs. This internal assessment helped us ask important questions about our capacity to meet strategic priorities, and the logic model proved a useful lens to explore potential changes. For example, the logic model diagrams a path from strategic priorities to resources, activities, and outcomes. Making this connection supports decision-making and resource allocation that are centered on user needs and long-term objectives. Now, while we have found the logic model useful, we've also noted a few drawbacks and disadvantages. First, logic models invite a high-level perspective that may overlook daily operational details, and folding every library activity into the logic model can work against its legibility and utility as a communication tool and cohesive strategic document. To counteract this risk, we believe that being intentional about how granular to be in the plan is important. If more levels of detail are needed, subplans could be created to capture inputs, activities, and outputs in more detail. Second, by being action-focused, logic models can be at risk of becoming obsolete if the context changes or priorities shift over time. We plan to mitigate this risk by keeping the library-wide model up to date through an annual renewal process. And finally, as in all strategic planning exercises, gathering a broad range of perspectives and building consensus around priorities and intended outcomes can be challenging. Additionally, making decisions about which priorities to include or which outcome areas to focus on requires an engaging and meaningful process. While the logic model provides a construct to support more detailed organizational planning, it lacks significant structures for determining priorities. Relying on broader organizational goals, employing prioritization frameworks, or using consensus-building activities are some possible approaches for mitigation. In summary, we feel that logic models have advantages for libraries and that they help libraries address some of our own unique challenges. Specifically, logic models help us align our activities with long-term outcomes, improving our ability to plan and assess. And additionally, logic models provide a clear path between our work and broader organizational goals. Thank you. Thank you, Eric and Jeffrey. So if you have questions, please use the Q&A function and I will read the question for the speaker and the audience.
And as you think of questions, I do want to thank both of you for showing us a real-life example of how a logic model is, in some ways, a strategy map of sorts, or, as we used to say in the old days, how a strategy map is a kind of logic model. I noticed that the document you shared was just for 2019-2020. What is the long-term framework that your logic model can address? Thanks so much, Martha. And I have to admit, I missed about half that question; my home internet seems to be a little unstable, but I heard the last piece of it, about the long-term framework. I think for me, one of my aha moments in using the logic model was the value of shifting to an annual strategic planning process grounded in some actionable priorities. And so our current strategic plan is a one-pager that's focused on these priorities and on the university's strategic plan and priorities. This isn't maybe directly addressing your question, except to say that for me, the value of strategic planning for the library is to be flexible and responsive to changing university priorities. An example where this has kind of borne fruit this year is that, by using the logic model framework, we were able to incorporate a new strategic planning process our university has put in place, called the strategic plan for inclusive excellence, centered on diversity, equity, and inclusion issues. And so I think for me, the flexibility of the logic model is maybe its long-term value. So would it be fair to say that, in terms of timeframe, you follow the university timeframe for your long-term strategy? Whatever year period your current university strategic plan covers, are you basically following the same long-term perspective? Great question. I think the model we're moving to in the library, as at UC San Diego as a whole, is an annual planning process grounded in a priority refresh every three years or so. The priorities we refreshed a couple of years ago, we kind of put a three-year timeframe out there for. I forget when our university's priority review happens, but when it does, I would expect that we would take the renewed priorities and go through an alignment process. And again, there was one of our slides where we showed university priorities, kind of at a high level, connected on the right side of our logic model. I don't think they have to fully align, and I think it's reasonable for sub-organizations, like the library in a higher ed setting, to do shorter iterations on strategic planning. Great. A question: do you employ logic models at individual assessment levels? So actually, there was really interesting work by a group of librarians at UC San Diego, Corey Burnetti, Tamara Rhodes, and a couple of other folks I'm forgetting who were involved, who used logic models to design outreach around information literacy and academic engagement. That actually was in part the inspiration for using logic models at this higher level. Yeah, and actually one of the questions was about logic models at the departmental level, research and instruction for example. Thank you. So one last one: do you have your library departments create department-level logic models? I would say we are about halfway through the evolution to a new strategic planning process. As I mentioned in our presentation, we were onboarding new senior leadership.
And so what we have planned for the first half of 2021 is to go through that annual strategic renewal process, where we maybe do a little bit more strategic priority forecasting. Something that UC San Diego has done really well prior to my arrival, and which we've continued, is an annual goal-setting process that aligns with strategic priorities at the program or departmental level. Great. There was also a question about whether the logic model would prove to be a good assessment framework in that case, a good replacement framework. I'm not sure that would necessarily be the case. I think, again, one of our aha moments here was that the logic model helps you think about approaches to assessment differently and avoid designing an overloaded assessment framework that may not fit in every situation. Wonderful. Thank you, Eric and Jeffrey. A couple more questions are in the Q&A; if you can, please answer them using the Q&A panel. We're going to move to our next presentation. Our next presentation is entitled Like Peas and Carrots: A Case for Assessment and UX Teams, presented by Emily Daly, Head of Assessment and User Experience at Duke University Libraries, and Michael Luther, Head of Assessment and User Experience at Emory University Libraries. Thanks so much. I'll go ahead and share my screen. All right. So thank you, Martha. Good afternoon, everyone. Thank you for joining us for our presentation, Like Peas and Carrots: A Case for Assessment and UX Teams. My name is Michael Luther and I'm the Head of Assessment and User Experience at Emory Libraries in Atlanta, Georgia. And my name is Emily Daly. I'm Head of Assessment and User Experience at Duke University Libraries in Durham, North Carolina. Today we want to talk about our assessment and UX teams: how those teams came to be, the different ways we approach our work, and some of the things that we've learned as AUX team leaders. So to begin, when we talk about AUX, we're talking about the union of the functions of assessment and user experience. When viewed narrowly, assessment is about tracking performance or measuring outcomes, whereas user experience, again viewed narrowly, is usually associated with the IT world, with design, and specifically with user interfaces. So why combine them? Because when you take a more holistic view, these two functions have a lot in common. They often use similar methods and techniques, and they can be applied in both the physical and virtual worlds. This holistic view allows a lot of flexibility in how you build an AUX team. More fundamentally, I would argue that the work of AUX comprises several distinct strands, although these strands frequently get braided together. In most cases, the initial driver for AUX work is the need to report information or track performance. Digging deeper, I believe the organization seeks to discover unknowns and make more informed decisions based on a robust, mixed-methods body of evidence. We want to better understand our communities in order to better serve them. So now we'd like to discuss these two AUX teams and how they came to be. This is an interesting point of comparison, because Emily has been at Duke since 2006, serving in various roles before forming the nucleus of her AUX team in 2013. I, on the other hand, was hired by Emory in 2019 to lead a team of folks who had worked at Emory for some time but had only recently come together. I was interested to learn that assessment at Emory got its start back in 2005 through an ARL initiative.
LAC's own Steve Hiller came to Emory and consulted with staff to help launch the program. An assessment integration group was founded in 2015 to support a libraries-wide assessment approach. This evolution continued with the founding of AUX as a team in 2018, and I was hired to lead it in early 2019. It might be interesting to note at this point that Emory's AUX team is located in the services division, the idea being to place it closer to the users. Duke's AUX program, however, was started in the services division but has since moved to the digital strategies and technology division in order to strengthen assessment and UX practices in library IT. So here we are. On the left is Kristen Majors, who coordinates assessment efforts for the services division and serves as the division representative on the assessment integration group. Next is Chris Bledis in the gray shirt. Chris is the service design librarian and also the product manager for the Springshare platform. I'm there in the middle. Doug Slaughter is in the blue shirt; he is our collections analyst who spends much of his time studying and visualizing collections data from Alma and our vendors. And on the right is Pat Culpepper, who coordinates the assessment program for Emory Libraries as a whole. She manages our LibPass data warehouse and visualizes this data in Tableau. Our team is responsible for a lot of data management, visualization, and reporting, the biennial library survey, as well as assessing library outcomes. On the UX side, our responsibilities are a bit less routine, but I would include here Springshare product management as well as UX projects both large and small. We also serve a consulting role for our colleagues around Emory Libraries. So now I'll turn it over to Emily to talk about her team at Duke. Thanks, Michael. So assessment at Duke Libraries also goes back many years, but our early assessment efforts were focused on external reporting and working with faculty to assess student learning outcomes. In 2013 and 2014, as a result of staffing changes and retirements, we had an opportunity to rethink our approach to library assessment. My AUL at the time and I successfully advocated for a new UX department with just two staff: Tom, a web developer with a strong interest in UX who was already at Duke Libraries, and me as department head of this small team. The UX department officially started in January of 2013, and soon after we began advocating for a larger scope and more staff to support our increasing responsibilities. By May 2018, we had hired one assessment analyst, Joyce, pictured on the top row between Tom and me, and welcomed Corey and Angela from other units in the library. Corey and Angela, pictured in the second row, were interested in AUX's work and could help us expand our capacity in project management and data visualization. In 2016, we serendipitously connected with Duke's PhD programs in psychology and neuroscience and have since hired three outstanding graduate student assistants from these departments. Our current student, Candace, works just five hours per week, so it's extremely part-time, but these graduate students have been an invaluable addition, helping provide a student perspective when we need it and increasing our ability to move projects forward. In May 2018, when we welcomed Corey and Angela to AUX, we took on a third core function: project management.
Our department had gained a lot of experience managing UX and assessment projects, so library administration asked that we formalize this function and support our colleagues leading projects throughout the library. Now we'll highlight a few projects that we hope illustrate the natural connections and overlap between assessment and UX. On the surface, our biennial user satisfaction survey may seem like pure assessment, but it is strongly connected to UX. This is one of the main projects for our library-wide assessment core team, who work with AUX staff to do everything from determining the target audience of the survey to providing input on survey data dashboards like the one pictured here. Following every biennial survey, AUX leads a library-wide staff survey workshop. This year over 70 staff registered for the workshop, held over Zoom of course, and we reviewed survey data and then developed and prioritized recommendations for improvement. AUX is truly woven throughout the biennial survey, from initial user testing of the survey instrument to implementing changes based on what we learn. Our department also played a large role in developing a new library catalog. The lead developer for the catalog is part of AUX, and I serve as the product owner. Our expertise in website UX and user research was critical to the success of this new catalog, which we designed to be highly usable for novice and advanced researchers alike. This project also exemplifies the agile approach we've adapted for Duke Libraries. For nearly all website and software projects, we work in two-week sprints, identify issues for each sprint, assign story points to issues, et cetera. Finally, I'll highlight a recent project that involved assessment, UX, and project management. Because of AUX's extensive work with students, spaces, and online interfaces, we took the lead on several aspects of reopening the libraries to students and faculty in August. One of these projects was to implement a seat reservation system. Our team considered every step of the journey that users would take to reserve a study seat and then made decisions about policies, signage, seat descriptions, floor plans, seat labels, and communication. And because we had to make decisions quickly, and in some cases in the absence of user feedback, we planned a number of assessments for after we launched the new service. Since mid-August, we've analyzed usage data and created dashboards like the one pictured here. We've reviewed the analytics of relevant web pages. We've run user feedback surveys, conducted remote usability testing, and led one-on-one user interviews. We've rolled out improvements based on what we've learned, and we will continue to make changes as we prepare for the spring semester. With that, I'll hand things over to Michael to describe a few projects from Emory. So here's an example of a collection management dashboard that Doug recently developed. This dashboard visualizes firm order data. The visualization combines a cost-per-use analysis for each subject and plots the titles on the graph above. This view is filterable, such that both the table and the graph update as the filters are set. Also, hovering over a particular circle shows the title-level detail. A collection tool like this one can be very helpful to a collection manager in identifying gaps and informing budget allocations.
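As an aside, the core calculation behind a dashboard like this is straightforward. Here is a minimal sketch in Python using pandas, with made-up titles and hypothetical column names; the actual dashboard described above is built on Alma and vendor data in Tableau, not on code like this.

```python
# A minimal sketch of a cost-per-use analysis over firm order data.
# Titles, subjects, and figures are invented for illustration.
import pandas as pd

orders = pd.DataFrame({
    "title":   ["Title A", "Title B", "Title C"],
    "subject": ["History", "Biology", "History"],
    "cost":    [120.00, 340.00, 85.00],
    "uses":    [48, 5, 0],
})

# Guard against divide-by-zero for titles that were never used.
orders["cost_per_use"] = orders["cost"] / orders["uses"].replace(0, float("nan"))

# Roll up by subject to spot gaps and inform budget allocations.
by_subject = orders.groupby("subject").agg(
    total_cost=("cost", "sum"),
    total_uses=("uses", "sum"),
)
by_subject["cost_per_use"] = by_subject["total_cost"] / by_subject["total_uses"]
print(by_subject)
```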
So this is a project that Chris and I worked on to test our newly launched library website on real users. We ran 14 participants from three user groups through 24 tasks. In this project, we really wanted to bring together multiple methods to better understand the user experience with the site. We combined direct measures of success or failure with technical observations of which route the participant took through the website, and we included in the report many illustrative comments to help flesh out the user's thinking. Based on this data, for each task we identified themes and noted any areas where the site wasn't performing as intended. As a third example, I wanted to talk about a study of database types that we conducted. This is an example of a study that, while very narrowly focused, can become part of a broad base of knowledge about user behavior. This study was a survey that mimicked a card sorting activity. It was administered to a group of subject librarians and to library users, and we wanted to find out to what extent users and librarians would sort a specific resource into the same database category. We learned that in some areas there was a lot of agreement between the two groups, and in other areas there was much less agreement. This has potential implications for several areas of the library.
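For readers who want to quantify "a lot of agreement" in a study like this, here is a minimal sketch of one common approach, Cohen's kappa, which corrects raw agreement for chance. The category labels and data below are invented for illustration; the study described above did not necessarily use this statistic.

```python
# A minimal sketch of chance-corrected agreement between two sets of
# categorical ratings (e.g., librarian vs. user database categories).
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for agreement expected by chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical sorts of five resources into database categories.
librarian_sort = ["news", "news", "data", "images", "data"]
user_sort      = ["news", "data", "data", "images", "news"]
print(f"kappa = {cohens_kappa(librarian_sort, user_sort):.2f}")
```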
So Emily and I have talked a lot about the privilege of having a team of five to dedicate to this work, and we feel it's important to acknowledge that here. To speak frankly, Emory and Duke have a lot in the way of resources to fund this work that many other universities do not. So with that in mind, we want to talk a little bit about the joys as well as some of the potential tribulations of AUX teams. In terms of joys, first and foremost, a team of five provides a broader skill set, and there is an opportunity for some specialization. As we've already mentioned, the field on which we play is quite large and complex, so diverse skills allow for more flexibility in your approach. We all know how important buy-in from library leadership is, and having a team of five kind of presupposes that buy-in. You don't get a team of five unless you already have a certain acknowledgement from library leadership that the work is important and worthy of investment. But once that investment is made, the team is rightly expected to pay dividends. The principal challenge, I would say, is the notion that any project that has to do with assessment or UX should be led by the AUX team. And I'll add that this notion can come from outside the AUX team, but it can also be something that the team unrealistically expects of itself. There are times when we may need to manage expectations and share the responsibility with our very capable colleagues. Prioritization, coordination, and communication are key, and any channels or systems that we can establish to improve those over time are going to be increasingly important. Now Emily is going to talk about some of the lessons we have learned as AUX team leaders. So as many of you know, I'm sure, UX and assessment work is endless, and it can be tempting to attend to whatever is in front of us, but it's important to be strategic, to consider what will have the highest impact on users, and then determine how best to resource a particular project. It's crucial to be aware of the potential for scope creep and to focus projects as narrowly as possible. I've also learned to acknowledge that projects my department initiates always lead to more work for our colleagues, so I'm transparent from the start. I tell my colleagues that I get it: this creates more work for you, but let's collaborate to scope a project that is well worth your effort. Just as Aaron Nolan mentioned during his presentation earlier today, assessment is viewed by many of our colleagues as an add-on or nice-to-have phase of a project. So we've learned not to push too hard when our colleagues are feeling overwhelmed by other aspects of their jobs. Finally, we rely heavily on our colleagues to conduct assessments and implement improvements, so it's critical that we have library leadership support when we spin up yet another project team or evaluate library services or spaces that one of their departments manages. Michael and I are fortunate to have full departments for this work, but we've seen ways to advance assessment and UX without a full department or even a dedicated position. Many libraries have effective assessment teams with staff who might already do some assessment as part of their job responsibilities. Some libraries have explored incorporating phases for assessment and UX research into existing or new projects, or leveraging consulting as a service. And developing even informal communities of practice can be a great way to share ideas, build support, and learn from others' challenges and successes. As I hope we've demonstrated, our work has evolved to meet the needs of our libraries and our users, and I anticipate that continuing. Let's wrap up now with some things we'll be working on in the coming months. We've been able to continue our successful AUX practicum and paid graduate assistantship in our remote work environment, and my department is actively working to recruit practicum students from underrepresented populations. We will continue to assess fall services and help plan for spring 2021 and beyond. We are beginning a third in-depth user study, this one on international students at Duke. We're also migrating our library website from Drupal 7 to Drupal 9, and we're taking this opportunity to streamline our content and ensure it is user focused. All right, Michael, your turn. We have several projects that are either already in the works or on the horizon. One opportunity, I believe, is with our Springshare platform: to make more meaningful use of all the data that these tools produce. We're also conducting a needs assessment of Emory Libraries in collaboration with our assessment integration group. Specifically, we want to determine the AUX needs for the libraries. Where are things working? Where do people feel they don't have the information they need to make informed decisions? My hope is that this project will help us sort out roles and responsibilities around AUX functions libraries-wide. What makes sense to centralize within AUX? What makes sense to keep at the division or library level, perhaps with some consultation from our team? And what needs to be coordinated through the assessment integration group? These are important questions. Next, Pat is putting the final touches on a dashboard of core metrics that can be shared with university stakeholders. And finally, our spring survey that was slated to launch on March 31st, 2020, needless to say, did not happen. So we're in the middle of a much smaller survey this fall, more related to remote teaching and learning. And so with that, we thank you for your time and attention, and we are happy to answer your questions. Thank you, Emily and Michael. So if you have questions, please use the Q&A function and I will read the question for the speaker and the audience.
And there is a question that has come in. There's interest in your database sorting activity. What did you do with the results? That's an interesting question. It was primarily aligned with a larger project that the folks over in our technical services unit were doing to try to redevelop some of those database categories. They didn't want to do that in a vacuum, so this project mirrored that one so that they could have some firm ground to stand on as to why they made the choices they did. So that was one of the applications. Potentially, I think library instruction would be another, and there are others that probably haven't been fully implemented, but that was the main one, the one that I mentioned. Great, another question for you, Michael. What platform do you use for the library profile dashboard? So we use Tableau. Both Pat and Doug have become quite proficient in that tool, and so that is the tool that we use most frequently and most efficiently. And Emily, any insights on how you use Tableau? Sure, we use Tableau for our biennial survey. The dashboards that I was showing for the biennial survey are from Tableau, and then I also showed some of the dashboards from our seats project back here; those are also from Tableau. So we use those both for helping to share snapshots of data like this, and we also have some that are available for others to look at live, so we can keep the data updated on a live basis. For instance, for our study seats data, we were producing graphs to share with library leadership just once a month, and then we realized it would be really helpful to keep these updated on a regular basis. So we've done that and can re-run the data more easily as a result. Great. So the last two questions, I'm going to combine them. One asks about the Tableau dashboards you use, whether they're public-facing, just internal, or both. And the other is targeted specifically at Michael: is there some way to access the collection management dashboard for firm orders? Can you speak generally about the software and process involved? So maybe, Emily, you address the Tableau public-facing, internal, or both question? Yeah, so we have both. For things that have private data, comments, for instance, from our biennial surveys that we don't want to make publicly available, or dashboards that we want our subject librarians to use for collections decisions and that kind of thing that we don't necessarily want to share externally, we have internal-only dashboards. And then we also have public-facing data that we're happy to share; those are on the public Tableau server and are linked from our library website. So it really just depends on the nature of the data as to whether or not we want to make it internal use only, authenticated behind a username and password, or whether it's something that we feel comfortable linking to from our library website for all to use. Yeah, and at Emory, I would say at this point, most of our Tableau dashboards are internal. I would like to get to a point where there's stuff for a wider audience, and we're moving toward that point, but right now I think we're moving to more of a university audience, still not maybe a public audience. But I think ultimately that's the direction we will go over time.
In terms of the collection management dashboard, I really appreciate your interest in that. It's not something that you can get to right now, but I think we can follow up with you after the session and talk with you a little bit more about it; I'd be happy to do that. But when you're getting into resource pricing and things like that, things can get a little dodgy, and so that's not something that I think we would put out into the world, but it's something that I would definitely like to talk with you more about. So feel free to follow up with me afterwards. Thank you, Michael and Emily. It's been very informative. I'm actually going to leave the last question in the Q&A if you want to answer it there, Michael. So we're going to move on to our next presentation. Next, we have Analyzing Library Expertise: How Gonzaga University's Foley Library Interrogates Expertise Data Towards Data-Driven Recruitment and Professional Development, presented by Tony Zanders, founder of Skilltype, and Paul Bracky, Dean of Library Services at Gonzaga University. Hi, everyone. Thanks for joining us today. I'm Paul Bracky, Dean of the Foley Library at Gonzaga University in Spokane, Washington, and I'm here with my colleague and co-presenter, Tony Zanders, founder and CEO of Skilltype, to share some information about our collaboration in developing data-informed approaches to talent management, particularly in terms of professional development and recruitment. So the collaboration has its roots in an environment that I think we all find ourselves in, one that shapes the ongoing evolution and changes at academic libraries generally. There are a few trends that stand out to us, but I'm sure that there are also others that might occur to you. The overarching point here is that there are forces impacting academic libraries that are changing our needs for skills and expertise among our employees. In Gonzaga's case, this would include both our faculty librarians and our staff more broadly. So we see things like the emergence of increasingly digital workflows requiring new skill sets among our staff. The emergence of engagement-centered models of librarianship is necessitating the development of skills related to relationship building, digital scholarship, and embedded practices, among others. And commitments to equity, diversity, and inclusion certainly have implications for recruitment and how we approach learning initiatives. And these are just a few examples. For us at Gonzaga, we are also finding ourselves in a very different position than when we started this collaboration: it feels, for us at least, that COVID and all the responses to it have really intensified some of these trends and forces on the library and made some of our training and recruitment needs even more pronounced than they had been before. So with that, I'd like to start by introducing you to the partnership that we've had and give you an overview of the timeline. We've actually been engaged with Skilltype for two years now. In October 2018, our development partnership began, and from an internal perspective at Gonzaga, this has really been managed and led by members of our organizational effectiveness committee. One of the things that we have in our library is a set of committees that manage cross-cutting functions for the library.
And so we have one that is dedicated to things like organizational climate, training programs, these sorts of things, and the folks on that team have really been instrumental in this partnership on the Gonzaga side. Following our entry into the development partnership, in February of 2019, just after ALA Midwinter in Seattle where this photo was taken, we had an onsite kickoff here in Spokane at the GU campus. We were intending to have a rollout that was integrated into our planning processes this summer, but we had a little distraction come up that has really delayed some of our process. As a result, some of what we're going to be presenting here is probably a little more aspirational than we had hoped for when we submitted the proposal, but hopefully this will still be of interest. And we're currently in the process of rolling out Skilltype as part of our planning process. With that, I'll hand it over to Tony. Thank you so much, Paul. Nice to meet everyone virtually. As Paul mentioned, my name's Tony Zanders, founder and CEO here at Skilltype. What I'd like to share today is some of the accomplishments and developments we did achieve during this development partnership with Gonzaga and a cohort of about eight other academic libraries here in the United States. The first thing I'd like to focus on is one of the biggest challenges we faced when trying to rethink the way professional development and talent management is done using data and analytics: coming up with a shared descriptive framework. As we know, different organizations and different locales, even different states within a single country, all have different terms and colloquialisms to describe our expertise. The job families at each institution are different, for example. And so we wanted to start from the most essential unit we could, which we identified as the core competency frameworks from across the information profession. We analyzed over a dozen of these, outlined in our paper, which allowed us to create an ontology of over 600 skills. There's a screenshot here of some of the earlier ones that we began analyzing and distilling down into machine-readable terms. Once that project was complete and we had a baseline descriptive framework for expertise, we set onto the next challenge: there's a world of professional development and training opportunities that exists on thousands of Drupal and WordPress websites around the web. We set out to aggregate and describe these resources using this controlled vocabulary. And the counterpart to this is that each of the people in the organization at Gonzaga and at our other partner institutions is also using this controlled vocabulary to describe themselves. So we now have a common framework to describe ourselves along with the resources and opportunities for professional development. Fast forward to 2020: what this creates for us is an opportunity to describe our needs as an organization using the same terms that other institutions use, and to map those to all of the resources and opportunities for development. And I should mention that the reason we focused on professional development to start was directly influenced by COVID. Prior to everyone going onto a hiring freeze or some sort of exception-based hiring policy, we were more focused on recruitment, but we had to adapt to the environment.
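To illustrate the mechanics of a shared controlled vocabulary, here is a minimal sketch in Python that matches a staff profile to training resources through overlapping skill terms. The skill names and catalog entries are hypothetical, and this is an illustration of the general idea, not Skilltype's implementation.

```python
# A minimal sketch: rank training resources by overlap with a person's
# profile, where both sides use the same controlled vocabulary of skills.
# All names and data are hypothetical.
profile_skills = {"data visualization", "metadata", "user research"}

training_catalog = [
    {"title": "Intro to Tableau",      "skills": {"data visualization", "data analysis"}},
    {"title": "Usability Testing 101", "skills": {"user research", "service design"}},
    {"title": "Grant Writing Basics",  "skills": {"fundraising"}},
]

# Keep only resources that touch at least one profile skill, best matches first.
matches = sorted(
    (r for r in training_catalog if r["skills"] & profile_skills),
    key=lambda r: len(r["skills"] & profile_skills),
    reverse=True,
)
for resource in matches:
    print(resource["title"], "->", sorted(resource["skills"] & profile_skills))
```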
Where this is headed, and what this allowed us to focus on, was building a model for the future of professional development, moving away from what we would describe as an independent study model, where everyone in our organization is on their own to identify what skills they have, what they want to learn, and what they should be learning along their career pathway. We want to evolve from that to more of what we call an AI-powered cohort model, where you have multiple groups influencing a person's career direction. In this case, training providers like conference organizers and professional associations that are producing trainings for people. The second column here looks at employers, in this case the Foley Library at Gonzaga, and the managers and supervisors who have goals they want to meet for their direct reports; these should also be shaping a person's professional development planning and their career pathway. And then lastly, the community, because many people, even at this particular conference for assessment, are all focused on the best practices for assessment around the world. So the community itself can also help to recommend and shape the skills and expertise that I should be developing. This is where we're headed with this model that began during a conversation between Gonzaga and Skilltype. Back to you, Paul. Great, thanks. So I wanted to share some more information about why Skilltype from Gonzaga's perspective, and then some more details of what we've done and what we hope to do next. The first question that I'm asked a lot is why we chose to partner with Skilltype, and I think that there are three reasons. One is really connected to our institutional mission; there are also strategic and practical considerations. From a mission consideration, Gonzaga is a Jesuit university, so we really care about development of the whole person. And from my perspective as a library leader, that means really providing opportunities for our staff to develop and be their best selves. We're also very much interested in developing Foley to be a learning organization, learning new skills and using them to transform and evolve what we do. And we're also wrestling with some practical considerations. Spokane is not a place where a lot of external professional development comes physically. So in a pre-COVID world, we were really having to think about how to get people to professional development opportunities in a physical sense, and also how to navigate the increase in online opportunities and connect our employees effectively to those opportunities that are out there. It's part of the reason we were so excited about the lyricist learning integration. Next slide. So the way we're trying to roll this out is integrated into our planning process. We have a fairly integrated planning process here, where we try, on the one hand, to connect library goal setting on an annual basis, which is connected to our library strategic plan, of course, to all sorts of university-level processes and university plans, but then also to connect down into unit goals and planning and individual learning plans. And within this context, there are really three levels where we have conversations about talent management and professional development, and then we also have conversations at the level of individual performance management.
So everybody in the library develops an individual learning plan for the year, and they integrate the learnings from it into their work. We ask our units, as they're coming up with goals for the year, to identify the needs and the skills that are required to carry those out. And then we use those to have a conversation about where we either need to develop new skills or engage in recruiting processes. So where are we in the process of implementation? First, as I mentioned, because of COVID we were delayed in rolling out the Skilltype part of this. But where we are with Skilltype is that, first of all, employees have been invited to create profiles. At this point, we're not requiring the use of Skilltype, but we are encouraging it. Most of our employees have created profiles and used those as one, not the only, but one of the resources available to them in developing their individual learning plans, along with GU-specific training materials. So that is in process right now. But what we're really hoping to do is to leverage this work into making data-informed and assessment-driven talent management decisions. For us, Power BI is our analytics platform at a campus level, and so we're really looking to use that to analyze data from Nuventive Improve, which is what we use to manage our assessment and planning processes, together with Skilltype data. Knowing a little bit more about the skills and interests of our employees, we think, is really going to help us provide opportunities for employees to gain some of the skills necessary to contribute to the library moving forward, but also, perhaps, to identify in some cases unknown interests and give our staff and our faculty opportunities to do things that they might not otherwise have had the opportunity to do. And the last thing here that I really wanted to mention is that we're really hoping to use this for gap analysis: being able to identify skills that are really needed in the library in the planning process, and then feeding that into, really, human discussions about where we invest in developing the skills of our current employees, which I think is always our first priority, versus external recruitment and what that external recruitment needs to look like. So with that, I think we are at time. Thanks for your attention, and we'll take some questions.
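The gap analysis Paul describes can be thought of as simple set arithmetic over the shared vocabulary. Here is a minimal sketch in Python with hypothetical skill names; the real analysis at Gonzaga is planned in Power BI over Skilltype and planning data rather than code like this.

```python
# A minimal sketch of a skills gap analysis: compare the skills the unit
# goals require against the skills staff report having. Data is hypothetical.
needed = {"python", "data management", "digital scholarship", "copyright"}
staff_skills = {
    "staff_1": {"python", "metadata"},
    "staff_2": {"copyright", "instruction"},
}

# Union of everything the current staff can cover.
covered = set().union(*staff_skills.values())
gaps = needed - covered

print("Covered needs:", sorted(needed & covered))
print("Gaps (develop internally or recruit):", sorted(gaps))
```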
Great. Thank you, Tony and Paul. So if you have questions, please use the Q&A function and I will read the question for the speaker and the audience. So here's one: how have the staff received this? You said the majority have been creating profiles. Any success stories of being able to find good training that they wouldn't have found any other way? You know, the stories that I've heard about most really have to do with the lyricist learning partnership. We had a lyricist learning subscription prior to that integration, and folks were just frustrated by the amount of information there that wasn't organized in a way that was tailored for them. And so I've heard quite a few stories of people being able to identify, I mean, it's really a range of training materials, but things that they felt were much more tailored to what they were really interested in. And the user research we've done, not with the executive team at Gonzaga but with the users themselves, is along the same lines. Prior to Skilltype, they would have to go to every individual website, whether it's a YouTube feed for a conference or a training portal for a vendor. Now it's being aggregated in the same place, which would be overwhelming, except that they're able to slice out the trainings that are relevant to what they're interested in, because everything is matched based on the interests on their profile. So instead of having to browse thousands of videos and articles, the needles in the haystack are being presented, and that's most of the feedback that we get. Great. Another question: have there been issues with HR regarding staff-level compensation and training for activities beyond their compensation level? That's probably an issue in some union-based environments. Yeah, so first of all, we're not union-based, so I think that we've had some latitude with that. But I would also say that for us, the issues arise not so much from the training itself as when position responsibilities change as a result of training, and that's always an ongoing conversation between us, HR, and the employees. Yeah, I can speak to that a little bit. We have clients who are unionized and organized in that way, and the platform provides access to everything that we do. So when we work with a training provider to load their material into Skilltype, we are getting access for everyone. Our philosophy is more in line with the diversity, equity, and inclusion conversation, which is that instead of only giving access to five people for conference attendance or registration, we need to be rethinking the access model to make a more equitable relationship between a training provider and an organization. We have the same approach when it comes to negotiating with publishers to get access for all patrons; we're just borrowing that same concept when it comes to providing training and professional development. Great. And a last question: is there a cost for individuals to use Skilltype, or is it intended for organizations to subscribe to for their employees? Sure. It's free for anyone to sign up and use on their own; people can just visit the website and request access, and we onboard people every week. We earn revenue from institutions who want access to the data and analytics across their organization. So it costs for institutions, but it's free for individuals. That's a great question. Thank you, Tony. Wonderful. Thank you, Tony and Paul. It's been insightful, and good to know that anybody can sign up at no cost. So, next up, we have a presentation with the title Assessing the Effectiveness of a University Library's Strategic Initiative to Foster Data-Informed Decision Making, presented by Dana Peterman, Digital and Distinctive Project Manager at the UCLA Library, and Sharon Shafer, head of Library Search and Assessment. Hello, my name is Dana Peterman. I am the Digital and Distinctive Project Manager at UCLA Library, and this is... Hello, I am Sharon Shafer. I am the head of Library Search and Assessment, also from the UCLA Library.
And we're assessing the effectiveness of a university library strategic initiative to foster data-informed decision making. We are in the process of instilling a culture of assessment that encourages data-informed decision making. We've been using a three-pronged approach that relies on engagement and learning and change, spreading the message, and creating buy-in. Our group, ACT, or the Assessment for Change Team, was supported by library leaders in using public venues such as all-staff library meetings. We also had meetings with individual departments and divisions to reframe assessment as a positive term, addressing head-on fears of punitive associations with the term. We educated people to understand the value of assessment for the organization and as a means of growth and potential resource capture. We used two-way communication in our education and workshops: as we educated our colleagues about assessment methodologies, we listened to them too, doing needs assessments along the way that made assessment our own. We developed a tool we called Data Lake, created in Confluence, which is sort of a wiki. Data Lake acts as a planning platform and an abstracting and indexing repository, or inventory, for data, tools for assessment, and locally created reports. Most importantly for the organization, Data Lake serves to centralize assessment and its importance to the organization. But we didn't know how it was really going. How far down the road had we gotten in creating a culture of assessment? In order to find out, we needed a model that would allow us to assess our assessment. Given that data-informed decision making was our goal, the logical choice was to view this through a data sciences maturity model. So a maturity model, what is it? Simply put, a maturity model is an aid for an enterprise to understand its current and target states in a particular competency. Think of a maturity model as a roadmap, like Dana said. For the library, we wanted to look at our competency in using data-informed decision making. We had a need, a challenge, and a solution. Our need: what practices should we measure? For example, the practice of alignment. Does alignment follow organizational values and goals? Our challenge: a literature review revealed many types of maturity models, but not one was a good fit. Most maturity models we found use levels of competency based on the consistent application of behaviors that range from least data-based to most data-driven, but the models did not use examples that crossed over well into the library setting. In addition, the connection to data-driven decisions shortchanges what libraries value more, which is data-informed decision making. Our solution: we created a maturity model for data-informed decision making in the library. We'll look at the maturity model that we came up with in an upcoming slide. We used our maturity model to look at multiple sources of data in order to create a larger picture of our place on the path to a culture of assessment. We created a survey to capture the thoughts of the organization as a whole. We interviewed library leaders to get an impression, based on their experiences, of where we are in developing a culture of assessment. We looked over the products that have been created over the past two years, plans for assessments, and the reports that have been created showing the outcomes and recommendations of assessment. And we judged their quality using two simple rubrics.
As we reviewed assessment plans and reports, we recorded our impressions about their quality, both from the forms themselves and from direct observations and consultations as we worked with library colleagues. We then sent out a survey to all library staff. The survey questions measured where we are on the maturity model for data-informed decision making. Let's go ahead, Dana, and switch to the maturity model that we created and used. Here it is. Note that there are three columns. Column one is the practice being measured; remember the example practice of alignment that I mentioned. Column two is the definition. So if we scroll down to alignment, that's the practice we're trying to measure, and column two reads "follows organizational values and goals"; that's the description of alignment. Column three is the associated survey questions to measure the practice of alignment, and you can see a question there that was on the survey. A Likert scale of one to five was used in many of these survey questions, where one would be strongly agree and five would be strongly disagree. Dana, if we could come back to the slide. This chart shows the maturity model practices and where their associated survey responses skewed. Responses could skew left, as in strongly agree, which means we're a bit further along on our maturity model roadmap. Responses could be symmetrical, that middle column, which is a sort of "meh," a score of three on a one to five; think of it like a C. Responses could skew right, strongly disagree, which means we're not as far along on the maturity model roadmap as we'd like to be. Our recommendation is to concentrate our efforts on those practices whose responses skewed to the right. So let's dive a little deeper into this. As we took a look at the questions, what we tended to find was that people liked their units better than they liked the division or the library as a whole, so responses tended to trend more towards a positive feeling towards their unit. Just taking a look at the data here, you can see the types of questions that skewed positive, including "UCLA Library workers in my unit ask questions of data to guide their work towards fulfilling unit goals." A more negative response went to "UCLA Library allocates resources by making data-informed decisions." So you can see by these blue and yellow columns what tended to be positive and what tended to be negative as we did a quick analysis of those things that were of value. In the all-library-staff survey, we also asked an open-ended text question: do you have any suggestions for prioritizing areas of assessment in the UCLA Library over the next two years? Analysis of the responses led to organization by categories and subcategories, which were then crosswalked to priority areas of assessment for UCLA Library to concentrate on. Of note, one trend in the responses was a desire for leadership: staff were looking for a prioritization of assessment projects and the rationale supporting decisions.
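As a sketch of the skew analysis just described, the bucketing reduces to a few lines of code. The presenters don't specify their cutoffs, so the tolerance here is an assumption, as are the sample responses.

    # Minimal sketch of classifying Likert responses (1 = strongly agree,
    # 5 = strongly disagree) as skewing left, symmetric, or skewing right.
    # The 0.5 tolerance and the sample data are illustrative assumptions.
    from statistics import mean

    def classify_skew(responses, tolerance=0.5):
        m = mean(responses)
        if m < 3 - tolerance:
            return "skews left (agree: further along the roadmap)"
        if m > 3 + tolerance:
            return "skews right (disagree: concentrate efforts here)"
        return "symmetric (a sort of C)"

    survey = {
        "alignment": [1, 2, 2, 3, 1],
        "resource allocation": [4, 5, 3, 4, 4],
    }
    for practice, answers in survey.items():
        print(practice, "->", classify_skew(answers))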
We interviewed six library leaders from different divisions, with widely different perspectives, and asked them two basic questions about how data-informed decision making had changed in the past two years. One interesting quote had to do with transparency, which I thought I should highlight here: "After years spent in academia, it's clear that transparency outside the library means obvious and simplistic. Transparency has value in contexts where the library is not trusted. It's not clear to me that the more we try to be transparent, the more we are trusted." We did a review of assessments that were conducted or planned by library staff. We used a rubric with two parameters: did they understand what they wanted to do in that assessment, and did they identify appropriate data for what they wanted to do? We also observed, watched, the staff filling out the assessment forms. Our findings? Not too bad, but we did note that a lot of assistance was needed to clearly state an assessment, and users also had trouble identifying appropriate data. We also took a look at reports; these are reports that show recommendations or outcomes based on an assessment. What we found is that people tended not to do as well on reports as they did on assessments. There was often a confusion of data with reports, and often a weak connection to any specific assessment goal. So, the findings from the staff survey responses: on our roadmap to foster data-informed decision making, we need to concentrate on improving these characteristics here. We heard a great deal about decision making; the library as a whole needs to make those decisions more transparent, especially by supplying appropriate rationale for decisions. There was also the trend where unit-level actions were more positively viewed than all-library-level actions; a recommendation would be to continue our centralization efforts. When we interviewed the library leaders, we categorized all of our findings among our particular practices, and what we found was that the primary concern of the library leaders was the idea of collaboration, campus-wide and system-wide, and primarily in the area of readiness. Even as to resource allocation and transparency, it had a lot to do with data: being able to analyze data, being able to use data, and being able to have data right on hand at the point of need. Our findings: we need to continue our work with library staff so they learn how to specify assessments clearly, gather and use appropriate data, communicate outcomes and recommendations, understand the definitions of data, report, and assessment, and continue harmonization efforts surrounding data-informed decision making. So we welcome all your questions. Our names are Dana Peterman and Sharon Shafer; contact us here or on the web. Great, thank you, Dana and Sharon. So there is a question: when developing the quality maturity model, this person, who has actually worked with maturity models before, found that there were differences in rating between those lower down the hierarchy, those in the middle of the hierarchy, and those in your positions. Did you find a similar stratification? I think we found that. Go ahead; Dana first, and then I'll jump in. We actually found it very difficult to qualify or quantify levels when we were surveying people and looking at really kind of affective responses to a survey concerning where we were on the roadmap. So the levels didn't work so well. We did try to employ them, certainly in the area of talking about collaboration. So maybe Sharon has some more. But I feel that this person is asking about the level of the staff responding, in terms of whether the person is in a leadership role or a staff member. In terms of the maturity model and its creation, I would say that we found a difference in the practice of transparency, as Dana pointed out. There was also a difference in the practice, in the maturity model, of collaboration.
I think that the leadership was more interested in higher levels of collaboration: collaboration with the office of the registrar versus collaboration with the acquisitions department. But there might have been more. Yeah, we had only 52 responses, and 10 of them were library staff; the other 42 were librarians. Yeah, that's also a good point. So I wonder whether people feel more comfortable within their unit because they collaborate intensely within the unit, while there's a desire from the leaders to see more collaboration across the university or across the campuses of the UC system. I wonder whether the leaders see the need to have that level of comfort across units and across the system. Any comments on that? I think they would love that. But I think it's basic; I don't want to devolve to this idea, but it's probably true: you're happier with those things that you're familiar with. So it seems to me like this is kind of a no-duh response when people say that they trust their unit more than they trust the greater organization. But as some library leaders pointed out, really intelligently, this is really a matter of perspective. People have this sense, as some have said, that there's so much more to a decision than there really is, because they think that there's hidden data or hidden information when there isn't. So I think a lot of this has to do with being in a large organization and developing and maintaining and fostering trust. Yeah, I'll echo what Dana said. In our one-on-one interviews with library leadership, the word perspective came up quite often. And I wonder whether that word is also reflective of an earlier presentation today, when they talked about mastery experience? Absolutely. Yeah. I love that presentation. Excellent. Wonderful. Thank you very much, Dana and Sharon. I'm going to move forward here. Our next presentation is ClimateQUAL and people-focused strategic planning, presented by Susanna Cowan, coordinator for library research and assessment at the University of Connecticut, and Lauren Slingluff, associate dean at the University of Connecticut. Hello. Thank you so much. And first let me say, I don't think I have ever heard such an excellent pronunciation of my last name before, so thank you. What a perk for starting off. So yes, I am Lauren Slingluff. I'm the associate dean at UConn Library, and I'm joined by my colleague Susanna Cowan, the coordinator for library research and assessment at UConn Library. We'll be discussing our experience with ClimateQUAL and the assessment that was conducted for the library strategic planning process, particularly the people-focused approach that we took. Susanna and I served on the strategic framework steering committee and supported the running of ClimateQUAL in the library. Next slide. Thank you. So the UConn Library ran ClimateQUAL in the fall of 2019, at the same time that we were undertaking a strategic planning process, referred to as a strategic framework, at UConn Library. Those parallel institutional-level priorities resulted in two major assessment initiatives by the library: the implementation of ARL's ClimateQUAL survey and data gathering in support of the library strategic framework process.
These two processes ran simultaneously, and Susanna and I can share that at first we did have concerns about confusing library staff or having the bandwidth to run them concurrently. Ultimately, though, we found that the juxtaposition of the two really added richness to both. The emphasis on library staff in ClimateQUAL kept the need to listen to individuals first and foremost in our minds when we planned our strategic planning assessment. We prioritized focusing on our stakeholders while reflecting constantly on our own organizational strengths and challenges. Next slide, please. Thank you. So ClimateQUAL ran during three weeks in October 2019, which I'm sure for many of you feels like ages ago; I know it does for me. Simultaneously, the strategic framework steering committee worked through the data gathering stage of developing a strategic framework for the library. ClimateQUAL is ARL's organizational climate and diversity assessment. It's a well-regarded, highly vetted survey-based instrument which uses quantitative measures to assess an institution according to various climates, or scales, measuring organizational climate and attitudes. By contrast, the qualitative methods developed for the strategic framework process comprised a survey and a series of conversations based on just three intentionally broad, open-ended questions that were intended to solicit a wide range of responses from faculty, students, and staff. So ClimateQUAL reflected in its own right a focused commitment to organizational health, but it also provided, for the purposes of the strategic framework process, a benchmark for assessing the culture and climate of the UConn Library and highlighting our particular strengths and challenges. These areas of focus provided insight into what existing strengths we could leverage in our strategic thinking and where organizational health might need to be bolstered in planning for the future. The emphasis on people for the strategic framework meant that we devoted considerable time as a committee to identifying our stakeholders and effective mechanisms for hearing from them. As part of the data strategy for the framework, we developed three open-ended, qualitative questions that were used on the survey, in focus group discussions and open office hours, and as prompts for whiteboard conversations. In our stakeholder selection process, we felt it was essential that we hear not just from library users, our advocates, and our supporters, but also from UConn community members who never set foot in our spaces. To this end, we put whiteboards in different spaces across our various campuses, allowing people to leave post-it notes with their responses to our three open-ended questions, color-coded by user type. Again, it's self-reporting, same as the survey we did, but patrons could select their type, whether faculty, staff, or student, on a color-coded post-it note, so that we were able to code their responses by patron type and receive open-ended feedback from them. The questions we asked across these various formats were: What does the library mean to you? If the library had unlimited resources, what would it look like in terms of collections, spaces, services, and staff? And if the library had extremely limited resources, what collections, spaces, services, and staff would be most critical to support? These were carefully crafted to be open-ended and to not apply our own value judgments to them.
The first question captured what people thought of the library and what they associated with it. The second was intended to encourage expansive future thinking, while the third helped identify those things that were most vital to our community members, the things they would want to maintain at all costs. Additionally, we added a range of other data sources to the framework process that bridged quantitative and qualitative data, rounding out our own data with sources such as ARL and IPEDS metrics and results from the Ithaka S+R faculty survey. And with that, I will turn it over to Susanna. Thank you, Lauren. I'm just making sure I'm unmuted here. As we moved into the strategic framework process, it was quickly clear that ClimateQUAL would be important in informing its work. With the blessing of the ClimateQUAL implementation group, we made the decision to give the strategic framework steering committee a preview of the complete ClimateQUAL numerical report shortly after receiving these results in November 2019. This was before completing the detailed analysis of the ClimateQUAL findings, and well before the ClimateQUAL formal executive summary was completed or the results shared with staff at large. Discussing ClimateQUAL findings with the group, with some informal preliminary summative findings, helped frame the results of the thematic analyses we did of the survey, dialogue, and other data. As Lauren mentioned, findings from ClimateQUAL were key to underscoring particular strengths and challenges in our organization from the perspective of our staff. This in turn was helpful in guiding the development of the strategic framework itself. As we crafted the language of the framework, we bore in mind the insights we had gleaned from ClimateQUAL, across all three areas of the developing framework; these would eventually become connect, empower, and engage, and we'll show you a slide of that in a bit. Especially as we refined the empower section, which focuses on leadership and collaboration, inclusiveness, and leveraging the skills and knowledge of our staff, strengths in ClimateQUAL showed us to be committed to diversity and inclusion, engaged in our work, and empowered by our functional units. We were able to incorporate these strengths into the final framework. ClimateQUAL in essence kept us honest and rooted, reminding us to build our vision of the library's future work on the library we were, with a clear understanding of ourselves in our own eyes, not an idea of ourselves removed from that grounding. In turn, the strategic framework influenced the framing and language of the executive summary report of the ClimateQUAL findings. The primary purpose of the report was, of course, to highlight our notable strengths and also areas in need of further investigation and focused improvement. What the strategic framework process provided was an understanding that both strengths and concerns were strategic, and that giving them attention, that is, leveraging our strengths and bolstering where we fell short relative to other organizational characteristics, was essential to the library's forward motion. In other words, the strategic framework further grounded the purpose behind ClimateQUAL: to build on our strengths and address our challenges so that we can do great work. The strategic framework therefore provided an articulation of our future as an organization and thus also became a vehicle for introducing ClimateQUAL to our staff. And hindsight is 2020. So this is the last picture slide of our presentation.
And perfect aim comes with practice. You see pictured here me on the left and Lauren on the right at the celebratory axe-throwing excursion the strategic framework steering committee took at the end of the process, and I just want to say, full disclosure, I think that was my single bull's-eye during our time there. Discovering that the implementation of ClimateQUAL and the strategic framework process were mutually informing was an accident, although it would make us sound really smart to say we planned it that way from the start. We realized how much they would inform each other very early in the work of the strategic framework steering committee, particularly as we worked to define stakeholders and wrestled with the degree to which staff visions of the library should inform both the framework process and its outcome. Strategic planning requires significant input from key stakeholders, and library staff were of course a core stakeholder as we devised means to gather data specifically for the framework. Having ClimateQUAL run almost alongside the strategic framework process (both took place in fall 2019) increased our confidence that the values and vision being expressed in our framework data gathering methods were representative of the culture and beliefs of our staff. Put differently, our strategic framework data gathering drew initially from, as Lauren said, three main sources: existing institutional data; open-ended question asking via survey, whiteboard, and guided discussion; and national benchmarking data. Adding ClimateQUAL as a powerful, mostly quantitative source of data that provided a macro view of staff perceptions of our organization was a tremendous boon to that process and added context to the range of responses that emerged from our qualitative work. ClimateQUAL is a complex assessment instrument, and its influence on our organization is only just being tapped, both as a tool, meaning the fact that we were running it at all, and as a rich distillation of staff perspective on a range of issues related to our library as an organization. It became an essential component of the feedback, analysis, and reflection and articulation stages of the framework. It is difficult now to conceive of one process without the other. More wisdom from the trenches, I call this last bit. Both processes also turned out to be gifts to our organization. For us, just like many of you, Friday, March 13 was the last day in the office for most of our staff. I mean, who knew Friday the 13th would have such resonance, and we just had another one. Just over a week before, on March 4, we had formally shared the strategic framework during an all-staff forum. Although, due to the pandemic, it would take longer to get ClimateQUAL disseminated, by March we had drafted the summary report. Taken together, the processes put us on strong footing as we entered the months of dislocation and remote work. Both processes also refocused our attention on diversity in the broadest and deepest sense. We scored high on most of ClimateQUAL's diversity scales but came out of the assessment aware that a broad commitment to diversity neither meant we were fully diverse nor that we had captured the Voices, capital V, of our least represented populations and perspectives. Like UConn at large and higher ed in general, we find ourselves reassessing DEI initiatives not because any evidence shows us having failed in any particular area.
In fact, we have data suggesting this is a strength, but because both local and national data have invigorated a commitment to dig deeper to ensure diversity, equity, and inclusion work is ongoing, sustainable, and impactful. And then we'll finish just by putting up on the screen the one-liners for our strategic framework and its three core areas, connect, empower, and engage. We've put the link there, and that of course will be available in the slides, and I will just finish there. Wonderful, Susanna and Lauren, thank you so much. Really wonderful to see this effort happening right before the pandemic came for you, and now having a sort of deep experience with ClimateQUAL. I have a personal question here. As you said, you have good performance scores on diversity. One of the things we know from the salary survey data you are reporting is that the average salaries at UConn are amongst the highest among ARL libraries. I haven't looked at the last couple of years of the data, but I assume this trend continues. At the same time, we know historically in the ClimateQUAL scales that the scale related to distributive justice is one where most libraries tend to score low, and distributive justice partly has to do with issues of compensation. Have you maintained your relative strength in that scale? I'll let Susanna dive a little bit into how we did on the scale for distributive justice, but I do think it's important to note that part of the reason why UConn is always so high for salaries is because fringe rates are included in the salary lines when we report our data, and the state of Connecticut has a very fun problem known as an unfunded pension liability that makes our fringe rates just about the highest in the country. So I can say that we see high compensation for our staff, but we look particularly inflated when you look at the numbers because of the unfunded pension liability. Thank you. Susanna, do you want to... you're muted. You're still muted. Just making sure; I was going to speak, honestly. If I recall, and I was just looking at this data recently, that subcategory of distributive justice, under the organizational justice category, was one where we also suffered a little, although it was not by any means one of the areas we highlighted as most in need of attention. And I was actually going to echo what Lauren said: salaries are a complex issue at UConn in general, and our range of salaries tends not to reflect the real story underneath, because of a quite exceptionally high fringe rate at the university. It is in fact such an issue at the university that it continues to be the thing mentioned by our president and provost when talking about how to get out from under financial distress; it is perhaps the single issue, in addition to the pandemic, that our university continues to fight at the state level.
Yeah, and you know, the situation I think we're describing with the fringe benefits is a Connecticut-wide state employee situation, and it sort of contributes to the underlying state budget deficit, right? Yeah. Thank you. Since you mentioned the pandemic, one more question before we move to the final presentation. If you were to do ClimateQUAL again now, based on the experience we've had the last few months, would you have seen any of these scales going down or going up? Which ones would you guess would change, and in what direction? I think Lauren could probably speak interestingly to this. I'll just give my take on where my mind went first, and Lauren's mind probably went somewhere too. What struck me first, although you asked about areas of concern, was how very much our areas of strength have been so evident to us, particularly the benefits of teamwork area, which is absolutely that, and task engagement; both are very, very high for us. We're seeing over and over again the strengths of our units: whatever else is going on, the distress about bigger stuff, concerns of course about what's going on at the university level and the state level and the local and personal level, again and again we're hearing shout-outs from unit members to their colleagues and expressions of tremendously strong and supportive relationships in those units. So that absolutely supported what we found in ClimateQUAL before the pandemic. In terms of our areas of concern, sure. One of our areas of concern is psychological empowerment; I imagine that's taking a bit of a further ding right now. I think the plan has always been to run it again, and I'm not sure what to say at this point, four-ish years out someplace, I'm not sure. And I think we can probably get ahead of that data by already knowing that it is not only an area of concern but has probably taken a little further hit during these times. And I just couldn't agree more with Susanna. I think we would see a lot of the same trends, just amplified. Possibly our main strength of task engagement: I think all of us are dealing with the struggles of working from home, and you know, my son is distracted by Winnie the Pooh right now, so I think we all see that task engagement is more challenging. And I imagine that for concerns about distributive justice and support for teamwork, where we were a little bit lower, those would be even lower, as that continues to be challenging in this remote work environment. Thank you both, Susanna and Lauren. And, contributing to the Library Assessment Conference, I don't think I can leave you without mentioning how difficult this year was with the loss of Brinley Franklin, who contributed so much to the library assessment community over the years. He was in the first cohort that was recognized for their contributions to library assessment, so I am really glad to see you continuing the tradition of being present here with us today. Thank you. And I will give credit, in terms of the way I pronounced your last name, to my ARL colleague Angela, who asked me to review the names before this session. Thank you so much. So we are at our final presentation, and its title is optimizing staffing through quantifying workload per position, presented by Jennifer Livingstone, independent data analytics consultant, and Vicky Thompson, grants manager at the Enesmuch Foundation. Jennifer and Vicky?
Perfect. So welcome to our presentation. It's entitled optimizing staffing through quantifying workload per position, and yes, I'm Jennifer, and I'm here with Vicky Thompson. Hi, so I'm Vicky. This project was initiated to help facilitate, and add transparency to, the process of allocating staff across the library system. Our library system is 19 branches with roughly 450 staff. Adding data helps increase evidence-based decision making in the process, but it also provides leadership with evidence to share with branch managers on why each location was allocated what. Secondly, at a local level, branch managers needed to determine how to best staff various positions at optimum times based on workload; this is typically done based on observation and experience. Next slide. It doesn't seem to be switching; I think my screen's frozen. I can share mine. Okay, sorry about that. So because we had a two-layered problem, we had to address it with a two-pronged solution, where we basically wanted to add visibility through data at both levels. As Vicky mentioned, the first challenge was that library branches do compete for a limited number of staff and resources across the system. To address this first problem, we started a staffing study where we looked at quantifying the actual workload per staff position, and we were therefore able to create a recommendation for which positions at which branches could use more or less FTE, full-time equivalents, over time. The second part of the solution was then addressing visibility at the local level, and to do so we introduced the tool Tableau, where we allowed local managers to see what times of the day and week different customer activities were at their busiest, so they could then schedule their resources accordingly. So now we'll walk you through each of these solutions, starting with the system-wide staffing study. If you notice down here, this project is built on the premise that, in theory, staff with the same job title at different branches should have the same workload. We're in a system with 19 different branches, each saying, I guess, that they're busier than the others, and so we needed a way to objectively measure whether or not that was the case for different titles. We looked at what data we had available to us, and I've listed just a few of the different components here; these aren't all of them. We ended up using over 30 different variables. Some of them were, of course, information from our ILS: that includes volume of checkouts, returns, holds, new cards, new items, and renewals. We have access to door counters that track entries and exits as well as occupancy. We have access to technology help resources, so we can track the number of computer uses, printer uses, copies, and faxes, the assumption being that a percentage of those are going to require staff assistance and therefore staff time. We also track reference questions at intervals throughout the year, so we used the numbers on those. And we have community data: information on the estimated population that each individual branch serves, as well as something called a deprivation index. That is a variable that takes into account various different metrics, things like the prevalence of homelessness, the prevalence of low-income households, a lack of availability of transportation, to create one measurement that we can then use to assess how deprived or not a certain community is. And we were able to map that to our different branches based on census tracts.
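The presenters don't spell out how the deprivation index is calculated; one common construction, sketched here purely as an assumption, standardizes each community metric across branches and averages the results into a single score per branch.

    # Illustrative construction of a single deprivation index from several
    # community metrics: z-score each metric across branches, then average.
    # The formula and the sample values are assumptions, not the study's method.
    from statistics import mean, pstdev

    def zscores(values):
        m, s = mean(values), pstdev(values)
        return [(v - m) / s if s else 0.0 for v in values]

    def deprivation_index(metrics_by_branch):
        """metrics_by_branch: {metric_name: [value for each branch]}.
        Returns one score per branch; higher = more deprived."""
        standardized = [zscores(vals) for vals in metrics_by_branch.values()]
        return [mean(branch_vals) for branch_vals in zip(*standardized)]

    metrics = {
        "homelessness_rate":     [0.02, 0.05, 0.01],
        "low_income_households": [0.20, 0.35, 0.15],
        "no_vehicle_households": [0.10, 0.25, 0.08],
    }
    print(deprivation_index(metrics))  # one index per branch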
So now we had our different inputs. Some of the hard part was actually deciding how you weight all these different metrics against each other, and for that we relied heavily on our staff and our managers. There were many workshops where we tried to weigh how much time each position spends on the different activities versus the others, and we were therefore able to come up with activity weightings that we then assigned to each activity for each position. An activity could of course have a different weight depending on the role of the staff member doing it. Some of these were then weighted again based on the population and community deprivation, the idea being that a librarian with outreach or programming responsibilities may have a slightly different workload depending on the community and population served. All that said, you're then able to calculate a workload per FTE, per position, per branch, which gives a more objective way of comparing the relative workloads per FTE for each position at each branch, and we were therefore able to say where we think staff need more resources and support. The next slide gives just a very quick example of the output, and these are just dummy numbers, for illustrative purposes only. You can see here that this output would say that at branch B, pages are slightly understaffed; they've got a slightly higher workload per FTE at that branch. For clerks, it would say that branch C is understaffed and branch B, in this case, is overstaffed, and for librarians it would say that branch B is slightly overstaffed. I do want to emphasize that I've noted here that these are perfect-world recommendations, because it was never the intent of this project to hire and fire staff solely based on the results of this study. The idea was more that the study provides a roadmap, so that as opportunities do arise, say in the next fiscal year we have additional funding and are able to get an extra headcount, we could then use the study to see which position at which branch would be best served. Or if an opportunity came up through natural attrition, we could decide that perhaps we won't just refill that same position at the same branch; perhaps that position could be better used somewhere else, and maybe we'll even convert that position from one to another if we notice a particular job category is lacking.
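Numerically, the calculation Jennifer describes comes down to a weighted sum of activity volumes divided by FTE. A minimal sketch follows, with made-up weights and volumes; the study's actual weightings came out of the staff workshops.

    # Minimal sketch of workload per FTE per position per branch: a weighted sum
    # of activity volumes divided by the FTE in that position at that branch.
    # All weights and volumes below are dummy numbers for illustration only.
    def workload_per_fte(volumes, weights, fte):
        """volumes: {activity: count at this branch}
        weights: {activity: weight for this position (from staff workshops)}
        fte:     full-time equivalents in this position at this branch"""
        return sum(volumes.get(activity, 0) * w for activity, w in weights.items()) / fte

    # Comparing the hypothetical 'page' position at two branches:
    page_weights = {"checkouts": 0.2, "returns": 1.0, "holds": 0.8}
    a = workload_per_fte({"checkouts": 9000, "returns": 8000, "holds": 1200}, page_weights, fte=3.0)
    b = workload_per_fte({"checkouts": 7000, "returns": 9500, "holds": 2000}, page_weights, fte=2.5)
    # A higher workload per FTE suggests that position at that branch is
    # relatively understaffed.
    print(f"Branch A: {a:.0f} per FTE, Branch B: {b:.0f} per FTE")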
So that's the first area of the solution. The second is the customer activity dashboards. For these, we were aiming to help the local library managers and supervisors, who were responsible for working with the staff they had, to make sure that they were scheduling most effectively to best meet customer demand. As Vicky said, before this they were working primarily off of observation, and they didn't have many finite metrics to look at regarding what times different activities were busiest in the branches. All the data I mentioned we used in the staffing study is available through various data sources, and so we connected these to a platform, a tool called Tableau, where we were then able to create different dashboards that the managers could interact with, to ask and answer questions for themselves about what their library activity looked like. This, to start off with, is just a really high-level overview; this is where we want our managers to start. It doesn't have any numbers; it's purely color-coded. The dark blue is the third of the time that the library is most busy for that activity, and the activities are here on the right. The light blue is color-coded for the middle third of busyness, and the gray is for the least busy third of periods. We have the days of the week here on the left, and the hours of the day going across the top. The library manager can then filter for their branch, and they can filter for a custom time period, one week, two months, a whole year, whatever range they're looking to analyze, and get, at a quick glance, an impression of how busy their library is. You can see here it's pretty evident: it looks like from four to five on weekdays the library picks up; it looks like the first hour of opening on most days is very busy; and it looks like activity changes on the weekends, where the early afternoon periods, one to two p.m. and two to three p.m., become busy. So the manager can then assess their scheduling accordingly.
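As a sketch of the color-coding just described, each (day, hour) cell can be bucketed into the busiest third, middle third, or least-busy third of observed counts. This is an illustration with dummy data, not the actual Tableau calculation behind the dashboard.

    # Minimal sketch of tercile color-coding for a busyness heat map:
    # bucket each (day, hour) cell by which third of observed activity it falls in.
    # Dummy data; not the production Tableau logic.
    def tercile_buckets(counts):
        """counts: {(day, hour): activity count} -> {(day, hour): shade}"""
        ordered = sorted(counts.values())
        n = len(ordered)
        low_cut, high_cut = ordered[n // 3], ordered[(2 * n) // 3]
        def shade(v):
            if v >= high_cut:
                return "dark blue (busiest third)"
            if v >= low_cut:
                return "light blue (middle third)"
            return "gray (least busy third)"
        return {cell: shade(v) for cell, v in counts.items()}

    checkouts = {("Mon", 10): 40, ("Mon", 16): 95, ("Sat", 13): 120, ("Sat", 9): 15}
    for cell, color in sorted(tercile_buckets(checkouts).items()):
        print(cell, "->", color)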
And this, again, is a starting place. From here, we've created workbooks where they can dive deeper into the numbers; they can look at computer usage specifically, checkouts specifically, returns specifically, and I'll walk you through those next. These are screenshots of a couple of examples of different workbooks, and there's a lot more involved; each of these workbooks has probably seven or eight different worksheets where the managers can take a deeper look. This is an example of a computer usage report that they can filter on their own; it shows computer usage at different times of the day, and they can filter for different age groups and card types. Here is a returns workbook, the same kind of information but shown in a slightly different way, again with lots of different filters. We wanted to provide these varying levels of detail because we know that we have many staff members with different levels of comfort with data. We have those who are less familiar with deep dives into the data, and that's where we want to give them the option to look at something like this, if that's what they're comfortable with. But we do know we have lots of data geeks and gurus out there as well, and we wanted to give them the ability to go really deep into the numbers, and they can download exports of the data and things like that if they want to. Here we have two more: this shows the checkouts by the time of day, and here it's also filtered by each group of material; on the different checkout-related workbooks, we allow them to filter based on the item information that we have in the ILS system. And here we have some information where they can see, again, busyness by the day of the week and the time of the day for occupancy in general, so how many people are in the building at any given time, and they can schedule according to that. All of these different Tableau workbooks are connected directly to the data sources so that the information is readily updated. We had everything update on a nightly refresh, I think at three or four a.m., five a.m. for the later workbooks, so that they wouldn't interfere with the speed of any of our servers, but staff and managers could come in the morning and get new and pretty live and accurate data. So the results of this were more fact-based staffing decisions at the system level, around budget time and as opportunities arose due to natural attrition, as Jennifer mentioned earlier. At the local level, managers had data at their disposal to guide them in deciding when to schedule staff. And over time, staffing decisions and workload are fairer and more equitable; it helps each location see why they're receiving a certain number of staff, and it avoids thoughts of unfairness, or that maybe favoritism is leading to certain decisions. And lastly, it really helped us try to build a culture of data and evidence-based decision making. This is a topic a lot of our staff is interested in, so it was a good way to get buy-in from managers across the system and introduce them to data. And that ends our presentation; we'll now open it up for any questions. Great. I'll unmute myself here too. Thank you very much, Jennifer and Vicky. For the audience, thank you for staying with us; do use the Q&A function if you have questions, and I will read the questions for the speakers and the audience. And there is one question: all branch libraries say they are busier than the others. Did you get any pushback before your study from branch managers who suspected your assessment might show they were not as busy as the others? If you did get pushback and got over it, we'd love to hear any tips you can share on how. I think people were concerned. I remember when they first heard the rumors that there was going to be a staffing study, there were concerns, because people are always somewhat skeptical of a new approach. So we were very conscious of that possibility from the beginning, and in order to address it, our approach was to really involve everyone from the start. We involved everyone in the design of the study, and when I mentioned those particular activities and the weightings: we involved all of the branch managers, and our system divides things at a deeper level too, into access and engagement managers, so we invited them as well to help us weight the activities. I think that really helped, because then everybody was weighting it together, and once everybody agreed with the weights, it was sort of, let's just see what the numbers are; you agree with the approach, and then we have to trust the numbers that we get from it. So we did have that problem, but I think it definitely helped to involve people from the very beginning. Great. Another comment: love seeing this work. And a question: it seems like your focus is largely influenced by a service desk model; any plans to expand this beyond service points? That is a great question, because that is definitely the harder part. The easier part is the desk functions, because we just have so much better data on that: we have data on the returns, the checkouts, holds pulled, renewals. The part that we have less data on is the programming and outreach parts, and I will admit that that is a weaker part of the model. We did attempt it, and that's where a lot of the population metrics came in; we did look at numbers of programs, we did look at room reservations and outreach, but that is definitely the weaker part of the model, and it could definitely be strengthened; we had plans for how that could be done. The intention, I think, is that that's a work in progress, where the service side is a little bit stronger because there's just so much more data.
Yeah, all these transactions are easy to track. And another question that's coming through the chat: were you able to build trend analysis into the model, forecasting the staff level needed based on year-on-year trend changes? We attempted that. It's a little trickier, just because we haven't had some of our systems for that many years. For example, I think we've had our computer reservation system for four years, and we're actually in the middle of transitioning to a new one. Our ILS is where we had the best data; I think we had ILS data going back eight or nine years, so we were able to see trends there. But it's also difficult because in the meantime we've had so many changes in the system: we've had libraries move, or become expanded, or close for a year for renovations, and now with COVID, who knows what it's going to look like in the next run. So it definitely was attempted, but I will admit that's a more difficult part. Our focus was really on showing everybody the raw information, at least in those Tableau workbooks, so that they could look and see what it looked like in 2018, '17, '16 and make their own assumptions, while also remembering, oh, but that was the year that branch was closed in November, or that was the year we had the giant ice storm during this month, things like that. So we did try to leave at least the local manager part less prescriptive and more just, here's the information, and maybe you can draw some conclusions. This is great. So, people asked before: are these Tableau dashboards public, private, or both? Right now they are not public, but I'd be happy to share screenshots, or if people want one-on-one conversations, things like that; but right now they're not public. How many dashboards are there? So, I'm trying to think through. We have one basically for each system, so there's one for the door sensors, then we have one for returns. Actually, it's not per system, it's sort of per idea: one for returns, one for checkouts, then there is a computer reservation one, and there's also programming. Those are our main time-sensitive ones, and then there's the aggregate, so there are six time-sensitive ones, and then we do have additional ones, like maps and things like that, where they can look at different types of information. But on the time-sensitive ones, there are six. Great. And have you considered including job satisfaction? We have, and there was actually a separate project, the library culture survey, right? Yeah, so we had a separate project, and that was our name for it. Morale has been an issue in our system for many years, and it's been a particularly hard challenge lately, so it has been a focus, but it was not lumped into this project. We looked at it separately, through the library culture survey; we made a survey where we had different metrics and tried to assess how different teams were feeling and what we could do.
Great. Let's quickly go through the last two questions we have in the queue. A library employee survey we administered locally indicated that 50 percent felt their workload was greater during remote and hybrid work situations because of COVID; do you have any additional advice about how to assess workload remotely? To be honest, that's something we haven't had a chance to address yet, because of how everything's changing. But we do know that any future revisions will have to really rethink how we're looking at it, both for this year, with its immediate big changes, and for whatever our new normal might be. Whether things return exactly to how they were, or there's some sort of in-between stage, probably all these activities will need different weightings, and there may even be a change in what activities we have listed. But the honest answer is we don't have any advice yet; maybe in the future. Thank you. Were most of your systems easy to connect directly into Tableau, or did you have to do a lot of work to get the data into a format that can automatically update daily? Most of them, yes. If it's a bigger, professional company, most have the ability to give you read-only access to their SQL databases. Our ILS is based on an Oracle database, so we can connect directly, with read-only permissions, into that Oracle database, and then I think we also had a MySQL and a Postgres. As long as they're a bigger company, they'll let you. We do have a couple, our programming system for example, from more homegrown companies, where that requires manual download and import of information. So it's a bit of both, but for the most part we've been lucky in that we work with bigger vendors that have better capabilities. I would say reach out to your vendor; they should be able to help you figure out whether that's possible or not.
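For readers who want a concrete picture, here is a minimal sketch of the kind of nightly read-only extract described in that answer. The connection string, table, and column names are hypothetical, and it assumes the pandas and SQLAlchemy libraries with an Oracle driver; it is not the presenters' actual pipeline.

    # Minimal sketch of a nightly read-only pull from a vendor SQL database,
    # of the kind a Tableau data source could refresh from. All identifiers
    # (DSN, table, columns) are hypothetical; assumes pandas + SQLAlchemy.
    import pandas as pd
    from sqlalchemy import create_engine, text

    # Read-only credentials supplied by the vendor (hypothetical DSN).
    engine = create_engine("oracle+oracledb://readonly_user:secret@ils-host:1521/?service_name=ILS")

    QUERY = text("""
        SELECT branch_id, activity_type, activity_ts
        FROM circulation_transactions
        WHERE activity_ts >= TRUNC(SYSDATE) - 1  -- yesterday's activity
    """)

    def nightly_extract():
        """Pull the previous day's transactions for the dashboards."""
        with engine.connect() as conn:
            return pd.read_sql(QUERY, conn)

    if __name__ == "__main__":
        df = nightly_extract()
        print(df.groupby(["branch_id", "activity_type"]).size())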
Great, thank you, Jennifer and Vicky. I'd like to ask that we stop sharing the slides so we can see all the panelists together and bring everybody together. We have about 10 minutes for additional questions, so feel free to put your questions in the Q&A box, and if your question is for a specific person on our panel, please indicate that in the Q&A box. We do have something we left from earlier, a question for Aaron Noland, but this concept of mastery experience might be something everybody would want to comment on. So, Aaron, the question was, in relation to something mentioned: did I understand correctly that one person is going to lead a mastery experience? I see. So, just for some quick refreshing context: the highest level for increasing self-efficacy was to participate in a mastery experience. And no, not just one person. If I participate in a mastery experience, it will increase my self-efficacy for whatever that experience is about, in this case assessment initiatives. But the notion of doing it organizationally, of broadening involvement in, say, a pilot project that allows people to work through it and do so successfully, that would then tick up, or increase, efficacy about being able to do so in the future. Hopefully that answers the question. Yeah, I think the way I read the question is: do you have someone coordinating and paying attention and creating the environment for mastery experiences for your people? I see, I see. Thank you. I don't know if that's the right interpretation, but that's how I read it. Well, that's good, that's fair. I think that's incumbent upon assessment staff and library leadership, to really help create space for that and to be open to doing so: partnering, rather than doing assessment for folks. That helps with buy-in, it helps with meaningfulness, and we often miss the mark because we're not subject matter experts in what we're trying to assess. So the more we can partner, the easier it is to actually create that culture of assessment, rather than having assessment sort of lie as an administrative layer over top of activities. Great. Would anyone else like to comment on this concept of mastery experience? Clearly we're all mastering the experience of living through COVID-19, right? Any insights? Do you feel your organizations, your staff, or you yourselves are mastering this new environment in a creative, optimistic way? I can come in. I've just seen one of the Q&As coming in about what leaders can do to support and encourage this work, and I think there's something in me about empowering my team to drive and deliver on this, expecting it of them, and understanding and supporting them: recognizing that I don't expect them to come to me to ask me to solve the problems; I expect us to work on it as a partnership, as a team. We've gone through a massive organizational cultural change in the last five years, and it's been really lovely to get us to a space where we are working as a collaborative team and supporting each other to solve the problems and come up with new creative suggestions. You know, for us, being an online university, you would think COVID is fine, but we immediately switched 100 people to working from home for the first time, and we've supported each other throughout that, as I'm sure you all have too, and come up with very rapid decision making on creative solutions for how we can support our customers and how we can support each other at this time. It has been really lovely to see, and it's also humanized things a little, to me. We are all looking after each other on a personal level as well as an organizational level, and it's been quite nice to see people's cats wandering into shot and have a laugh about kids wandering behind you as you're trying to balance schoolwork with life, with priorities, with being a senior library manager, and just getting things done together as a team. It's really wonderful. Great, thank you, Selena, for reminding us we have each other, even if it's virtual. What are our assessment professionals supposed to do if the leadership isn't supportive? I think that's on the minds of a few people. Shall I call upon you? Feel free to unmute; everybody's muted, so let's get everybody to unmute so we can have a conversation. Arthur, do you want to ask that question again? I want to make sure I heard it correctly. Yeah: what are assessment professionals supposed to do if their leadership is not supportive? Well, there's a lot there; that's a big question. I would say that if assessment professionals are trying to make a case among their colleagues to library leadership, there might be a role there, but I have a hard time imagining being able to do any kind of surveying or climate evaluation without some support from library leadership.
You know, we've done some pretty heavy-hitting surveys of staff at Duke University Libraries, but we've had to do that with support from our library leadership. They've had to help encourage people to take the surveys and help staff feel confident and comfortable in responding. So it's hard for me to imagine what we can do without support from library leadership in a really long-lasting, meaningful way, but if others have ideas, I would love to hear them. I think the first step would be to establish a better rapport and relationship with library leadership, to help them understand the importance of assessment and the role that we can play. Yeah. Susanna: it's just good to say that I think sometimes what feels like a lack of support, and I know this frustration, not to say that my leadership hasn't been supportive, is that sometimes there can be a disconnect. I think there are very few library leaders who are anti-assessment, but you soon become the person who knows the most about library assessment in the library, because it is a niche kind of specialty, and as you go deeper, down the rabbit holes of it, you soon start to bubble up with all sorts of ideas about best practices and what you should be doing. It may not be that library leaders are opposed to that, but that they have their own notions of what kinds of assessment are valuable and what you should be working on, and sometimes finding the meeting place of those two is where the work actually has to be done. I think most library leaders probably know enough about the profession to know that it's something they should value, but there can be all sorts of assessment, as we know, and all sorts of ways of valuing it. Just my thought. Yes, Eric. Just building on Emily and Susanna's comments, I think there were two interesting ideas I heard today. One: Aaron just used the word "with," I think, rather than "to," in talking about doing assessment with others. And, I think it was Emily's presentation, but I might be mis-citing, there was this notion that assessment is part of a change management process. So that might be my advice to an assessment professional who feels like they don't have buy-in: go look at the ADKAR model, or think about Prosci, and think about ways in which you would use an assessment method to get buy-in from your leadership. And directly to Susanna's point, I think as a profession we care a lot about assessment, and often the disagreements, from what I see, come down to disagreements around methodology or around the core question, not about the value of assessment. Michael, go ahead, please. Oh, well, thank you. I was just going to say, sort of in relation to what Emily was saying: I was in a training earlier today, and we were talking about bottom-up versus top-down and which approach is better. To Emily's point, if you're doing something campus-wide, at that global level, you absolutely have to have that top-down perspective and leadership. However, on a couple of occasions in my career in this field, I've seen that if you can get two or three middle managers on the same page about something, for the exact reason that Aaron mentioned, that we don't have the subject matter expertise and in many cases they do, those collaborations, getting two or three of those middle managers on the same page and bringing a solution to library leadership, can be a very helpful way of earning their trust and respect.
Can I? This is maybe controversial: I would say don't use the word assessment. It's a loaded term; it probably triggers a lot of reactions. Talk about what it is that you're actually trying to aim at. And to Laura's question in the chat about being the lone and first assessment librarian, and how to start and how to get prioritized: don't start with assessment models that are built for large institutions that have robust assessment infrastructure and history and buy-in and resources. Start with what you can chip away at, and you'll build it over time. But I would say, controversially, do not use the word assessment. Talk about why; talk about improvement; talk about understanding how users are orienting to programs, activities, whatever it is you're working on. And do not talk about data management. And I'll just add to that that I've had success using the phrase problem solving: you look for a thorny problem to solve that will not only help your users but also make life easier and better for your colleagues, and that can be a good way to get started. That has worked for people. Yeah, absolutely. I think, going back to Laura's comment, because Laura put the question into the chat for just the panelists to see, and it's about being that lone assessment librarian: as somebody who was that lone assessment librarian, I know how lonely that is. What I would say, Laura, if you're just starting out in this field: welcome to the tribe, welcome to us. Lean on these people. I'm surrounded in this Zoom call by people I've been reading about for years, who've been supporting me through decades now, helping each other out. We are normally lone professionals in this field, and this conference, and I'm going to plug the library performance measurement conference in the UK, the sister conference of this conference, annually keeps us going, keeps us motivated, keeps us churning away, building on our ideas, sharing our practice, and making those connections so you can pick up the phone, or Skype, or Zoom, or whatever it is these days, to keep that conversation flowing throughout the year, so you know what's going on and you can help each other out and share ideas. It's really great. Maybe we should make this the library problem solving conference. Yeah, there you go, that's the answer. So let's do a round robin: the one thing senior leaders can do to support and encourage this work. And I'm going to go through my screen. Jennifer, you are there on the top left. What leaders can do to support: I think one thing is just to give us the space. We've struggled with that somewhat; for a while there were the two of us, Vicky and me, but we felt very alone and sort of struggled to be heard or be invited to the meetings and things like that. So I think just being included, even if we're just there to listen; being involved at all levels is what we need. Selena: even if you're trying to get this going, being given the time and the space for it. I'm not in a position where I've got a sole post just for assessment; it's a shared responsibility across a number of teams, and actually enabling that prioritization to happen is really valuable, as is ensuring that those team members recognize the value of it.
valuable, as is ensuring that those team members recognize the value of it. I've come to this the opposite way, I guess: I'm sort of a library leader now, but I'm a library assessment professional at my heart, my roots. So what I'm trying to do is instill a culture of library assessment in my team, and that's an interesting challenge for me, to actually show them the data. Jennifer and Vicki, your presentation, I just want to play it back to my team tomorrow, because this is what I've been talking about. That data insight is so valuable. Trying to share the power this has is really the next challenge for me, I guess.
Holt? See, I thought you were going alphabetically by last name, so I thought I could crib from everybody else. I've been in a unique position in that I've been wholeheartedly supported by the library administration from the get-go, and still am. But one of the things to look to do, and here I think of the next level of leader down from the university librarian and the associate university librarians, is a concept we call low-hanging fruit: find the ones that want to work with you, that are easy to work with, and build from them and build success. As you build those kinds of successes, others take a look and say, well, why not? What evidence do you have? Let's take a look. It starts becoming that question, at all levels, of what data do you have to support that decision. We've always tried to start with the low-hanging fruit: what's the easiest to get at, where are we going to build some success, and where can they see how things are progressing and the difference it makes to know the data and to inform the decision.
Aaron, you're next. Wait, was it Sharon or Aaron? I wasn't hearing very well. I have Aaron, Michael, and then Sharon, but let's go with Sharon, and then we'll come back to you; sorry, Aaron.
I think library leadership can do a couple of things. One, actually document the concept of assessment in the strategic plan, in writing. The next would be, at any public gatherings and all-library staff meetings, to verbally advocate for the concept of assessment. And the third would be to actually crosswalk the assessment work that's been done into actual changes, improvements, decisions, or resource allocations. So, three things.
Coming back to you, Aaron. That stuff should be written down by all of us; I think that's really quite good. I would say lower the stakes and increase the visibility. It's sort of a needle-threading, but assessment inherently is about experimentation, any improvement is about experimentation, so take that approach without raising the stakes too much, while, as Sharon was saying, really noting how valuable this sort of thing can be.
Michael, you're next on my screen. I almost think of this work as being, rather than one of the traditional library silos, more like finance or HR, a little more of an umbrella function for the university. To that end, I think we really need to have an intimate idea of what's going on at the top levels of the organization, and not just by reading a strategic plan, not just "here's the plan: read it and assess it,"
but by having that back-and-forth communication that takes place over months and years, to really learn what we are actually trying to accomplish here, being a participant in that conversation, and trying to communicate back the feasibility of those approaches and of the assessment approaches to them. So, to sum it up, it's a seat-at-the-table kind of thing.
So I have Dana and then Eric, Emily, Susanna, and Vicki on my screen, if you can finish the round robin in this way.
I don't have much to add to what other people have said, because it's been so great, except I did have one thought, because I'm on a strategic planning committee right now, and that is thinking about assessment, and talking about assessment, as a people-centered activity, and keeping it in that frame as much as possible, out of the frame of assessment and data. That's about it.
I think I was listed next after Dana, Martha. Something Holt said a few minutes ago really connected for me, and this isn't quite how he said it, but: demonstrate support for assessment by asking assessment-centered or data-centered questions when you're trying to evaluate new ideas. I've heard many of my co-panelists say this: make assessment more visible just through simple behaviors. I think it would be transformative if senior leadership did that.
Emily, Susanna, Vicki. Sure. It sounds really simple, but it can be surprisingly hard, and that is: listen. Rather than be defensive, resist that urge to be defensive; instead, just listen, and serve as a role model to colleagues throughout the library, because that listening is so critical to really understanding what is happening and to making meaningful change.
Hey, Susanna. I was going to say that, so, yay. I wrote down listen, and then I went in the opposite direction, or both. Library leaders should listen, but also share, in the sense of a mind dump: here's what I was thinking. We imagine we already know where assessment questions will come from, but they come from everywhere, right? They come from the obvious places we expect, but they also come from the day-to-day; they come from the little stuff. I find myself having the most exciting ideas just sitting in random meetings that aren't about library assessment but about other things. So as library leaders, giving anyone who's involved in assessment the chance to hear, to be involved in a lot of conversations, or even just to listen in on a lot of conversations, gives the people doing that work the chance to have those ideas come around.
I was actually just thinking, before I sign off here, that a lot of the advice given is great advice for anyone who wants to do library assessment as well. I particularly like the flavor of the notion of lowering the stakes, of simplifying: if you can, just take a breath, ask a question, and see where it goes. Everyone is so very impressive at this conference, and I don't mean myself, but I hear ideas here that just blow me away with the scale and the scope of them. Yet that's just one part of library assessment, and there are so many smaller-scale, little questions that can be answered. I love that notion of lower stakes, increased visibility. It doesn't have to be big and impressive; you
can get there. That'll come as you put a bunch of things together. Just start with a question and see where it goes.
So, Vicki, you're the last. I'd say being open and involved in setting expectations for the assessment during the early stages and planning, and really thinking about the action you might take with the assessment, to avoid demoralizing staff. If you do a big assessment and they're all interested in it and then nothing happens with it, that's demoralizing, so just make sure that there's follow-up on that assessment.
Thank you, and thank you all for your great ideas. There is one last question in the Q&A, but I'm going to invite all of you to type your answers to it, and my colleagues at ARL will hold things open after the closing for five minutes so you can finish typing your answers. The question is: after COVID-19, user studies and assessment work are getting more challenging; what tips and tricks are you recommending? So please go to the Q&A and type your answer. Thank you to all of our panelists, our presenters, and our audience. PowerPoint slides, session recordings, and papers will be made available on the conference website. Our next session will be held on Wednesday, December 16, on the theme of critical theoretical assessment and space. It will start at 1 p.m. Eastern Time, and registration opens on Tuesday. Thank you, everybody. Stay well.