Good morning and welcome to this week's edition of NCompass Live. I'm your host, Krista Porter, here at the Nebraska Library Commission. NCompass Live is the Commission's weekly online event. We are a webinar, a webcast, an online show; the terminology is up for debate. But whatever you want to call us, we are here live, online, every Wednesday morning at 10 a.m. Central Time. We do record the shows every week, so if you're unable to join us on Wednesday mornings, that's okay. You can always go to our website and watch the recordings afterwards, and at the end of today's show I will show you where the website is and where all of our recordings are available to you. We include the recording of the show, which goes into our YouTube account. If there are any presentations, slides, or handouts, we include those or links to those as well. And any websites that are mentioned, we collect into our Delicious account so that you have access to those too. Both the live show and the recordings are free and open to anyone to watch. So if you see any topics coming up, or any in our archives, that you think may be of interest to any of your colleagues, friends, neighbors, family, anybody, go ahead and send them to our website and they can watch the shows there. We do a mixture of things here: book reviews, mini-training sessions, interviews, demos. Basically, the only criterion we have is that it is library-related: something libraries are doing, something they could be doing, something that might be of use to them. That's really the only criterion a topic has to meet. Some of our topics might seem a little out of the box; today's is obviously very relevant to libraries. But, you know, trust us, everything always comes around to having something to do with libraries, as we are the Nebraska Library Commission here.
We do have presentations and some shows that are done specifically by Nebraska Library Commission staff, about things we're doing here, but we also bring in guest speakers, as we have this morning. On the line with us, from both here in Nebraska and all the way over on the East Coast, is Amy Schindler, who joins us from the University of Nebraska at Omaha, just north of where I am here in Lincoln. Hi, Amy. Good morning, Krista. Good morning. And also, as you can see on the slide here, Christian Dupont from Boston College. Good morning, Christian. Oh, Krista, hello to you all. And Emily Hartman from Harvard University. Good morning. Good morning. And, I don't know about you guys, but I know here in Omaha it is snowing. You're all good for today, though? All right. Amy contacted me about doing this session here today about the standards they're developing for archives and special collections, and I will just hand it over to her to explain exactly what's going on, what you all have been working on, and what you're looking for, hopefully, from our attendees today. Okay, great. Thank you, Krista. Again, good morning, everyone, and thank you for joining us today to learn more about version two of the proposed standardized statistical measures and metrics for public services in archival repositories and special collections libraries. We're here this morning to share the current version of the proposed standard for public services measures with all of you, and hopefully to hear from many of you with your questions and comments, either live today or by February 17, which is the closing date for our comment period. So I'm going to begin today by sharing background about the proposed standard. Then my task force co-chair, Christian, will talk about the structure of the document, and task force member Emily will then take you on a deep dive of the proposed standard's reference transactions domain.
And then we'll wrap up with comments: both how to comment on the document, and then hopefully also your comments and questions. So again, feel free to ask your questions as they come up during our presentation. So, background. The SAA-ACRL/RBMS Joint Task Force on the Development of Standardized Statistical Measures for Public Services in Archival Repositories and Special Collections Libraries was charged in 2014. And yes, it's a mouthful. The task force consists of 10 members, five appointed by SAA and five appointed by ACRL/RBMS, and includes two co-chairs representing each of the organizations. The call to develop standardized statistical measures for public services in archives and special collections goes back decades, with increasing interest in assessment in recent years. We can see this demand in sessions at our professional conferences, in publications, and in grant-supported initiatives aimed at fostering cultures of assessment and demonstrating the value that libraries and archives bring to our communities and society as a whole. Many of us have probably been asked, in one way or another, to demonstrate the value provided by our repository, which should include qualitative and quantitative measures. For instance, you can share the story of a researcher who was able to get a building on the National Register of Historic Places using material from our repositories; and we may also want to share the amount of time that that researcher, and our other researchers, spent in our reading rooms last year. So getting back to the task force: in 2015, we conducted a survey of community practice. And I just want to say, again, a big thank you to the 313 repositories who responded. We know it was not for the faint of heart, as one of our colleagues said. So thank you to everyone for your responses. Version one of the draft document was released for comment in June 2016.
And the task force was fortunate to receive a great deal of comments, both live at the ALA and SAA annual meetings and also online and by email. So we're here today in part because we wanted another opportunity to share and receive live feedback on version two. Version two of the draft document was released for comment just last month, and that comment period will close February 17, 2017. So, next week. And then what will happen? After next week, the task force will again get back to work, revising the document based on the feedback we hear from all of you. Or, if all of your feedback is "it looks great, go ahead," that would be fine too. But we hope you have some comments for us. Then we'll submit the proposed standard to SAA and ACRL/RBMS, planning for April 2017. And then we wait, hopefully, for the proposal to work its way through the standards committees and governing bodies of each organization. So we probably won't have a standard in place for your new fiscal year data collection this summer. But that doesn't mean you can't start using the draft document, and we know that some repositories already have. So: this standard was developed to provide archivists and special collections librarians with a set of precisely defined, practical measures. They are based upon commonly accepted professional practices and can be used to establish local statistical data collection practices that support the assessment of public services and their operational impacts at your institution. We are not attempting to reduce the value of archives and special collections to a set of numerical inputs and outputs. We wish to establish a common and as-precise-as-possible vocabulary to facilitate conversations about the ways in which archives and special collections deliver value, and how we might increase it. Careful attention was given to creating the measures so that any type of repository that manages and provides access to archival and special collections holdings may use them.
Also, they were formulated so that repositories of any size and any level of budgetary resources could implement the measures, even if only the basic measures, which you'll hear more about in a couple of minutes. Admittedly, some of these measures will be much easier to implement if you're using a digital tool and not just pen and paper to track your data. The measures were also formulated to support the aggregation of public services data from multiple institutions, to provide a basis for institutional comparisons and benchmarking. This is something for the future, though. And then, finally, I just wanted to say that we are not the task force that's working on holdings counts and measures, or the one on primary source literacy. Those are two other joint ACRL/RBMS-SAA task forces that are also currently at work, and you can check their websites for more information on their current draft documents open for comment. And then this here, this is just a list of our eight proposed public services domains, I should say. Those of you who reviewed version one will note that version two has separated out instruction sessions, by popular demand, into their own domain. They still share a lot in common with events, but they are on their own now. And with that, I'll turn it over to Christian. Thank you, Amy. And again, thank you to all of you. We are now up to what, Krista? How many attendees? 120, I'm seeing. It's 120 logins, but I have had some notes from people that there may be groups listening as well. So as far as the number of people, we won't know exactly, but 120 individual logins, yes. That's great. Thank you, all of you, for taking some time out of the day to engage with this effort. Obviously, many of you have already responded to our earlier survey. For some of you, this is a brand new thing, though, and maybe you are a practicing archivist or special collections librarian.
Maybe you're an assessment librarian at your institution, or an administrator. We thank you for joining us, because in fact this really is a brand new effort. There has not been any standard that has defined, in the domain of special collections and archives, these specific measures; there has been no common vocabulary, no common measures. And we're aware that it's kind of paradoxical: we're all more focused on demonstrating value and assessment these days, and yet some of the annual surveys that we participate in have actually cut out the sections that have to do with special collections and archives. So in fact, our effort here is to complement the kinds of statistical surveys that we perform for our own institutions and the aggregated surveys that we participate in, and to enable us, and those who look at what we are doing in special collections and archives, to have these common measures and this common vocabulary, so that we can really bring special collections and archives into this larger assessment conversation, this conversation about the value that libraries and archives bring to our users. So what are the public services domains? Let's get our vocabulary straight here first. In response to our surveys, we thought about how we could really cluster things together, and we defined what we call these eight domains of public services measures that we'll be getting into. So, user demographics: we all tend to collect information about who is using our archives and special collections, particularly those researchers who come and register to use our materials on site in a reading room setting. Some of our researchers come to us by email reference transactions or phone calls; that's an activity that we in special collections and archives have in common with libraries in general.
Reading room visits, again: people who come on site to use our materials. And collection use: actually counting how those materials are used in different ways. We also, as libraries and archival and special collections repositories, often host events, maybe lectures, and we want to find some common vocabulary for tracking those types of public events. A lot of us these days are focused on instructional outreach, whether we're part of a campus community or maybe a historical society that's serving K through 12 students in the area. So instruction sessions are a special type of event, but because they have their own distinct qualities, we separated them into a separate domain for statistical measures. We also tend to produce a lot of exhibitions, both physical exhibitions, people coming into our repositories to see a curated display of some of our items, and, for many of us, exhibitions online. So we developed some measures and metrics to help repositories describe their activities in producing physical and online exhibitions. And then there's a general category of just online interactions. How many people are visiting our websites, or sections thereof? A lot of us are doing a lot more on social media these days, so how do we quantify that activity? Again, our standard provides some definitions and guidelines for that. Okay, now let's roll to the next slide here, please. Common vocabulary, as Amy said. Because our goal really is to create a standard that complements other standards and statistical surveys that we are asked to respond to, we have, in every case possible, borrowed or adapted our definitions from other standards: from other statistical surveys, and from the international standards that inform them. For instance, we're all familiar with Z39.50 as a connection protocol for library catalogs.
Well, the Z39.7 ANSI/NISO standard, Information Services and Use: Metrics and Statistics for Libraries and Information Providers Data Dictionary, covers the same kind of ground. We draw from that, from ISO 2789, and from the Society of American Archivists glossary. The idea being that, again, we're trying to work with common definitions so that we are really speaking the same language. In a very few cases, though, we did have to devise our own definitions. The most interesting of these, which I'll mention here because it might come into our discussion later, is what we call a collection unit. Something that's special about special collections and archives is that we hold material in various formats: not just books or rare books, but boxes of archival documents, maps, drawings, artifacts. And we all tend to manage those a little bit differently, in how we issue them to our readers in a reading room, or in how we track how many items we pulled for a class. This is an example of how we've said we don't want to change the practice of archival repositories by imposing a standard on them. We want them to follow their normal operation, what works for them, and give them a systematic way of counting things. So whether you circulate, or check out, to a reader in a reading room one folder at a time from a box, or a whole box, or two volumes at a time, we would call that a collection unit: the way the material is checked out to a user. That's an example of a term you won't find in any of the library standards; you'll find it in ours, but we give it a rationale that helps it make sense in the context of the other standards that we employ. And that's where the other part of our definitions section comes in: a glossary appendix at the end of the long standards document that includes our sources, comments on why we chose the definitions that we have, and a thesaurus relating them to other terms.
And then, as you'll see as Emily walks us through an example domain, reference transactions, you'll see that specific terms we have defined in our glossary are capitalized in context, so you can refer to them, again, in precise ways as you use the standard. Okay, the next slide, please. So going back to the domains: we have eight public services domains, and we have given them each the same modular structure, so that you can really work in the standard in a specific area and then kind of telescope out and work with different sections. It is a very long document, as some of you who have looked at it already know, some 60 or 70 pages, but because of that common structure, we hope it'll be easy for you to navigate. And yes, per one comment received already, we will be numbering all sections, so it'll be easier to navigate the document that way in the final version. But in terms of its basic DNA, if you will: for each domain, we give an overview. What are we talking about when we say online interactions? And then we propose for each domain one basic measure that every repository can keep, no matter how large or how small, how well funded or not, whether it's using paper and pencil to tally statistics or an electronic system, a basic measure that has meaning in that domain. We provide a rationale for that basic measure, and then we give some very specific guidelines for collection: how to count. And if those guidelines aren't clear enough, we even give you some applications and examples, so you can really check and say, yes, okay, we are counting it in a way that's going to be compatible. Because, as Amy mentioned, one goal here is that we could have a statistical survey of special collections and archival repositories, on some national scale even, so that we could aggregate statistics and compare them. So we do want to be able to compare apples with apples.
So we want to help institutions collect those statistics in very consistent and precise ways. So, basic measures; and then advanced measures are those other types of measures that would apply in a domain area. But we don't expect every institution to collect all of them. In fact, if you tried to collect all of the three or four dozen statistics that we suggest here, you would probably spend a good deal of time tracking things. So we suggest areas that would be useful for assessing different types of activity, and we give a rationale for those and guidelines for collection. Emily's example, coming up next, will help to really clarify the distinction between basic and advanced measures and how you might apply them. The other concept that we'll introduce here is metrics. We also provide a series of suggested, recommended metrics that might help you assess your performance and service delivery in a given area. What do we mean by metrics and measures? So, next slide. A measure is simply the result of taking a measurement of some quantifiable object or process, right? How many people visited your repository? That's a simple measure. How many collection units did you issue to readers in a reading room? A simple measure. A metric is a calculated ratio between measures, or between a measure and an independent variable like time. So, how many researchers per month have come in? Or an interesting metric might be one that maybe you've never thought of: how many collection units does a researcher look at, on average, when he or she comes to the reading room? Has that changed since we implemented a policy that allows people, say, to bring in their digital cameras and photograph things in our reading rooms instead of having to read them and take notes right there on the spot? So those are examples of the kinds of measures and metrics that we work with as concepts in our document. A couple more concepts, and this will be my last slide: inputs, outputs, value, and assessment, right?
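Before moving on, the measure-versus-metric distinction just described can be made concrete in a few lines of code. This is purely an editorial illustration, not part of the proposed standard or the presentation; all the numbers and variable names here are hypothetical.

```python
# Illustrative sketch only: metrics are ratios computed from raw measures.
# All values and names are hypothetical, not from the proposed standard.

# Raw measures, tallied over one reporting year
reading_room_visits = 480        # number of researcher visits (a measure)
collection_units_issued = 2160   # units checked out to readers (a measure)
months_in_period = 12

# Metric: measure divided by an independent variable (time)
visits_per_month = reading_room_visits / months_in_period

# Metric: one measure divided by another measure
units_per_visit = collection_units_issued / reading_room_visits

print(f"Visits per month: {visits_per_month:.1f}")        # 40.0
print(f"Collection units per visit: {units_per_visit:.1f}")  # 4.5
```

Tracked year over year, a ratio like units per visit is the kind of number that could reveal, for example, whether a new camera policy changed how researchers use the reading room.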
We're all trying to get to some sort of assessment of what we're doing here. But let's back up and clarify that our statistical standard, again, is a quantitative and not a qualitative standard. We are counting service transactions and interactions. We are counting inputs: resources that we allocate to a service. An example of that is the one Amy alluded to earlier, of a researcher who comes in and engages with staff on a reference question that leads to getting a building registered as a historic place. How much time is the staff spending on that activity of answering a reference question? That's an example of a staff input, and we do have measures throughout our document, in different domains, that help you tally and track precisely the amount of time you are putting into a particular service. The outputs are the service transactions themselves. In special collections and archives, we're not always used to thinking about what we're doing as having a transactional nature. We tend to get so focused on our researcher that everything is sort of a one-off special thing that we do. But in fact, when we answer a reference question for one person and then another, that's a transaction. It's a repeated service cycle. And those are the outputs that we're going to be counting. So, inputs and outputs. The concept that we often work with now in assessment is really value, and that's really from the user's perspective. What benefit does the service of answering a reference question have for the user? What did that do for me? Well, it helped me to get that building on the register; I couldn't have done it without that. That's a user story that, again, won't come directly from the statistics, from counting how much time we spent or how many service transactions we did.
But by at least building some consistent vocabulary and ways of counting the things that we're doing, we're building a basis for repositories to then approach real value assessment, and maybe combine these quantitative measures with some qualitative assessments as well. But even with what we're doing here, purely numbers, we can get a lot of really meaningful assessment data. Operational efficiency: how good are we? Should we be training ourselves to answer reference questions more efficiently, or redistributing how they're handled by different staff? How quickly are we getting materials retrieved? These are operational efficiency questions, and they have a bottom-line impact on our operations and our staffing. Service effectiveness, too; numbers can tell us stories there. You know, when people submit a reproduction order, they'd like to have something digitized. Are we getting it done within a certain amount of time? Are we getting it done at all? That's an effectiveness measure that analyzing our outputs can really give us. And from there, we can begin to tell those stories, again, of impact and value. So those are the concepts that we're working with in the statistical standard, and Emily now will take us into a deep dive that will explain, in context, how this works with reference transactions. Okay, great. Thanks, Christian. So yeah, what we really wanted to do was show you just a focused piece of the standard. I don't think that Amy or Christian mentioned this, but the document itself, if you've taken a quick look at it, is actually something like 80 pages long. And we know that it's not a gripping cover-to-cover kind of read, but we made it that long because we really tried to fill it with the substance that will allow you to use it in a really functional way and provide a lot of the answers that you might need as you're wondering about how to apply this stuff in the real world.
And we are working as a group toward another kind of product that will be more of a quick reference item, one that will help you work with the standard without flipping through all of those pages. But we wanted to take one of these domain areas and really walk you through all of the information that we've provided, all of the direction and support that we hope we've built into the standard. So this will both show you specific details about our domain of reference transactions and serve as a model for what you might expect to encounter in the other domain areas. Hopefully this will help raise questions you might have about the utility of what we've provided, or suggestions about how we've laid out the structure of the standard. So let's look at the reference transaction domain. As we do in each of the domain areas, we begin with a definition that tries to pull together a lot of the different concepts that we work with in public services on a daily basis. The way we've pulled together the existing definitions and repurposed them for this standard is to say that a reference transaction is often the most common interaction between repository staff and users: staff engaging with users to learn about their research interests, and thinking about how we use the resources of our repositories to respond to queries and to researcher and user needs. We also mention something here that fills out the conversation we might have around what a reference transaction means, and this is a common thread in our defining work throughout the standard. We offer this additional element to the definition: reference transactions provide opportunities for staff to hear and gather stories from users about the impact that archives and special collections have on people's lives.
So that goes to what Christian was just talking about, in terms of what the outcomes of using the standard might provide us. You can go to the next one, Amy. Great. For all of our domain areas, we begin with a basic measure. The idea here is that this is something that anyone, in any kind of institution, can collect and can report. Amy mentioned at the beginning the survey that we did of our colleagues back in 2015. In that survey we really were looking at a huge range of practices, and we used it to guide our thinking very specifically about these basic measures. We checked back with that data and made sure that, yes, indeed, these are things that anybody should be able to capture. So in this case, our basic measure for reference transactions is the number of reference questions received from users, regardless of the method the user uses to submit the question. We provide a rationale, as Christian mentioned, for every measure that we suggest you collect. So if you don't buy on its face that this is a useful thing to collect, this is our way of offering some argument about why you might want to, or of opening up some new considerations for a repository, or for administrative staff who might be wondering why we would want this data and what we might do with it. In this case, our rationale for the basic measure of reference questions is that maintaining this count is a core way of tracking staff engagement with users. We say this is the most basic measure because answering reference questions is such a fundamental function of public services, and this is a good way of reflecting that. It's not that we're saying this is the only way our staff's work is represented, but it is a core and basic one. Can you go to the next, Amy?
Sorry, I was there, okay, oops. I think we, yeah. So the next area in our domain for reference transactions is the provision of guidelines for collection. And this is one of those places where we start to see why this is an 80-page document, because there are lots of different ways of collecting data, lots of different ideas that people might very reasonably have about what counts or how to count. As we said, the idea here is really to make apples be apples across repositories. So we've provided a lot of instruction about how to capture this data and how to put some parameters around our counts, so that we can move toward a place where we have at least some sense that we are comparing the same kind of information. So for the basic measure of reference questions, what we suggest is counting reference questions concerning different topics from the same individual as two different questions. That's a way of helping people not lump all of the questions that might come from a single user into one count, one tick mark. That is to say, when you're following different streams of logic or questions, or maybe consultations of materials, for a user, each becomes something separate that we count, and that builds into our basic tabulation of the number of reference questions we're getting. We also suggest that you exclude follow-up emails, multiple social media interactions, or other conversations on the same question. That's a logical extension of our first point, which was that if the same person asks a question on a different subject, those are two different questions. It's the same idea here, going in the other direction: if the same individual asks questions that build on the same initial question, that flesh out what they first came to us for, that remains one single question. We also specify that you exclude directional or general information questions.
So, "When do you have your tour?", "Can I have an instruction session?", "Where are the restrooms?", that kind of thing doesn't count in this reckoning. We also exclude requests from users to get new material while they're working in a reading room. It is not a reference question if you ask, "Where is my next box?" So again, that's just to provide real clarity around how to capture these counts. We also suggest that you count questions from users working in the reading room if the response requires staff to employ their knowledge of one or more information sources and that user hasn't already asked that question. This is a recognition that reference questions sometimes come up in the course of our patrons' work in a reading room, but that such a question may be a separate inquiry. At the bottom of this slide, you'll also see our nod to a particular kind of practice you might engage in, which is basically sampling. We understand and accept as perfectly valid the practice of sampling to collect this measure, as well as keeping a consistent tally as one moves through the course of the year, or the reporting period, whatever that might be. So, as Christian mentioned, we also try to flesh out our guidelines for collection with examples, and here are some of the examples we provide. These are little user and staff stories that we think might help people understand how these things might play out, and what to do to establish the count in the real world. So, for example: a user and a staff member exchange multiple emails about the same research topic. That's one of those cases where the repository should count it only as a single reference question, but we indicate that you may wish to apply a higher complexity level to that transaction. And that's where we get into our advanced measures.
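As an editorial aside (not from the presentation or the standard itself), the counting guidelines above could be sketched as a small decision function. Every name here is hypothetical, and as the presenters note later, a real repository's exact rule is a local policy choice.

```python
# Hypothetical sketch of the basic-measure counting guidelines.
# Not part of the proposed standard; names and structure are invented.

def counts_as_reference_question(is_directional, is_retrieval_request,
                                 is_followup_on_same_question,
                                 requires_staff_knowledge):
    """Return True if an interaction adds one to the reference-question tally.

    - Directional/general questions ("Where are the restrooms?") are excluded.
    - Requests for the next box while working in the reading room are excluded.
    - Follow-ups on the same initial question are excluded (one question, one count).
    - Otherwise, count it if answering requires staff knowledge of one or
      more information sources.
    """
    if is_directional or is_retrieval_request or is_followup_on_same_question:
        return False
    return requires_staff_knowledge

# "Where is my next box?" -> a retrieval request, not counted
print(counts_as_reference_question(False, True, False, True))   # False
# A question on a new topic that needs staff expertise -> counted
print(counts_as_reference_question(False, False, False, True))  # True
```

The point of spelling the rules out this way is exactly the "apples to apples" goal: two repositories applying the same exclusions should produce comparable counts.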
Basic measure: we're counting each reference question, but there are ways of extending that, and that comes into play when one consults the advanced measures area, which we can look at now. So I'll just quickly show you some of the advanced measures, to give you a little bit of flavor here. In this case, one thing that we could... Emily, can I jump in for a second? Yeah, of course. We do have one question about, I think, the previous slide, or what you were originally talking about. Someone is asking about follow-up questions: would follow-up questions be counted if they send you into a different collection? Does that then make it become something new, a new reference question? Yeah, the way that I would interpret that, and I'll look virtually at my task force members here, is that the way we indicate you would proceed in that case is to ask: is the substance of the question different? Answering a question might take numerous consultations of different materials or collections; that is still, at its core, the same question. But if the substance of the question shifts into something new, if the whole topic changes, if suddenly they decide, oh wait, what I really wanted to know about was this, and it's completely different from the first thing, then you've got two. Yeah, or a new research rabbit hole opens up for someone in the field, and they say, oh, this has really opened up a new area of concern for me, and I want to look a little bit over here; can you help me with that too? Right, awesome. This is Amy, if I can just jump in too. The fact that you're accessing one or two or many more collections to answer the same question, that would be captured under the collection use domain. So that effort, if you will, is partially documented there. And this is Christian; I'll just jump in here with one more comment too.
And this is a good discussion and example of where, while we want our standard to be normative and to provide some clear definitions about what to count and what not to count, at the same time we recognize that a measure of local policy has to apply here: how to do this practically. So the main thing that we want to help repositories do is to think about an area of activity like reference transactions, and then to think through and extrapolate their more local policies about this. So let's say an institution is using a reference tracking system, like help desk software, or even a spreadsheet. There may be a point where you're answering the question and you say, okay, this is now a different question and I'm going to start a new conversation thread. Okay? If you're using a reference tracking system, the way you're going to count in the end is you're probably going to run a report from that software, okay? So as long as you have a local policy, and just your judgment, that says, okay, when do we break off and make this a new question, then when you run that report at the end, it's going to have some consistency to it, and that's going to be your count. You know, on the other hand, if you are the lone arranger, or there are two of you in an office, and you're just making tick marks on a paper of how many reference questions you got, okay, as long as there's some kind of consistent understanding about how you're handling things, that's the main thing. So that if we do have an effort to aggregate statistics over time, at least your repository will be reporting information consistently. And even for your own operational use internally, as you evaluate from month to month and year to year, you'll know what you mean. It's got to be meaningful for you. That's the judgment question we want to leave you with, and we explain a little bit more in the introduction to the whole document. Yeah, exactly.
I think that's a really important point, that this guideline in the standard is meant to help provide as much clarity as it can, but we all know on the ground things happen in different ways for all of us. So we're hoping that this will provide some structure and some sense of an internal, coherent means of collecting this data, so that even as you have staff changing over time, your practices become embodied in these particular ways and you are providing a consistent way of counting. We do have another question from someone else about the directional questions that you had mentioned. The specific question is, why are directional questions not counted if they become detailed? Which I suppose could mean, can they turn into something that then gets counted? Maybe they started as directional but developed into something else. This is Christian. May I jump in with a quick answer and let others follow? Sure. One point I wanted to make here, because I was the one speaking about definitions, right? The definition that we have adapted in this case for a reference transaction is a very common one. It's the one that appears in the standards, ANSI/NISO Z39.7 and ISO 2789, that I was talking about earlier, okay? And that applies to library statistical surveys. So just like everybody else, okay, we exclude those. If you read other statistical surveys, those are the instructions that are given. A directional question, where's the bathroom, is not a reference question. A reference question has to use the knowledge of the librarian about the collections and some information source, whether that's memorized by the librarian or something that we look up. So this is something that actually corresponds to other surveys and standards for defining reference questions and collecting statistical information about them. This is Amy.
If I can just add to that, as an example that we deal with: often we'll have students walk in our front door and say, what is this place? And our frontline staff say, well, you're in archives and special collections, and they'll say a sentence or two about who we are and what we do. And sometimes the student will leave, but sometimes they will stay. And then they're asking more substantive follow-up questions about, oh, tell me about those congressional collections, or the university history. And so at that point it becomes a reference question, a transaction that we're documenting, that we're counting. Makes sense, yeah. Someone else says it may not be a reference question, but it does take time and it's a customer service question. And that's where, as we had a specific guideline on the previous slide, when the researcher comes up to the reference desk and says, may I have my next box, please? That's not a reference question. We're not using information about the collection and answering a substantive, content-related question. I just need my next material. So that's something that's peculiar to special collections and archives; you wouldn't have it at a general reference desk in a library. So that's an example where we extrapolate. The principle is you've got to have the knowledge of the professional or the staff member, and/or consulting an information resource, in order to have a reference question. So therefore we give very specific examples of things to include or exclude based on those principles. All right, thank you. Go ahead. So do we have any more questions, or does it all make sense? I don't know if we get any kind of feedback on whether we answered it fully. Does that make sense? Anyone can say if you need more information or not. Yeah, the other thing we have is a comment. Someone wants to know, so they said, great work on the whole document you've got here, the whole standard.
They want to know, after it's completed, or if you don't already have one of these, could you reduce it to a one or two page at most cheat sheet? Otherwise I'm afraid staff will never use nor reference an 80-page standard. That's what I was saying at the beginning of this deeper dive into the reference transaction, which is that we probably will spin out a document that will be useful in that kind of day-to-day way, that will be much, much shorter, as well as some kind of shorter executive summary sort of thing with a little more narrative element. So there might be something that's a very functional, tool-based product that we put out, as well as this more narrative distillation of these ideas. Great, all right. Thank you. Go ahead and continue. Okay, great. So I'll just put a spotlight on some elements of the advanced measures by showing you one example of how we're taking that basic level that we think everyone can respond to and saying, here are some ways to build on this. So one of the advanced measures is this question method, and that captures and categorizes the methods by which we receive reference questions. So in our tally for that basic measure, it's just, is there one or isn't there one? It counts as one or it doesn't. In this advanced way of looking at how we might extend some of our data collection, we're thinking about capturing also the ways in which we are receiving reference questions. So is that by email, is it by phone, is it in person, is it by telegram, carrier pigeon, whatever it is. And we're thinking that in this case, the rationale is that it shows how our users prefer to interact with us, especially as we look at this over time. So this is one of those cases where a snapshot may or may not be useful.
The fact that we're getting more phone calls this week or more emails next week might not tell us much, but certainly over time, especially if we start to see shifts in how people are interacting with us, maybe we're getting more chat reference questions, those trends show us some ways into staffing and into tailoring the sorts of services that we might provide. Maybe it even gives us some pointers for staff development or training. Do we need to build up more special collections librarians who can do that kind of chat reference? Would something like that work? And then the way that we deal with the advanced measures is by continuing, as I showed you in the basic measure, to provide those guidelines for collection and the sort of scenario examples of how we might do this count. In the reference domain, we have a couple more advanced measures that we suggest. One is time spent responding. That can be a tricky one, maybe, to encourage your staff to do. It's certainly one of those weird little quirks of life that accounting for the time that you spend on something actually takes a lot of time. Frustrating, but true. But that could be really useful and could show us a little bit, again, on the kind of staffing and value questions that we might have. We could also track question purpose, so defining the sort of purpose of a question or service request that we get, and we suggest that that is maybe best done in response to a defined rubric. And also the question complexity. Again, this is something that is best determined by some kind of rubric or scale, so that we can place it along a continuum of complexity. Let's go to the next slide. And we wrap up our section by looking at these recommended metrics. So at the beginning, Christian fleshed out the difference between the measures and metrics.
And here's where we're taking things from that flat number into something that shows us more meaningful trends. So here we're looking at the total number of reference questions that are received in a week or month. That's taking our number and looking at it across a period of time. And as we suggest in the standard, that shows us some of the patterns of the life of our institution, perhaps. We can go on; I'll just run through these metrics that we suggest. The total number of reference questions received via each method: is there some kind of pattern to that? Do we know that there are more email reference transactions at certain times of year? Does that show us the way a particular constituency might like to communicate with us? Those are things where we can start to do some analytic work in our own institutional context to try to understand better. We can go to the next one. And these are just the final metrics, which I won't take our time going through in detail. But the average number of minutes spent responding to reference questions; the average number of minutes spent responding to, maybe, an internal versus an external user group; the ratio of time responding to reference questions to users' time spent in a reading room. So that could be a kind of fascinating metric to have and to explore, and to think about some of the ways that we deploy our services and our staffing. Also the ratio of reference questions submitted by users in each of the demographic categories. So here you're seeing the way that we're pulling together the different threads of our standard and suggesting how they might help illuminate these different areas when we look at some of the data put together in these different ways. Let's go to the next one, Amy. And I think that's over to you, yeah. Okay, so I'll just wrap up really quickly here and hopefully we have some more questions.
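The step from measures to metrics described above could be sketched like this. This is only an illustrative sketch: the record layout and all of the values are invented, since the standard defines the measures and metrics but not any particular data format or tool.

```python
from collections import Counter

# Hypothetical transaction log: each record is (month, method, minutes_spent).
# The field names and values here are assumptions for illustration only.
transactions = [
    ("Jan", "email", 20), ("Jan", "in-person", 45),
    ("Feb", "email", 10), ("Feb", "phone", 30), ("Feb", "email", 25),
]

# Metric: total reference questions received per month.
per_month = Counter(month for month, _, _ in transactions)

# Metric: total reference questions received via each method.
per_method = Counter(method for _, method, _ in transactions)

# Metric: average number of minutes spent responding per question.
avg_minutes = sum(m for _, _, m in transactions) / len(transactions)

print(per_month["Feb"], per_method["email"], avg_minutes)  # -> 3 3 26.0
```

The same tallies could of course come from a paper sheet or a spreadsheet; the point is that each metric is a simple derivation (a sum, a count, or a ratio) over the basic measure.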
So we just wanted to provide the list of all of the domains and show the basic and advanced measures for each of the domains. One thing you'll notice here is that every domain has, as we said before, one basic measure, and then the number of advanced measures certainly varies by domain, from one for reading room visits to many for the collection use domain, which I had to stretch onto a second page there, you see. And you'll also notice again that the instruction and events domains, while they're counting a lot of the same sort of information, have been separated out. And then finally, the exhibitions and online interaction domains and their measures. So just to wrap up again: we really appreciate all of the comments that you can give us today or through next Friday. And these comments are open to everyone. You don't have to be an RBMS member or an SAA member to make the comments by email or on the websites. To make your comments, you can go to the RBMS website, which uses a plugin, Digress.it, to allow you to make your comments inline in the paragraphs. On the SAA website, you're basically going to comment as you would on a blog post: you go to the bottom of the page and you have to log in. If you don't have an SAA account, you'll be prompted to create one. So again, it's open to anyone. And then of course, you can always just email your comments to Christian and me, and we welcome those. A quick word about the future and some of what is not in this document. At this point, we cannot guarantee the creation of a statistical survey instrument to collect your institutional statistics. We have been in contact with a couple of vendors to ensure they're aware of this project, but that isn't something that we can guarantee we'll be able to release. And unfortunately, a shared national data repository will not be rolled out at the same time as the standard's approval.
As has been noted, the document identifies a number of areas where specific types of repositories may wish to come together and add further definition to a measure. For example, repositories may wish to define specific user classifications based on whether they are a public library or an academic institution, a corporate archives, et cetera. This document does not attempt to provide guidance on conducting qualitative assessments of user impacts, which are beyond our scope. And then finally, these measures were not created to stand in for budgetary inputs and outputs. They could be used to support a cost-benefit analysis of service operations, but they are not set up to automatically do so. So thank you to everyone for joining us this morning, and we're happy to take more questions. If there are no questions, I can certainly show you the commenting options on the SAA and RBMS websites. Thank you, Amy, Christian, and Emily. Yes, we do have one question that just came in here at the end. Let's see if we can get to this question. We're always looking to translate user demand and use stats to inform additional processing and digitization decisions. Can you talk about how these measures will address this need? This is Christian, just to read it back and make sure I understand it. It's correlating, essentially, use of materials and then how that's prioritizing, perhaps, for digitization. Would you mind reading the question just one more time? Sure, no problem, sorry. We're always looking to translate user demand and use stats to inform additional processing and digitization decisions. Can you talk about how these measures will address this need? So, translating user demand and use into processing and other decisions. To really decide what needs to be digitized, you not only need to count how many things are being used but also keep track of what is being used.
So this is another aspect that's useful to bring out here: in defining a statistical standard, our focus with those basic measures is on simply counting. And what information you capture is going to depend a lot on the local method you use to do that counting. So again, our definition comes back to the collection unit. You're taking a box of archival material, you're issuing it to a reader in the reading room, and the reader comes back to you and says, I would like to have material digitized from this collection, or, it would be great if you would digitize the whole box, sort of thing. Now, if you're just keeping your statistics on a tally sheet of how many boxes were requested, or you're counting call slips to say, at the end of the month we had so many call slips for boxes of archival materials or books or what have you, you wouldn't be able to correlate and say, oh, it was that collection and that box that people have been repeatedly asking to be digitized. If you're using maybe an automated system where you're tracking call slips and that sort of thing, then you might have access to that information. It would say, oh yes, we can flag things that people request for digitization, or from which they have already requested individual items, and say we really ought to go back and digitize that whole box of material because people keep requesting things from it. So I hope that's helpful. I'm happy to take a follow-up from the same person who was asking the question, if that gets at what we're able to accomplish with the statistical standard, and the dependency upon the local method you're using to capture statistics and how rich that data is. Mm-hmm, makes sense to me. Yeah, if you need any more clarification, let us know. We have another question. Someone says they're unclear about the user association measure. Is this asking for just affiliated users, or affiliated and unaffiliated users? So, the user association measure, yeah.
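The point about data capture richness can be shown with a small sketch. If a collection identifier is recorded on each call slip, ranking collections by use becomes trivial; a bare tally of box counts cannot do this. The collection names below are invented for illustration.

```python
from collections import Counter

# Hypothetical call-slip data: one entry per checkout, identified by
# collection. Capturing the collection identifier at checkout time is
# what makes it possible to spot heavily used candidates for
# digitization, which a plain count of checkouts cannot reveal.
call_slips = [
    "MS-001 Smith Papers", "MS-002 Jones Papers", "MS-001 Smith Papers",
    "MS-001 Smith Papers", "MS-003 City Records",
]

checkouts_by_collection = Counter(call_slips)

# Most heavily used collections first, as digitization candidates.
for collection, count in checkouts_by_collection.most_common():
    print(collection, count)
```

With a pencil-and-paper tally, the total (five checkouts) survives, but the fact that three of them came from the same collection is lost, which is exactly the dependency on local method described above.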
Let me pull that up, there we go. Again, okay. So for our basic measure, this one was really hard. We spent a lot of time talking about what we want to count for user demographics, because we're trying to address so many different types of institutions. It just happens that all three of us on the phone today have worked with academic libraries, special collections units within academic libraries. We have other people on our task force who are corporate archivists, who work in public library settings, what have you. They're just different settings, okay? And we're trying to address all of them, historical societies included. So one of the criteria that came to us as a very basic measure was that, like in the case of an academic institution, there are people who are affiliated with your institution: they're students, they're faculty members, okay? And it's interesting for us to think about, in terms of our mission, how much of our effort, how many of our users, are coming to us from our campus community versus those who are coming to us from around the region, around the world. Same thing with a historical society: how many people coming in are members of our institution, who are friends of the local one, the Longfellow House here, okay? And how are they using the repository versus other people who are not members of an organization, you see? So that's what we came up with as a basic measure, and we think it's an interesting one for all institutions to take into account. That's why we recommend it as a basic measure. So it's the association with the institution that we're capturing there. Whereas if you go to the next page, it's the affiliation of the user with other organizations. It may be your own, but it may be other organizations, okay? So you have researchers coming: how many of them are professional researchers, PhD and graduate students, faculty researchers, versus members of the general public, versus a K through 12 student?
Those are the kinds of affiliations that we're suggesting would be useful as an advanced measure to capture in terms of user demographics. Anyone else want to add something to that? That's the basic distinction there, and the rationale for our basic measure. Okay, cool, thank you. All right, and now we have another question that's come in. If anybody does have any questions, just a reminder, use the question section of your GoToWebinar interface to type them in. I did notice it just hit 11 a.m. Central Time here on my clock. We'll go as long as it takes to get through all the questions you may have. We don't get cut off from this system or anything, so feel free to stick around and ask your questions as long as these guys are willing to stick around to answer a few. We might only go, at the longest, to maybe quarter after; we don't want to hold people up too much. But don't worry about that; if you do have questions, type them in and we will grab them for you. So we have a new one here. Question regarding counting reference transactions. Currently, different curators may get the same question from a patron, and in our current system, they're counted separately. If multiple people work on the same question, sometimes unknowingly, where would the total time spent go, or would it be kept separately for each curator? Yeah, this actually came up at Midwinter too. I think maybe a lot of us have problems with people submitting questions to multiple folks on staff, and it is a little bit tricky. I think we made a note that we would come back to this question because, again, as Christian said earlier, there's a local practice element here that we can't make good recommendations around, but certainly this seems to be a pretty widespread problem.
In terms of the time, though, this may depend on your tool, but the time spent responding to a question can be recorded from multiple staff members in one tool, and that time is the time, regardless of how many people have contributed their minutes or hours to it. Yeah, so I mean, this is where, actually, some repositories that responded to our initial survey of practices found out that they weren't tracking these sorts of things at all. And it sounds like at your institution, where you are asking the question from, you are tracking this, and finding that as a result of tracking, you actually have a problem. I mean, you've really described a local problem. You're probably wasting effort, when I talk about operational efficiency. I think you're realizing, oh, wait a minute, maybe we need a better way of routing questions, you know, limiting the number of email addresses that are on a staff contact list, or making it very, very clear that you should use the reference form if you want to have your reference question answered the most efficiently, so you can avoid this sort of duplication of effort. But, yeah, local practice: if you're spending time answering it, I think as Emily said, yeah, count the time, but maybe you do want to change your practice as a result of what you're learning, and that's what assessment's all about, right? Yeah, so you want to make sure that you know that if it's the same topic, the same requester, that was one reference question no matter who worked on it, and that's something they have to internally figure out how to notice when the stats are being collected, yeah. Yeah. Okay, another question came in, it just popped up. All right, this is a long one, I'll just read it.
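The rule discussed here, that a question's time is the sum of everyone's contributions, could be sketched as follows. The question IDs, staff names, and minutes are all invented for illustration; a real tracking tool or spreadsheet would carry its own identifiers.

```python
from collections import defaultdict

# Hypothetical time log: (question_id, staff_member, minutes). Per the
# discussion, the same topic from the same requester is one reference
# question no matter how many staff contributed, so minutes are summed
# per question rather than kept separately for each curator.
time_log = [
    ("Q17", "curator_a", 15),
    ("Q17", "curator_b", 25),  # worked the same question, unknowingly
    ("Q18", "curator_a", 10),
]

minutes_per_question = defaultdict(int)
for question_id, _, minutes in time_log:
    minutes_per_question[question_id] += minutes

print(minutes_per_question["Q17"])  # -> 40
```

The local-practice element is deciding when two entries share a question ID in the first place; once that judgment is made consistently, the aggregation itself is mechanical.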
Regarding events, are you intending to include only events hosted and sponsored by the repository, or any events where the repository is involved? The definition seems to suggest the former, but there are examples about broadcasts and talks for community associations. What about joint events? For example, three repositories collaborate on an outreach event; it takes staff time and showcases collections, but it's officially hosted by the regional archives association. So what are your event definitions, I suppose, would be the specific question. Well, let's see, I guess I'll pick up here, Christian again. We tried to work with, we adapted, the ISO 2789 event definition: a pre-arranged activity with cultural, educational, social, political, scholarly, or other intent. Then there's also another definition, in the ANSI/NISO Z39.7 standard, about information services to groups: information contacts planned in advance in which a staff member, or a person invited by a staff member, provides information intended for a number of persons, and then it goes on from there. So in this case, we adapted our definition so that we are still speaking the same language as these other standards, and yet reflect the repository practice that we've observed. So we define an event as a pre-arranged activity with cultural, educational, social, political, scholarly, or other intent, such as tours, lectures, concerts, and other programs, and we do say organized or hosted by the repository. So if you are providing venue space for an event, even if you are not directly involved in it, that still counts as an event. It's interesting that the Nebraska Library Commission is hosting this webinar. Is that an event by our definition, because it's being hosted online? It is a pre-arranged activity. It has some educational intent to it. So I guess by that definition we could count it.
That would be a matter of local practice, whether you're including online events like that. Anyone else want to comment? Amy, Emily, I'm trying to remember, we've had some discussion about this, but I don't have that part of the document; I'm looking at the definition right now and not what we actually recommend in terms of guidelines for collection here. I'm going to skip back to that now. Yeah, it seems like the question had some element to it about the sort of collaboration, right? I think it could be for any sort of thing. Does it have to be only hosted or sponsored by your own repository, or supposing there are other ones involved, or you did a joint thing with some other ones? Yeah, because there are so many possibilities, I guess. Yeah. With reference to our definition, I would say that that is something that is organized by your repository, and it counts as your event. I mean, that also means that it counts as repository B's event and repository C's event. But if you're collaborating to do that, that makes sense within the bounds of the definition we've provided. What's interesting to me is maybe this idea of collaboration and of tracking that in some way, which has come up around instructional activity too, as a question of whether we want to be capturing that as maybe some kind of advanced measure, which I think is maybe an open question. Christian again here; as Emily's been speaking, I've been going back to the document a little bit. This is an area, so this is a useful comment, where we could probably clarify and make it clear, when we talk about events in the initial basic measure, that we do mean to include broadcast kinds of events. We do give this as a guideline for collection under the advanced measure for what we call event attendees, and there we say, for online events such as webcasts, count the number of viewers as event attendees. So we are recognizing that an event may be an online event.
So we can make that clear in our definitions earlier in that domain section, I think. That would be a good thing to do, let's see. But the point is, so long as your repository facilities or staff are in some way providing a resource, your resources are going toward hosting that event, even though you're not one of the speakers, that's an impact. This is where it's operational impacts we're talking about, the inputs and outputs again. Now, that's where our basic measures aren't going to tell you a whole lot. How many events did you have? Were they broadcasts, were they lectures, were they attended by a lot of people? That's where, to have a true kind of assessment, you need to take multiple measures and combine them into some type of story. And that's where this idea of time comes in; we have that with events too: how much staff time has gone into preparing events. So if your staff is simply hosting a webinar, you'd probably record relatively little staff time involvement in that broadcast, whereas if you're producing the whole show, that would show up there. So that's where you might have, in one case, repository A with five events and repository B with five events. But if you look at the time that each spent on them, repository A invested a lot of time because they produced three broadcasts and two lecture series that were held at their repository, whereas repository B simply opened its seminar room to local community groups to use for an event. It's still an event that happened at the repository, but you can see, when you start combining those measures, that you get a different story of both operational inputs and then the outputs that come from them. Okay, thank you. Anybody have any last-minute urgent questions? It's about ten after, so I think it probably is about time to wrap things up. Oh, a previous question about the standards.
Okay, somebody did just type something in; oh, they want more clarification about our previous question. Do the standards recommend checkout counts, but not the names of collections specifically? Correct. So that's one of the definitions we have, a checkout, which actually has a kind of funny definition to it: the act of recording the removal of a collection unit, again piggybacking on definitions, from its place of storage so that it may be issued to what we call a registered user in a reading room, or for some other purpose. So we do, yes, absolutely try to crisply define what a checkout is, and then, as again a statistical, quantitative standard, encourage people to count checkouts. We do note in our examples of application, yes, it's interesting if you can capture more, in whatever system you're recording those checkouts. If you have an online system where you're capturing the title along with the call number and everything else about the item as a checkout, great, then you can actually sort that report and group titles together and say, huh, that collection gets a lot of use, and that could help your digitization decisions. So that's where the data capture method, again, will help to give you data that you can do things with. But again, our baseline is, we want to come up with measures such that if you only have a sheet of paper and a pencil, you should still be able to track even some of the advanced measures that we're talking about. And this is Amy, so I'll just mention this. The basic measure, all checkouts: you see there that it includes registered users who are in the reading room, but it also relates to other user services. So if you're helping someone by email, you're doing a reproduction, an exhibition loan, or an ILL, those are all under all checkouts. And then with the more advanced measures, those other uses, potential uses, get broken out, so you can count those separately.
Yep, you can see that there, yeah. All right, I think we'll wrap it up now since it is now about twelve after. If you do have any other questions, of course, Amy, Christian, and Emily's information is there. They've provided their contact info with the slides, and they'll be looking for comments on the websites, too, at those links. Those will also be included when we give you the recording info. So please do reach out to them with any questions and anything else you want to know about it. February 17th, right? That's the deadline? That's what you're looking for? Yeah, next Friday. So you've still got a week and a half or so to get some input into this. Yeah. All right, any last-minute things you guys want to say before I do wrap it up? Just thank you to everyone for coming, and we look forward to your comments. All right, great. Thank you, Amy and Emily and Christian. I'm going to pull back to my screen now. As I said, yes, the show, it's waiting for it to come up there. There we go. We have recorded the show, are recording the show still, and it will be available on our website, our Encompass Live website here, which I'm showing you now. This is the specific entry for today. And you see here the links asking for the comments; all the hyperlinks here will be included when you get the recording information as well. This is our Encompass Live main website. Luckily, you can just Google Encompass Live, and so far we are the only thing called that, so if you search for us anywhere, you'll come up with it. On here are our upcoming shows, and then right here is where the archives will be, right beneath the upcoming topics, and yours will be listed right above here. This one had some documents; the recording will be available, and if you send me the presentation, Amy, I can put it up on here, and links to those sites for putting in your comments will be available.
And I'll email everyone who attended and registered after this is processed; probably later this afternoon it should be ready. I'm usually at the mercy of YouTube, but generally they do come through pretty quickly. So look for that email later today. So that will wrap it up for today's show. Hope you join us next week when our topic is the Tween and Teen Build Collective. This is from Lindsay Tomsu, who is the teen coordinator at the La Vista Public Library here in La Vista, Nebraska. She's got this build collective that she put together. She does a lot of great things with her teens and even the younger tweens, and she's going to come on the show next week to tell us about what they've been doing lately. So hope you'll sign up for that and any of our other upcoming shows you see; we've got our February ones and some of our March, and we're always adding new ones to the schedule. So do look for when the other March ones are finalized and posted. Also, Encompass Live is on Facebook. So if you are a big Facebook user, you can pop over to our Facebook page and give us a like there. I post items when recordings are ready, I post reminder messages, I post recordings on here. So anything related to the show we post on our page. So if you're big on Facebook and you want to track what we're doing from over there, go ahead and give us a like. Other than that, that does wrap it up for today's show. Thank you all very much for attending. Thank you to our speakers for coming in locally and remotely, and we'll see you next time on Encompass Live. Bye-bye.