So thank you for joining us today. I'm Cliff Lynch, and I'm the director of the Coalition for Networked Information; I'll be doing a brief introduction for the session. This session is one of the synchronous project briefings that are part of the CNI spring 2021 virtual member meeting. This week we have done a number of synchronous project briefings; we'll be doing one more at four o'clock Eastern tomorrow. Next week we will have some plenary presentations on Wednesday, Thursday, and Friday to close out the virtual meeting, and I hope you can join us for some or all of those as well. I do want to remind you that for this spring meeting we are making heavier use than in the past of pre-recorded presentations, which are available to participants on demand. I invite you to have a look at those; I think you'll find quite a rich assortment of topics covered and, hopefully, a number of things that will be of value to you. This session, like basically all the sessions at the CNI spring virtual meeting, is being recorded, and the recording will be publicly available after the conclusion of the meeting. A few mechanical things: there is a chat as part of the Zoom session. Please feel free to use that to comment, make introductions, or for any other purpose you want. We have a Q&A tool at the bottom of your screen, and after we hear from the presenters, Diane Goldenberg-Hart from CNI will moderate a Q&A session at the end of the presentations. Also, I would note that closed captioning is available; please avail yourself of that if it's helpful. That's all the mechanics I need to cover, so let me just briefly introduce our three presenters: Megan Oakleaf, Ken Varnum, and Shane Nackerud. The topic here is a really important one: how to bring the library, and the data that the library holds, into the broader conversations about campus analytics as they relate to student learning and student success.
This has been a complicated issue, where we've seen a certain discomfort, shall we say, between some of the values of privacy and anonymity that the library community has championed and the very noble intentions of the work on learning analytics and student success. There aren't totally simple answers here, and Megan and her colleagues have been really at the center of exploring many of these questions. Our associate director emerita has also been involved in some of these questions over the years and continues to be engaged in those discussions. With that as context, I just want to thank our presenters. I'm really delighted that you brought this work to CNI, and I'm sure you're going to get quite a few questions from the participants in this session. Thank you again, and let me thank everybody for joining us. At this point I will be quiet, disappear, and turn it over to Megan to lead off the presentation. Thank you, Cliff, I appreciate that very much. And you're right, this territory is complicated; I guess that's part of what makes it fun. Thank you for joining our presentation on adding a library profile to Caliper: bringing the library into the campus learning analytics conversation. I'm really thrilled to be talking about this project, since we haven't gotten to do that much over the last year, and I'm pleased to be here with my colleagues Ken and Shane, who are brilliant. I love working with them, so I'm really happy that we can all be here today. I want to acknowledge that this project was funded by IMLS, and we would not have been able to do it without them. IMLS has been funding quite a bit of research in this library learning analytics space over the last several years, including the Data Doubles project, Prioritizing Privacy, the Library Learning Analytics Project at Michigan, LIILA (the precursor to the grant we're talking about today), and the project we're talking about today, CLLASS.
So we're very grateful to have this work enabled, to think about how libraries can and/or should get engaged in the campus learning analytics conversation. I get to start off today, but the overview covers all of our topics: I'm first going to give you a project overview for CLLASS, then Ken's going to follow with an introduction to Caliper and the Caliper library profile, and Shane's going to wrap us up with what's next for the Caliper library profile. Then we're really eager to get your questions and comments and have a nice discussion. So, I want to begin by talking about the CLLASS project from the perspective of some of the reasons why we wanted to get involved, and to set some definitions so that we can all have a clearer conversation. One of the things I want to begin with is a good definition of learning analytics, and this is complicated because the term learning analytics gets used very differently in libraries than it does in the rest of higher education. In libraries, I frequently have colleagues using the term learning analytics to refer to assessment or correlation studies, but learning analytics in the larger scheme of higher education is a slightly different thing, and not being clear about definitions muddies the waters and makes it harder to talk about. So I wanted to provide a definition, so that we all know what we're talking about today in this presentation.
So I went with the higher education usage of the term: the use of institution-level systems that collect individual-level student learning data, centralize it in a warehouse or learner record store, and serve as a unified source for research seeking to understand and support student learning and success. We're talking about a learner record store at the institutional (or perhaps consortial) level, and individual-level data used in that way, not the broader library assessment or one-off correlation studies that are episodic in nature. Learning analytics is used to help educators discover, diagnose, and predict what's going wrong, or understand what's going right, with learning and learner success, and then to do something about it. What's wrong, what's not going well? What is going well that we want to keep doing? And what can we do about it? This is really essential, particularly for students who might not already be aware of the unwritten and hard-to-decipher rules and practices in higher education. Figuring out what's going on, and then doing something about it: "intervention" sounds kind of experimental, but it's really just doing something with information, which is something librarians know a lot about. Interventions can include macro-level changes, or individual-level occurrences.
At the macro level, and there's a tremendous amount of power in this, although it sometimes gets overlooked in conversations, learning analytics can help us uncover systematic and structural problems that students and learners are encountering, and help us learn how to change those things, through changes in practice, processes, and policies, to broadly improve learner experiences and get rid of obstacles that have been hindering students, obstacles that maybe we weren't aware of or only understood superficially. That is, making big changes based on new learning from understanding trends in the data. It can also be used to facilitate individual-level interventions that are important on a smaller scale, but that, extrapolated over multiple students, can make an enormous difference to each individual student. That might be providing learners with insights into their own learning behaviors by giving them back their own data; notifying students or their educational support partners (librarians, advisors, tutorial center folks, faculty) of events, patterns, or milestones they might be unaware of; or prompting or referring students, encouraging them to gain assistance from the services that are available, or otherwise linking students to support. In many ways, the interventions are things that a good and caring educator would do if they knew to do them. And because so many of our institutions operate at such enormous scales, lots of things can fall through the cracks. Huge hurdles go unrecognized; with learning analytics we might be able to find them and change them. Individuals don't get connected with the supports they need when they need them, sometimes because of scale, and this can help us fix that problem as well. Those are the kinds of interventions we're talking about coming out of learning analytics. So why might libraries want to be involved?
We know that very few libraries are involved in this work as we've defined it, with the higher education definition, where they collect individual-level student library use data and contribute it to a centralized learner record store; most libraries are not doing that. There are consequences of that that we need to consider. One of the dangers is that we continue to design for average. Most of our current assessment approaches rely on self-report data, and in some cases aggregated data, not individually identified data. We aggregate it, we pull it all together into a large lump, and we design the best we can for that lump of information we get from our assessments. Sometimes that results in designing for average. We also, as I said, tend to rely on assessment approaches that depend on self-reported data. Oftentimes the samples vary, the response rate is low, or the sample might be representative of the institution overall but doesn't give sufficient insight into what is happening for certain pockets of students. And speaking of groups of students: with less power in the information we're using, we might also miss intersections. We talk a lot about dividing students by race, gender, or different types of health status, for example, but looking at the intersections of these identities can get lost, or be difficult or impossible to do; we end up with really low numbers that we feel we can't report out because we don't want to identify anyone, and so we miss those intersections. What is going on with Latina female physics students in the second semester of their sophomore year? Is there a problem there? Are they pulling out of that particular science track and going somewhere else? We might miss things that are focused on an intersectional identity in students' academic lives.
So I think we need to recognize that while any harm we've done by designing for average is definitely inadvertent and not purposeful, it can still exist. For equity purposes, we need to think about how we understand the student experience, how we understand what helps students be successful, and what might hinder them from being successful. Oh my goodness, that was just the first bullet. What other consequences might there be? If libraries don't participate in institutional learning analytics, libraries go uncounted. We can't be included in those holistic, institutional pictures of student learning and success and how we contribute, and that necessarily degrades the quality of the overall campus data picture. If the data is incomplete, if it's got a hole in it, then the decisions made based on that information are more problematic than they would be with a complete picture, so we need to think about the consequences of going uncounted. We can also be challenged to answer thoroughly questions about the ways in which libraries help, or inadvertently hinder, student learning and success, if we don't have the longitudinal and detailed data that we might like to have. The last two are also really important: we might lose our roles as influential, essential partners at the table if we don't pull a seat up to the table. We might be invited, and then not so much lose our seat as forget to sit down, and not be there when important conversations are going on. They say decisions are made by those who show up, and we want to show up and be part of the decision making. And finally, we may miss opportunities to infuse library values of privacy and confidentiality. Those are far from being only library values; lots of our partners on campus care about those values, but they might not see them the same way we do.
If we participate, we can share our perspectives on those value- and ethics-related areas. So, that was a little bit of what underpinned the project; now I want to talk about the project itself. Who was involved? For this project, I was just so pleased to work with an amazing team. You can see there were lots of different organizations involved: the University of Michigan, the University of Minnesota, and Lewis and Clark Community College from a library perspective; the consortium Unizin; OCLC, which was a strong partner; and IMS Global, which creates educational technology standards. So lots of folks were involved; the three of us are here today, but everyone had a piece of what we did. Some of our intended project outcomes: with the LIILA project we had started forming partnerships and collaborations, which led to that slide you just saw, but we wanted to cement those connections and really make sure those partnerships were fully formed and articulated and would last into the future. We wanted to design proofs of concept that would serve as models for future projects connecting library data with institutional learning analytics. We knew from the outset we wanted to develop library data profiles for Caliper, to enable the technology part of conveying library data to institutional or other record stores. And finally, we wanted to recommend ways in which those prototypes, examples, and models could enable the use of library data to help students.
Now, we didn't start from scratch; a lot of what we began with came from the LIILA project. I'm not sure how many of you are familiar with LIILA. LIILA is the Library Integration in Institutional Learning Analytics project, another IMLS-funded project, which enabled a series of three meetings over the course of a year that brought so many big brains to the table to talk about what learning analytics could do, might do, and shouldn't do in terms of library involvement. One of the main takeaways from that first project was a prioritized set of almost 100 user stories expressing what librarians, students, faculty, academic advisors, institutional leaders, and institutional researchers might want to get out of library integration into learning analytics. User stories are generally written in the form: as some stakeholder, I want to be able to do this thing, in order to achieve some outcome or meet some need. The LIILA paper includes all of those pages of user stories, and as part of the LIILA project we prioritized them, and those prioritized user stories fed into what we focused on in CLLASS. So the foundation came from LIILA, and that work was already set when we got into the CLLASS work. There were three phases to the CLLASS project. We had two in-person meetings, back in the days when we could have in-person meetings; OCLC hosted them in Dublin, Ohio. In meeting one, we all came together to talk about our plan, finalize our partnerships, and draft our plans: a lot of whiteboarding. In meeting two, we finished the specifications, or we thought we did. We had them pretty close, but when it came time to really formalize them, get them approved through the process, and have them ready for the white paper at the end, we spent another good many hours meeting virtually throughout 2020 to get the work done. So there were at least three phases to the project, although that last one was pretty long.
Okay, I'm going to hand it over to Ken; he's going to talk about the Caliper library profile. Thanks, Megan. So, I'm Ken Varnum, University of Michigan, and now that we've had that very eloquent and high-level explanation of the need for learning analytics, I'm going to risk giving you all whiplash and bring you quickly down to about the five-foot level, from maybe the 5,000-foot level where we were, and get into what the heck we're talking about with the Caliper library profile. For those of you who may not be familiar with Caliper in general, it is a set of specifications managed by IMS Global to help structure data about various kinds of learning interactions within an academic institution. On this screen, on the right, you can see all of the different profiles that have been defined so far, of which the library profile may be the newest, but it's certainly one of many that have gone before it. Each profile is designed to structure data about some kind of transaction in a campus environment: things students might do in a learning management system like Canvas or Blackboard, interacting with a grading system, watching videos, all sorts of things like that. Libraries are, you know, delightfully idiosyncratic, but apparently everybody else is too, hence all these different profiles. We determined very quickly that there was not a great way to shoehorn the kinds of interactions we anticipated wanting to know about in libraries, based on the user stories, into an existing profile. Hence, we had to design our own. Next slide, please. These profiles are defined as triples. At their core, it's just a stream of data, a giant text file that essentially says someone did something: an actor performed an action on an object.
That is probably not all that exciting in some cases, but it provides a framework that can be built on. In a context where a particular library or a particular learning researcher wants to find out more information about any one of these items, each of the three parts of the triple can be expanded on and detailed to whatever degree is desired and useful for the particular research being undertaken. If a picture is worth a thousand words, clearly that didn't say it all, so if we go to the next slide, I can give you a few more words. As examples of library-type actions, you could have something very generic: a student used a library. That's very nice; it helps you count how many students might have used the library, and if you're thinking of the physical space, that might be a gate count, basically how many walked in. A student accessed an article: also something useful, and more on the counting side. A student attended a reference consultation. These are all very simple, straightforward statements of things that happened. The profile goes into much more detail about the vocabularies allowed within the standard for the actors, the actions, and the objects. They're all controlled, but they can also be decorated: you can add as much or as little detail as you like. So, next slide, please. The learning record stores that receive these Caliper events can ingest them and put them into some data structure for future analysis; they can be built to essentially take what they receive, so they can become very complex. It could be three columns in a simple table, actor, action, object, but you can also add everything else, and you can quickly get much more detailed if that suits your research needs, your privacy policies, and what you want to do.
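The simple three-column ingestion described here can be sketched with an in-memory SQLite table; this is an assumption about how a minimal learning record store might be structured, not how any real Caliper store is implemented.

```python
import sqlite3

# A minimal, illustrative "learning record store": one table holding the
# bare actor/action/object triple, plus a JSON column for any extra detail.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        actor  TEXT,
        action TEXT,
        object TEXT,
        detail TEXT  -- optional decorations, stored as JSON
    )
""")
conn.execute(
    "INSERT INTO events VALUES (?, ?, ?, ?)",
    ("Person", "Used", "LibrarySpace", '{"building": "Main Library"}'),
)
rows = conn.execute("SELECT actor, action, object FROM events").fetchall()
print(rows)  # [('Person', 'Used', 'LibrarySpace')]
```

A real store would normalize and index far more than this, but the point stands: the bare triple is trivially storable, and everything else is optional enrichment.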
So you could say: student accesses digital resource. You could also say: student with ID number something accesses digital resource with a DOI of something. You could get a lot more detailed if that was the data you wanted to present: student with this ID and this name, registered in the third semester of a five-year program, and so on. You could put as much data into that as made sense for you and for your institution. The possibilities are nearly endless. So, next slide. This is the framework for Caliper. It's very flexible and very much able to be tuned to your institution and your library, based on the learning research outcomes you want to investigate. When we started thinking about how we wanted to structure this profile, it was very clear to us that there wasn't going to be just one kind of event, because libraries do lots of different things. We started out thinking we would have a single library event that you could expand on and make incredibly complicated and beautiful and decorated, and it very quickly came to be that we were writing Moby-Dick every time we sent a single event, when we really wanted to write The Old Man and the Sea: something much more succinct, much more terse, and much more useful. So, after the kinds of long discussions that only a bunch of librarians and computer programmers can have about what things mean, we ended up settling on three very discrete types of events. We started with the library use event, which is really focused on the physical space of a library, or a physical space being used in some context for the library. Then library resource use, which is all about the stuff the library provides people, whether that's physical books, digital resources, or anything else that is a resource the library in some way mediates access to.
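The actor-action-object pattern with optional "decoration" described above can be sketched as a small helper; the field names and values here are illustrative stand-ins, not the profile's exact controlled vocabulary.

```python
import json

def make_event(actor, action, obj, **decorations):
    """Build a minimal actor-action-object event, with optional extra detail."""
    event = {"actor": actor, "action": action, "object": obj}
    # Any additional key/value pairs "decorate" the bare triple.
    event.update(decorations)
    return event

# Bare triple: a person used a library space (e.g., a gate count).
sparse = make_event("Person", "Used", "LibrarySpace")

# Decorated: the same shape, with more detail attached to each part.
# The ID and DOI are made-up examples.
rich = make_event(
    {"type": "Person", "id": "student-12345"},
    "Accessed",
    {"type": "DigitalResource", "doi": "10.1000/example"},
)

print(json.dumps(sparse))
```

The design choice the presenters describe, controlled core vocabulary plus open-ended decoration, is what lets the same event shape serve both a simple gate count and a detailed research dataset.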
And then library participation, which is a human-to-human interaction in a library context. It could be physical, in person (remember when we wrote this, you could do that kind of thing all the time); it could be virtual; it could be an email exchange for an ask-a-librarian reference question, or training sessions, or any kind of thing at all. So we narrowed things down to those three broad use cases, and I want to walk quickly through the next three slides covering the kinds of things we contemplated needing for each of these. For the library use event, we only had one action: used. We were torn between something very precise, something that would let us do really quick research or quick analysis later on, and something generic enough that we didn't need to do that. It doesn't necessarily fit every single case perfectly, but as a concept, "a person used a space" is pretty good. The kinds of objects we talked about were libraries and library spaces. A library was intended to be whatever made sense at a particular institution as the biggest umbrella concept of the library: it could be an entire building, which might contain multiple sub-libraries and collections and meeting spaces and named spaces within the bigger building, all the kinds of things we love to do with our spaces. And a library space is that lower level, and you could make things nested, so that you could be in a library space within a bigger library space within a library building, because again, that reflects the reality of many of our spaces. Next is library resource use, on the next slide. It is similarly simple: the action, the verb, we decided was just access. We had spent too long debating whether you accessed a book when you pulled it off the shelf.
After probably too many hours, we ended up with "access" as the catch-all verb for all the ways an individual could take advantage of the information contained in something the library provided. We tried to keep it again relatively simple, and broke the world down into physical resources and digital resources, each of which could come with all sorts of optional decorations and optional identifiers, and then a handful that might be most generally useful. These are not requirements, but they are there so that when our library and Minnesota's, for example, might compare checkout histories in some way, we might have a hope of having similar underlying data and be able to do closer comparisons. If these don't make sense for a particular institution, you don't have to use them; you can use your own. If you have a shelf-listing number that makes sense to you, that would be fantastic to include. And then finally, we had the library participation event. Again, we wanted to keep the verbs as few in number as possible. "Attend" doesn't necessarily fit every case: do you attend a reference interaction or a reference session with a single subject specialist? Maybe that's not the way we think about it, but it catches everything together into one verb to use. The object of that, again keeping it generic, is an activity. Much as with the digital and physical resources, we suggested some common vocabulary, but these are by no means prescriptive or exhaustive; they are meant to help you, as a library generating these events out of your existing data streams or existing tracking mechanisms, make things more consistent across institutions. Oh, and before I get into an example of what one looks like on the next slide, there's one more piece: the actor. You're right to advance, Megan, sorry.
The subject of the sentence, the actor: again, we wanted this to be incredibly flexible, recognizing that what makes sense to describe the person doing the action falls under a whole range of policies, sometimes legal rules, and processes across institutions, both at the library level and at the institution level. So we were perhaps the least prescriptive when describing what an actor is. At the very least, you have to declare whether it was a person or a machine doing the interaction; you might have automated processes that get recorded, and those probably should not be attributed to a person, and vice versa. But if it makes sense for you, you can break that out: instead of using a generic person, you could say undergraduate, or first-year student, or a user login, whatever makes sense. And it's not one-size-fits-all even for a particular library, so you may want to specify a generic person in one case and a class year in another case in your own data; that's all fine. And like anything else, you can attach more and more information to this if it suits your needs. All right, so what does all this look like when all is said and done? On the next slide is a fairly simple, straightforward event packet describing a single discrete event, as it could be sent from a library to a learning record store. The first six lines or so are basically metadata about the event: they say that it follows the Caliper spec, name the version and the profile extension, and carry the universal identifiers and the type of profile it fits under. Then it tells you there's an actor, and it's just a person, the highest-level person type that any Caliper-compliant system uses to say "this is a person"; an action, used; and an object, which is a library space. So this would be great for, you know, your annual ARL stats: how many people entered this building.
There you go: you could use this to collect and count that information. On the next slide, we have a slightly more detailed sample packet. It looks very similar; it's still a person, and we're not saying anything more about who this individual was or what their context is, but we're giving more detail about what they accessed: a library resource, with the title and a particular ID. It also adds more information about where the person was when they accessed it: they happened to be off campus, which we might have determined through their use of a proxy server, an IP range, or the authentication method they used. In this sample, we knew in some way, because the link was in a Canvas or Blackboard site, that they accessed this in the context of a reading list for a particular, specific course. So this gives you a little bit more detail about the use, less about the person, more about the item, but you get the idea, and you can extend almost anything here, almost infinitely, to give you the data you might want to have. And that is a really quick introduction to the Caliper library profile, and I will now risk giving you whiplash again by elevating you back up, and turn it over to Shane. Yeah, thanks, Ken. My name is Shane Nackerud, and I'm with the University of Minnesota. I'm going to talk to you about what's next for the Caliper library profile and how it might be used in the future. So go ahead and next slide, thanks, Megan. Right now the Caliper library profile is in what's called a public candidate final draft, so you can actually see the profile in all its glory if you follow the link; I think it's been shared in the chat, but it will also be on the last slide. It's public and it's ready for implementation, so you're more than welcome to use it yourself and test it out. In fact, we need implementations, because the approval process for the Caliper spec requires that.
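As a rough sketch of what such an event packet might look like: the top-level keys below (@context, type, actor, action, object) follow general Caliper JSON-LD conventions, but the specific identifiers, the timestamp, and the "LibrarySpace" object type are illustrative assumptions, not copied from the published profile.

```python
import json

# Illustrative event packet in the spirit of a Caliper JSON-LD payload.
# Key names follow general Caliper conventions; the IRIs, IDs, and the
# "LibrarySpace" type are placeholders for illustration only.
event = {
    "@context": "http://purl.imsglobal.org/ctx/caliper/v1p1",
    "id": "urn:uuid:00000000-0000-0000-0000-000000000000",
    "type": "Event",
    "actor": {"id": "urn:anonymous:person", "type": "Person"},
    "action": "Used",
    "object": {
        "id": "https://library.example.edu/spaces/main-reading-room",
        "type": "LibrarySpace",
    },
    "eventTime": "2021-03-25T14:00:00.000Z",
}

payload = json.dumps(event, indent=2)
print(payload)
```

Note how sparse the actor is here: just an anonymous person, exactly the "gate count" level of detail described above. A richer packet would decorate the object with a title, DOI, and course context instead.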
For the CLLASS grant, we considered vendors and existing tools that libraries have used for assessment purposes that we might want to have emit Caliper output for learning analytics purposes. In an earlier slide, Megan highlighted that OCLC was a part of the CLLASS grant. So we worked with OCLC through the CLLASS grant to consider how EZproxy could be used to output Caliper-formatted library data based on EZproxy logs. A number of institutions, including the University of Minnesota, and I think Michigan too, have used EZproxy data in the past for projects that correlate library use to student success measures, and I think we all believe it's very likely that EZproxy data could also be used for learning analytics purposes in the future. Next slide, Megan. So, again, we worked with OCLC to think about and model how to standardize processes and workflows, and to create an early version of a new tool that could be used to create Caliper output based on EZproxy logs: Caliper output in the form of this new Caliper library profile. You can see in this slide there are a number of steps. We take the EZproxy logs and feed them into this new tool, and the second part of the process is what we call the enrichment process, where the data from the EZproxy log might be enriched. As you might know, EZproxy logs typically contain just a URL and maybe a user ID, that kind of thing, but that URL can be enriched. Especially if it has a PubMed ID or a DOI, we can glean what journal was being accessed and what database might have been used, and try to get more information that might be useful, again, for learning analytics processes and projects.
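As a sketch of that parse-and-enrich step: EZproxy log formats are locally configurable, so the line format below is an assumption for illustration, and the DOI regex is a simplified stand-in for the kind of metadata lookup the enrichment process would do.

```python
import re

# Assume a simplified EZproxy-style log line: "user [timestamp] url status".
# Real EZproxy log formats are configurable, so this parser is only a sketch.
LOG_LINE = "user123 [12/Apr/2021:10:02:33] https://doi.org/10.1000/xyz123 200"

def parse_log_line(line):
    """Split one log line into its basic fields."""
    user, timestamp, url, status = line.split()
    return {"user": user, "timestamp": timestamp.strip("[]"), "url": url}

def enrich(record):
    """Pull out a DOI, if present, so a downstream step could look up
    journal/database metadata (the 'enrichment' described above)."""
    match = re.search(r"10\.\d{4,9}/\S+", record["url"])
    if match:
        record["doi"] = match.group(0)
    return record

record = enrich(parse_log_line(LOG_LINE))
print(record["doi"])  # 10.1000/xyz123
```

In a fuller pipeline, the extracted DOI or PubMed ID would then be resolved against a metadata service to identify the journal or database, before the record is turned into a Caliper event.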
The next stage of the process would be to make a privacy decision. It was important for us to take advantage of how flexible the Caliper library profile is around privacy, and to build into this tool the idea that information can be very specific or can be de-specified. So you might want to be specific about the user ID, but very general or broad about what was actually used: maybe user X used a database, rather than the specific database, or user Y accessed just "the library." Or you might want to go the opposite way, and be more general about the user but specific about what was actually accessed. The library privacy filter that this tool will have will make that possible. And then it will output the Caliper events, based again on the Caliper library profile. That Caliper output can then be put into a learning record store, or some other kind of analytics ecosystem, as it says there. Like I said, we're excited to work with OCLC on this. I believe OCLC is committed to building this, but we are working on a follow-up now which could accelerate that process, so stay tuned for more information about that. Next slide. I'm going to talk about a much more specific example now, something we're grappling with at the University of Minnesota. Next slide. There's a tool out there called My Learning Analytics, or MyLA, and the University of Minnesota right now is piloting MyLA. It's a Michigan-designed tool, and it's made available through Unizin; we're all part of the Unizin consortium, and I believe six member schools right now are piloting this tool. It's a learning analytics tool, a student-facing set of visualizations based on Canvas data that is placed into the Unizin learning record store. The first visualization is for assignment planning. The second is for grade distribution, showing students where they fall in the course. And the final visualization is called resources accessed.
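The "privacy decision" step, choosing where to be specific and where to be general, could be sketched like this; the event shape and the helper function are hypothetical, not part of the tool described in the talk.

```python
def privacy_filter(event, keep_user_detail=True, keep_object_detail=False):
    """Generalize either the actor or the object before emitting an event.
    This mirrors the 'de-specification' idea: be specific on one side of
    the triple, general on the other (or general on both)."""
    filtered = dict(event)
    if not keep_user_detail:
        # Drop the identifier; keep only "a person did this."
        filtered["actor"] = {"type": "Person"}
    if not keep_object_detail:
        # Keep only the broad category, not the specific resource.
        filtered["object"] = {"type": event["object"].get("type", "Resource")}
    return filtered

# A hypothetical fully specific event.
event = {
    "actor": {"type": "Person", "id": "student-42"},
    "action": "Accessed",
    "object": {"type": "DigitalResource", "id": "db:psycinfo"},
}

# Specific user, generic object: "user X used a database."
print(privacy_filter(event))
# Generic user, specific object: "someone accessed this database."
print(privacy_filter(event, keep_user_detail=False, keep_object_detail=True))
```

Making this a filter applied at emit time, rather than in the record store, matches the talk's point: the library decides how much to disclose before the data ever leaves its systems.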
And this is where libraries come into play, or could come into play. With the Resources Accessed dashboard, students can see which files are popular in the course and which files are accessed most often by classmates, and the idea is that they can look at this dashboard to get a sense of a reading or file they may have missed. As you can see, files must be uploaded to Canvas. Again, these are student-facing dashboards: faculty can't see this, advisors can't see this. This is literally giving students their own data back to them in unique ways that hopefully help them, and, as you can see from the last bullet, Michigan has done some evaluation and found it has had some positive results for student success. Next slide, please. The difficulty, though, is that at the University of Minnesota we use Leganto to create course reading lists that are then integrated into Canvas; on the right-hand side you can see how Leganto integrates. The MyLA Resources Accessed visualization, however, does not include readings from these lists, and a lot of courses use Leganto to build reading lists. Through the piloting process we had to recommend that only courses that didn't use Leganto participate in the pilot, because otherwise the information would be incomplete: if a course uses Leganto, that information is not in the Resources Accessed visualization. The ramification is that if MyLA is successful in the pilot and we decide to implement it across the university, faculty may need to decide whether they want to use MyLA or Leganto, and that's not a good choice, as far as I'm concerned. So, next slide, please. The Caliper library profile could help with this.
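To make that concrete, an event recording a student opening a reading-list item might look roughly like this. The actor/action/object shape follows general Caliper Analytics conventions; the specific types, vocabulary, and identifiers shown here are illustrative assumptions, not the published library profile.

```python
import json

# A sketch of a Caliper-style event for a reading-list item access.
# All identifiers and the "Week 3" item are hypothetical examples.
event = {
    "type": "ToolUseEvent",
    "actor": {"id": "urn:example:student:42", "type": "Person"},
    "action": "Used",
    "object": {
        "id": "https://leganto.example.edu/lists/BIO101/item/7",
        "type": "Document",
        "name": "Week 3 required reading",
    },
    "eventTime": "2021-03-23T15:04:05Z",
}

# Serialized like this, the event could be posted to a learning record
# store alongside the Canvas events MyLA already consumes.
print(json.dumps(event, indent=2))
```

Because the event carries a generic actor/action/object structure, a record store can combine it with LMS events without knowing anything library-specific, which is what makes the "more complete picture" described above possible.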
We could configure Leganto to create Caliper output for inclusion in MyLA via the Unizin learning record store, where it can then be combined with the other learning data that currently populates MyLA. Obviously, as Megan has already mentioned, including this would create a more complete picture of student engagement for these courses, and it would mean faculty don't have to make that choice between the two tools. The Caliper library profile definitely makes this integration possible. And I just want to say that this isn't going to be the last time we have to make a decision like this. I think these types of choices are going to become more and more prevalent for academic libraries as learning analytics tools become widespread on our campuses, and the Caliper library profile is a good step in the right direction, giving us the option of participating if we so choose and if it fits our privacy policies and practices. If we so choose, we can use the Caliper library profile to participate and be at that table. Next slide, please. This slide is just a number of links: more information about the CLASS grant, the white paper that came out of it, a link to the library use profile that Ken was talking about, and a link to LIILA, the Library Integration in Institutional Learning Analytics project, for you to read over. So that's our presentation. Next slide. There's our contact information if you have any questions or comments, and I think we've got plenty of time now to take questions, if you have them. Terrific. Thank you. Thank you, Shane, Megan, and Ken, for that interesting overview of these new specifications and how you envision them being used going forward. The door is now open for questions, so we invite our attendees to share any questions you might have in the Q&A box.
If you'd like to ask a question live, just raise your hand and I will be happy to unmute you. Thanks also to Andrew Pace for sharing helpful links along the way, and Ken, I see, has added a link to the recent white paper that was released in December, I believe. We'll give folks a few minutes to think about any questions or comments they may have. While we're waiting for some of those questions to come in: do you foresee any limitations to the specifications you have developed? Is there anything you think might need to be expanded? It sounds like you've built in a tremendous amount of flexibility, but do you envision any? I can take a first stab at that question. I think one of the challenges, and something we spent a lot of time debating, is what level of granularity is the right one for the main events. We ended up with three to describe the entire universe of things a library offers a campus. Is that too many? Probably not. Is it too few? It might be. We settled on the "too few" side, figuring it's easier to split something later than to have an event that a couple of people used and most people didn't, leaving that data orphaned. We hope that, to the extent that many libraries start generating events and sharing them within their campus, or even across campuses through consortia or otherwise, you would be able to drill down through variations on an event, as opposed to having to somehow merge different events that had different frameworks. I suspect we will learn all sorts of things as we pilot this, and that's why we pilot things, exactly. If you start using this and you run into things that don't seem to make sense, or where the resources or activities you're trying to describe aren't describable, please let us know. Very helpful, thanks. Thanks, Ken.
All right, it looks like we do have some questions and comments, first from Claire Stewart: thanks for the presentation and the update. She may have missed this, but are the privacy filtering and enrichment tools things that already exist at Minnesota, or are they being built via a separate project, etc.? The privacy filtering for EZproxy does not exist already; the tool, the enhancement that's going to be created, doesn't exist yet. But we have been doing some of this work at the University of Minnesota, and a lot of what we discussed with OCLC was based on what the University of Minnesota has been doing with EZproxy data. We wanted to standardize it, because we get a lot of questions from a lot of libraries asking how to do the same thing, and we worked with OCLC to try to make it so that any library that wanted to could do that with EZproxy. Okay, thank you. Thanks, Claire, for the question, and thanks for addressing that, Shane. Nancy also comments on privacy: from a librarian's point of view the privacy issues seem daunting. We have never associated usage with specific patrons; mining EZproxy and so on seems to go against what libraries are comfortable with, if it's down to an individual level. I'll start with this one. Thanks, Nancy; that's obviously an incredibly important question and a tricky space in doing all of this, and I would also invite anyone listening to look at the white paper, because we tried to describe some of the thinking around the privacy issue more eloquently there than I'm probably about to do right now.
Well, it's nice to have something you can write down and get exactly the way you want it to come out, but I'll also take a crack at it. I think these conversations around privacy are exactly the ones we should be having in our field. Things are changing rapidly in terms of what kinds of technologies exist and how much data they keep, in both homegrown and vendor systems, and so we're being forced, willingly or not, to have hard conversations that maybe we didn't have to have in the past. What I really want folks to talk about, think about, and come to their own conclusions about is how we balance competing ethics and values. Privacy is a deeply held value for anyone in libraries; I don't know anybody who doesn't care about it. But I do think we sometimes have to disentangle privacy from its family of related concepts, complete anonymity and confidentiality, because those are different things, and we tend to equate privacy with complete anonymity. Privacy also has an aspect of confidentiality: confidentiality is what we do with data to keep it private. That's another way of thinking about privacy, as protecting the data we have, not as not having any. So that's one bucket of the conversation. Another bucket is how we balance our ethics and values around privacy with our ethics and values around equitable access, and in our case, in academic libraries, around our learners and our faculty. We are just as duty-bound to do something with the data we have as we are to keep it private.
And when we look at current realities in higher ed, we know there are retention problems, and we know there are learning gaps that shouldn't have to exist. So whether you're thinking about the big success metrics, or about learning in a particular course and gaining what you're supposed to get out of a college education, we have obligations there too: not only to support students in achieving what they're at the institution to achieve, but also to think about the ethical implications if we could help and we don't. We don't want the equity issues. If we know we could be providing better services, more targeted services, resources, or facilities, or if we know we could dismantle harmful structures if only we knew about them or had better detail on them, and we don't, then we uphold structures that we can't really be proud of as a profession. So we have to figure out how to balance those two things, which are often posed as being in battle against each other, when in fact we just need to find where our line is: what is truly private, which we would never, ever want to capture in any way, versus what is part of the learning endeavor, where we can help students. That's what the reading list example we showed was about. Readings from a particular course are not necessarily personal investigation, as you might see with some other sorts of data. So there are all these layers and things to consider; my favorite word for this territory is "nuanced." Nothing is clear-cut; everything is complicated. So that's bucket number two: how do we balance our values around privacy with our values around service, equity, and access?
And then there's another bucket that's important for this conversation: the Caliper library profile doesn't prescribe any level of personally identifiable information. That is completely left up to the institution's, or actually the library's, choice, in consultation with what kinds of research questions they need answered. I am definitely, personally, not a fan of going on fishing expeditions in data. I think you should have research questions driving your concerns: what are the big problems, how are you going to solve them, what data do you need to solve them, how do you protect it, what governance policies are in place, who has access, how do you make sure that access is protected, and how do you go back and check? You might set something up and think it's going right, but if there aren't human eyes on it, you might be inadvertently doing something wrong with the processes, so continual reflection and evaluation of what you're doing with the data is part of it too. But the Caliper library profile doesn't prescribe that you keep that data; it just enables you to count "person used thing." Or, if you have an important research question that needs answering so you can serve your students equitably and well, it provides the possibility, but you'd still have to go through all of that thought process. And I think there was another bucket, but I can't remember it; there's a hole in my bucket. Thanks. Thanks for that comment, Nancy, and Megan, thank you so much for unpacking all of the nuances there; there are trade-offs too. So, yeah, really interesting. Jimmy Ghaphery has a question: thanks for the presentation; appreciated the attention to privacy. Do you envision this shaping future approaches to library services that would involve opt-in or opt-out of tracking for students? I can try, I guess. So, the Caliper library profile will capture use events.
And if there were going to be an opt-in process or not, that would probably be separate from what the Caliper library profile itself provides for. If you built an opt-in and somebody didn't opt in, their data would not be captured and not be rendered in the Caliper library profile. And on the flip side, if there was an opt-out, data could possibly be removed from the learning record store after it's input. So, yes, I suppose thinking could go on around the Caliper library profile, but I see those two things as being separate. Do you have anything to add? No, that lines up with my answer. I mean, at the risk of saying something that could be taken out of context, I'll run with it: one of the risks is that if we're trying to understand global usage of a system, letting people out, not having it be global, really harms our ability to find out where we're not doing well, or where we're doing particularly well. The Caliper spec doesn't take a stance on what gets included; a library or an institution is most certainly empowered and invited to figure out what makes sense. Maybe there is some consensus across all academic libraries, or all libraries, about what the right level is. If it's opt-in only, which is certainly very valid and I understand that approach very well, it probably means we will not have data we can actually do anything with, at least not at any scale; and if that is our decision, and it reflects a firmly held library belief, that is a decision. If it's opt-out, we will have some challenges in trying to understand who is not in the data, which exacerbates, to an extent, the same conditions we have now, where we have the data but we don't know anything about anybody, so everything trends toward the average. There's no single answer here.
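As a rough illustration of the opt-out side of what Shane and Ken just described, scrubbing opted-out users from a batch of events before they reach (or after they leave) a learning record store could be as simple as the sketch below. The function and field names are hypothetical.

```python
def apply_opt_outs(events, opted_out):
    """Drop events belonging to users who have opted out of tracking.

    events:    a list of event dicts, each possibly carrying a "user" field.
    opted_out: a set of user IDs who have opted out.
    """
    return [e for e in events if e.get("user") not in opted_out]

events = [
    {"user": "a", "resource": "database"},
    {"user": "b", "resource": "database"},
]
# With user "b" opted out, only user "a"'s event survives.
print(apply_opt_outs(events, {"b"}))
```

The point Ken makes still stands even with a filter like this in place: the resulting dataset no longer describes global usage, so any conclusions drawn from it have to account for who is missing.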
I think it's something you have to figure out at your own institution, with your own staff in the library and on the campus, within the ecosystem you're part of, to work out what makes sense. And bringing student voices into these decisions is also really, really important. At my own institution, at least, that's something we have done only on the margins, at best, and we are starting to think about it very seriously, realizing that we were making decisions, or not making decisions, whichever way we went, without even asking the people whose data we're actually talking about. I would hope many of us are coming to that realization, even if it's as late as it is in my own case. Thank you. Thank you for your question, Jimmy. And now we have another question; this is probably about the last one we'll have time for, so I'll read it now, from Eric: really interesting work, thank you. How have other learning analytics areas handled student privacy, and how does FERPA factor in? I'll answer generally, and then Shane and Ken will say something smarter. If we're talking about across campus, I think most of the work on student privacy is in protecting the data that exists: through policies, practices, governance, the hiring of chief privacy officers, and actually using all of the technological and process protections that are available. I don't know that FERPA figures in as much, because this is information the institution has and needs to have in order to operate.
And as for an opt-out question when you're talking about learning management system data: in order to participate on campus, students oftentimes, or really all the time, have to engage with a learning management system, so that's just part of the process, and on most campuses it's something students probably accept in the fine print without reading. So we do need to do some work on how we educate students to understand how their data is being used. I applaud libraries in particular that are making sure their privacy policies reflect what is actually happening, not necessarily what we wish were happening, and doing that work of being honest and auditing what kind of data we are actually keeping. At the institutional level, there are so many layers of technology protection, but also governance, policies, processes, and who has access; all of that is getting laid out, and if there are holes, I hope they are getting addressed, now that so much of student activity is technology-enabled. Where some campuses haven't updated those policies adequately, I hope they're looking at them more thoroughly now, since students are engaging almost entirely through technology. Shane, do you have anything to add to that? Real quick, I guess: my impression has been that, outside of libraries, the conversation has been more focused on data security, ethical uses of data, and transparency about how the data is being used. Privacy, of course, is important, but those three areas seem to be taking precedence; institutions are committed to maintaining student privacy, but they're also committed to using the data to help students succeed, and to using it in ethical ways. That seems to be where the conversation is. I would just build on that a little bit.
In my library, anyway, there was a fairly healthy and, I think, mostly inconclusive debate about what counts as a record, as defined by FERPA, that applies to the library. There's the letter of FERPA and there's the spirit of FERPA, and a lot of library data, I think, certainly falls under the spirit and may not fall under the letter. Now, I don't want to say that just because FERPA, the law, doesn't say it, we shouldn't be paying attention to it. But I think there is a large gray area that falls under the ethics, the institutional needs, the institutional desires, and the motives that may push a particular point one way or the other in a given context. So it is very much not cut and dried. Great, thank you; great way to wrap that up. Thanks for the question, Eric, and thanks to the three of you for addressing it. We are just a bit past the hour, so thanks for hanging in there with us, everyone. I think Megan and Ken are able to stick around a little longer after I turn off the recording, if anybody wants to hang out and chat with them; I believe Shane has another commitment. With that, I'll close the session. Thanks to our wonderful presenters, thank you to our attendees, and I hope we'll see you back at CNI for our last live project briefing tomorrow. Thanks so much, everyone. Bye-bye.