Hello and welcome to my CNI project briefing for spring 2023. My name is Jason Clark. I'm the lead for research optimization, analytics, and data services at Montana State University Library. You can see my slide. I'm going to talk a bit about artificial intelligence and data science and how we start to introduce those emerging technologies to our organizations. The title of this presentation is "You Auto-Complete Me: Navigating Human-Machine Relationships for Responsible and Sustainable AI and Data Science Implementations." A word about the outline I'll follow as I move forward: I'll talk a little bit about research motivation and where we have opportunities to create dialogues around artificial intelligence and knowledge work. In particular, I'm going to point at a number of artificial intelligence and data science prototypes that we use locally to facilitate some of these dialogues. I'll look at organizational learning opportunities and competence, that is, how do you build comfort and familiarity with AI and data science. And then I'll note various challenges, research implications, and potential next steps for parts of this work. So, in terms of motivation: I'm on a team with Sarah Mannheimer, who is our data librarian and the project director for "Responsible AI: Tools for Values-Driven AI in Libraries and Archives," an IMLS (Institute of Museum and Library Services) project that was awarded in the summer of this year. So this is early stages with that particular project. Sarah has always stressed that as we think about artificial intelligence in GLAM institutions, in libraries and archives, we're thinking about how we support the responsible use of this new technology, focusing on empowering practitioners, finding their role, and finding how we might think through ethical implementations of this technology.
Parts of this idea really begin with my involvement on the grant with Sarah, but there's another component I've been seeing and acknowledging: a general excitement, because I think there's real potential within a lot of these technologies, but also a little bit of anxiety, a general sense of "what does this mean for us, do we have a role here?" You'll see comments like this, and the F word, fear, is the one that I key on a little bit, just to try to understand it. There is a role, but there is a bigger question here for a number of us. This quote is from the book Archives, Access and Artificial Intelligence, which was released, I believe, in 2022. So, in our setting, I've been wanting to connect the grant work and the research to practical implementations in the library, and to create some discussions around artificial intelligence and how it might apply to knowledge work. One of the first things we looked to do was connect our staff to the possibilities around this work, and part of that was facilitating dialogues. These dialogues are open invites across the organization, and they are framed with a particular demo of AI or data science plus a primary concept. Because this is an evolving stream of research, and the technology is evolving quickly, we focus on a series of discussions, something we can come back to over the course of the year. Here is an example of one of the first invites, so you can see how we set expectations. We were pretty clear when we introduced this concept, saying it's okay if you just want to come and watch; there will be some of us who are ready to talk through what we're seeing, and we do need a leader or facilitator, which is usually me.
In this case we had a primary concept of large language models and the various generative computing models being seen in things like ChatGPT, that product from OpenAI that's been in the news a bit lately. You can see the tone, and these slides will be available. The first part of this is creating a comfortable place to have a discussion, to ask questions, to be puzzled by this technology. The dialogues are centered on trust, truth-telling (I had a different word there that I didn't use), and an interest in shared learning: a general sense that these are open, pop in, but we want you to be interested in learning more. The other component of these discussions is the framing I mentioned: keeping it open and bringing in all levels of the organization. One of the other things we do in these sessions is come up with a prototype or a concept, or a concept demonstrated by an existing prototype; it can be either. In the past we've used things like an image classifier to work through inclusive metadata discussions, like what's lost when the model isn't informed or supervised. Other examples: how do you talk to a machine, so voice interfaces; or how much do you automate, say a chat agent that helps with web archiving. And I've presented with Leila Sturman on general text summarization and natural language processing we've done to create accessible citizen science. All of those prototypes exist locally, but if something is out of scope or something we haven't really done locally, we might turn outside. As we frame these sessions, along with learning, openness, fun, trusting your questions, and being able to be puzzled, these are the concepts we tend to use with the demos.
There's a sense of play, and dialogue is involved: we're going to ask questions. Sometimes we're going to be constructive, and there's that truth-telling concept, so sometimes we're going to be really critical of what we see. Moreover, at the end we want to be able to explain what we're seeing, and the local implementations help with that, because it's been somebody like me or a lead who's done the work, so we can talk about the models we used or why we made a particular interface choice. All of that filters into the prototypes we bring into these dialogues. I'm going to demo just one; there are a few that I mentioned. It's usually a form of knowledge work that we have in place, and it could be something like creating metadata or summarizing text. We have a couple of instruction and reference librarians who actually do systematic-review kinds of work on articles: surveying the literature, but also summarizing and annotating it. Or it could be something like a web archiving experience, which is what I'm going to show, but it's usually grounded in some form of library or archival work, just so we have local context to understand the technology. In this case, it's an interface that inquires of you, that brings you through the different stages of archiving: taking a screenshot and adding metadata to a particular item for the purpose of web archiving records. You can see we use this sense of play, bringing a ghost into the exchange, and you can see there are clear prompts for how you work: what are you doing, what do you want us to help archive, can I have a title?
I'll take a URL, so it's stepping through this, but the whole point of it is that it's conversational. You're talking to the machine, so this is human-machine interaction, really building on that idea of a conversational agent. You want to add tags? You can do that. How about a description? Once you're all done, it moves through and says, hey, this is what I got, does this look okay? And if you hit yes, you get: oh, great, I saved your metadata, I took a screenshot, I took the files. So now you have a record of this particular item. What these prototypes allow you to do is humanize the technology, which I think is really important, not only to get a sense of how this works locally or why it would work inside a particular workflow in your organization, but because it moves it from abstraction to something practical. You can provide an experience. People can talk, or see, or passively watch something on a video, but if you give them a tool that they can poke away at and work through, that's an experience they take in and start to internalize. In these prototypes, because there is a level of local implementation or control, we can ask questions: of ourselves, of the developer, of the technology. Why is it important? How does it work? That is what makes the prototype essential to the dialogue and to habituating the experience of this new technology. The other component of this is that it brings about organizational learning, and what we focus on here is not only where we can start to understand the technology, but how we can make it transparent.
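As a rough illustration of the flow that archiving agent walks through, here is a minimal sketch of the dialogue loop. The names, structure, and data fields here are my own invention for explanation, not the actual prototype's code; I'm assuming only the steps described above (title, URL, tags, description, then a confirmation before saving).

```python
from dataclasses import dataclass, field

@dataclass
class ArchiveRecord:
    """Metadata collected during the archiving conversation (hypothetical schema)."""
    title: str = ""
    url: str = ""
    tags: list = field(default_factory=list)
    description: str = ""

# The prompts the agent steps through, in conversational order.
PROMPTS = [
    ("title", "What do you want us to help archive? Can I have a title?"),
    ("url", "I'll take a URL."),
    ("tags", "Want to add tags? (comma-separated)"),
    ("description", "How about a description?"),
]

def run_dialogue(answers):
    """Walk through each prompt and fill in the record.
    `answers` maps field name -> user reply, standing in for live input()."""
    record = ArchiveRecord()
    for field_name, prompt in PROMPTS:
        reply = answers[field_name]  # in a real agent: input(prompt)
        if field_name == "tags":
            reply = [t.strip() for t in reply.split(",")]
        setattr(record, field_name, reply)
    # Confirmation step: "this is what I got, does this look okay?"
    return record

record = run_dialogue({
    "title": "MSU Library homepage",
    "url": "https://www.lib.montana.edu/",
    "tags": "library, homepage",
    "description": "Snapshot of the library's landing page.",
})
# On a "yes" confirmation, the real prototype would also save a screenshot
# and the page files alongside this metadata record.
```

The design point the prototype makes is that each turn is a small, explainable step, which is part of what humanizes the technology in the dialogue sessions.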
This goes back to Sarah Mannheimer, our project director on the responsible AI grant. Her focus has always been on implementation, but also the ethics of implementation: why you're doing what you do, how you do it, and how you explain it to a public so they can understand it and start to trust it. All of these play into organizational learning and into this dialogue around humans working with automation, machines, AI, these newer computing models. One of the things we do in these sessions is really try to unpack what roles we might have in AI. Not everybody is a software developer or wants to work with code, so there are lots of ways into that question. One of the ways we think deeply about our work is: what kind of digital literacy can we promote, and how can we lead and build out understanding of the primary concepts of AI? This really connects to our instructional role in the university. So one thing we'll talk about is what kinds of literacy questions we can bring to AI. It can be the bigger-picture concepts behind an AI implementation, like generative computing, or the models themselves, or even the datasets that feed the models, and then trying to understand them, maybe even interpreting the models at times. All of that is a form of literacy we engage with within these dialogues, empowering the staff and other members of our organization to think through our roles.
Facilitation is another role we talk about in terms of AI: not only understanding the new tools, where they come from, and how they get applied, but how you start to work with them. Those of you of a certain age, and this will show mine, will remember there was a time when we had to prompt search engines, or log in to have access to a particular database, and you had to know not only the credentials to get in but how to talk to the system. We're in another one of those moments with AI, where we talk about prompt engineering: how do you not only understand the model, but talk to it so you can get new information, or the information you want, from that model? How do you turn it from thought into useful production? So facilitation is something we talk about. And then also supervision: in general, what are the ethics of this particular tool, what kinds of data are involved, how are these models working, so quality control on the data or the models. And then bigger-picture questions about how we would implement this and what happens if we do. So: it's literacy, it's facilitation, it's supervision. All of those roles are available to us. I talked at the beginning about the challenges of this work, and one is that seeing yourself in an automated process is not always immediately clear. It can also create what I talked about earlier; I said the F word, and what I meant by that was fear. There are also opportunities that come forward in these discussions. You'll see general sentiments like what I have on the screen. I think we said fear before, but there's also some anxiety about this, alongside the excitement of oh my gosh, I can't believe these things.
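To make the prompt engineering idea concrete, here's a small, hypothetical sketch (my own illustration, not anything from the grant work) of how a structured prompt might be assembled before it's handed to a model. The labeled sections, role, task, audience, and constraint, are the knobs a facilitator learns to adjust.

```python
def build_prompt(role, task, source_text, audience, max_sentences=3):
    """Assemble a structured prompt string.
    Each labeled section is a knob the prompt engineer tunes (hypothetical format)."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Constraint: respond in at most {max_sentences} sentences.\n"
        f"Text:\n{source_text}"
    )

prompt = build_prompt(
    role="a research librarian",
    task="summarize the article below",
    source_text="(article text would go here)",
    audience="first-year undergraduates",
)
# The assembled string is what gets sent to whichever model you use;
# iterating on these sections is the day-to-day work of prompt engineering.
```

The point is that "talking to the system" is learnable and inspectable, much like the search-engine syntax and database logins of an earlier era.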
This new technology: I can ask it a question, it gives me an answer, it writes an essay. That's particularly when we're talking about ChatGPT, which is on the screen, but there are other examples, like general natural language processing and text summarization, and you saw the example of the chatbot as well. But there's what I would call some level of existential anxiety: how do I fit in now, what does this mean for me? I wanted to have this discussion; we need to be real with each other and trust each other. So there is a component of the dialogues that's: what is this, how does this work, why would this work for me, how does this help our organization, what does it mean for me? It's okay to ask that question. The challenge comes back to: is obsolescence on the horizon, do I need to learn new stuff? The way we try to frame that is: how can I contribute, and what does this mean for our organization? What this does is create trust. If you're real about what you're seeing and how it might change a role or require new skills, that starts to open the door to: okay, I can see where we go from here. Without it, you can still have the dialogues, but it's really more like show and tell, which is not the goal of these sessions. Along with that challenge there's a bigger opportunity, and what we start to see in these sessions is not only a sense that this could lead somewhere, a sort of optimism building out, but also a way to create new internal relationships. I mentioned the instruction and reference librarians who have been doing reviews for faculty, or summarizing, or doing annotated bibliographies and that kind of work for grant teams or other entities on campus.
That service is really well received within a particular grant team, but because it relies on an intensive form of research production by single readers, the idea that we could offer it at a broader scale to the university or other audiences is just not really in the works. So this is one of the ways we're able to talk about where current services are and what's valued: how could you connect a current valued service to a broader, scalable model? Parts of this dialogue have opened up discussions around: what if we could help build automated annotated bibliographies with some of this work? It's a moment of taking local summarization and natural language processing and finding a way it might help others in the organization; the dialogues create this. It also builds a bit of empowerment, especially if you frame it with the roles that remain as you move to use these technologies, things like the facilitation and supervision I mentioned. And I think the other part of this is just allowing others to conceive of what AI projects might be: tasks that might be supplemented, or extensions to current work, which is where I was going with the new internal partnership that came up in one of these discussions. In terms of research implications, the Responsible AI group really is moving to survey the field and understand it. We're building case studies first with various libraries; we've put out a call for proposals to understand how AI has been implemented in libraries and in GLAM institutions generally: galleries, libraries, archives, and museums.
So it begins with case studies, and where that's going is studying those case studies and coming up with a harms analysis tool, a framework, for looking at implementations of AI and moving them forward ethically and responsibly. One of the implications of that research is understanding that GLAM institutions have a role to lead questions, even in work on generative computing. Moreover, there's understanding how or why AI has a role in knowledge work; that's something we're already seeing with the local expression of this grant research, building discussion around why this is a compelling technology, or even, in some cases, why it doesn't make sense. And finally, there's creating new research and projects based on the trust and learning in the dialogue sessions. As an example, one of the things that came out of our first session was this sense of: it'd be pretty cool if we had a chat agent that did some of our work; can we start to stand up a prototype? So in addition to the prototypes I mentioned, this is a newer one that hasn't really been released yet, but where we're going with it is generally an agent that could not only summarize work, but maybe even generate it. One of the pain points we heard most recently, as we've been working with a number of grant facilitators on campus, is that they had wondered about ways we might generate even just starter template language for an NIH grant. One of the things an assistant like this could do, if we taught it enough and showed it some grant writing, is start to provide a template for grant applications, or the beginning narrative of a grant proposal, something like that. This one in particular is also building out to do some of that summarization I mentioned the instruction and reference librarians were starting to do.
Even as we sit there, you can give it a prompt, and eventually, if you say, here's the article, can you summarize it in two or three sentences for a particular audience, we can see it doing that work. But even when it appears that the work is done, there's so much work in teaching the model, making sure you've got the prompt correct, and understanding how it's going to interpret what it's doing. There's still higher-level intellectual work; it just changes in this mode. The energy is not really spent on generating the text; it's spent editing, auditing, making sure that how the model is conceiving of your question is correct. That's where the work is in this moment. But it's encouraging to see how these dialogues create not only new ideas, but also people wanting to engage. Even since we had our last session, and I'll close with one thought here, people have been experimenting with and having fun with some of this work. Some staff create closing reports, small narratives for end-of-night emails or just updates, and they've been running those through a particular model, whether it's ChatGPT or some of the others like Cohere AI; there are a number of companies, and they are very popular right now.
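For a sense of what "summarize it in two or three sentences" involves under the hood, here is a deliberately naive extractive summarizer, my own stand-in illustration rather than the grant team's NLP pipeline, which presumably uses language models. It scores each sentence by the average frequency of its words across the whole text and keeps the top scorers in their original order.

```python
import re
from collections import Counter

def summarize(text, n_sentences=3):
    """Naive extractive summary: rank sentences by average word frequency
    across the full text, keep the top n in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return ' '.join(s for s in sentences if s in top)

text = ("Libraries archive the web. Libraries also teach literacy. "
        "The weather is nice today. Libraries support research and "
        "libraries support learning. Cats sleep a lot.")
summary = summarize(text, n_sentences=2)
```

Even with a toy like this, the human work the talk describes is visible: auditing whether the scoring (the model's "conception of your question") actually surfaced the right sentences for the intended audience.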
What's happening is they're actually using these to give some personality to their email reports on what happened in the day or the night. So: can you write this as Jeffrey Lebowski from The Big Lebowski? They give the prompt, watch what comes out, and then share it, just to show how the group is starting to engage, sometimes critically, sometimes having fun, but through all of it learning how to prompt the system and how to move with and use these technologies. So, in closing, and thinking about what this means, I would start with the title at the top of the slide: establishing our role in AI and knowledge work. The things I'm saying on the slide are that we are part of the AI and research and learning process, and also that we're demonstrating how to apply it to university research and institutional data questions. You can see me starting to think that way as I talked about the grant-writing template proposal: is there a way to generate text like that which could give researchers a lead, so they're not looking at a blank screen and a flashing cursor? So I will leave you with that. These slides will be available; I'll put some references on there, and you can look at those. Also know that you can contact me anytime. We are just getting started with the grant work, and you can see how that grant work is filtering into organizational work. I'm always happy to talk about this, so please reach out if you have any questions. Thank you.