Okay, good morning. Buenos días. Good morning and welcome to one of the first sessions this morning. As many of you will probably agree, over the last year we have witnessed a fundamental change in the art of writing. And yes, that includes academic writing. Today we are going to share examples from our institutions of what we've done to proactively help students and faculty navigate an increasingly AI-powered environment. We will start with Jolene from Carnegie Mellon University, who will talk to us about Kineas, an AI tool that is fully embedded in the writing process. We will continue with Ben from the University of Maryland, who will talk about their collaborative work to create a campus module available to everyone on their campus. Then Leo from the University of New Mexico will share the findings of an AI exploration project that they did with a group of library employees. And finally, I will talk about Scite, another AI tool that was built specifically for academic writing and research. We plan to have a few minutes at the end for questions. And with that, we'll start with Jolene.

Thank you for that introduction. I'm already fired up for this session. Hi everyone, my name is Jolene Paspa, and I'm the Director of Library Services at Carnegie Mellon University Libraries, and I'm going to talk about Kineas. First, a little bit of background about CMU, with a nice winter scene with Hunt Library in the background there. I just wanted to emphasize that CMU is frequently cited as the birthplace of AI, and for the purposes of this presentation, that means our landscape is pretty open and receptive to implementing and experimenting with new technologies. So that brings us to our problem space, where we were about a year ago. Much like many libraries, we were asking ourselves some questions.
So how could we increase pathways for our users to discover and access library resources? I've read the studies about how researchers don't tend to start their research at library web pages, which isn't shocking. So what opportunities could we find to meet our users where they're doing that research? We also wanted to think about ways we could educate researchers and users to form better search queries: would there be ways for them to distill their research questions into topical searches for improved search results? Which leads us to Kineas, an AI-powered recommender tool designed to assist in the discovery of relevant research articles; it does that by analyzing any text that you put into it. It comes in a few different flavors. There's a web-based version of the tool, but there are also plugins that integrate with Microsoft Word and Google Docs, and that goes back to what I was saying before about meeting users where they are. If you're reading a paper in Microsoft Word that you think is really fantastic, you can see recommendations for other articles right in the tool while you're using Word. So that's something we really like: the flexibility and portability of the tool. Here's a screenshot of what Kineas looks like. The left-hand side is the Microsoft Word version of the plugin, and you can see Kineas on the right side there, the little menu that pops up with different journal article recommendations. There's also the topical search, or topical recommendations, in the middle, and then on the right-hand side you can see how Kineas, at the article level, links back to the library link resolver, and some of the topics it suggests from the article-level metadata.
And because Kineas utilizes AI technology, we did end up asking ourselves some questions and thinking about things differently than we might have for the average library database or subscription tool. Spoiler alert: Kineas passed with flying colors for us and checked all the boxes. But you may be wondering how it arrives at its recommendations. It uses OpenAlex for its article recommendations, which, if you're not familiar, is sort of like the open alternative to Google Scholar, brought to you by the great folks responsible for Unsub and Unpaywall. We really liked that it wasn't tied to a particular content provider, so we didn't have to question whether they would be motivated to direct traffic back to a platform or a publication. There was a bit of neutrality there. Kineas was a very young company when we first learned about its existence, and we were actually the first institution in the United States to implement the tool. And I think thinking about the product roadmap is really important. This landscape is changing constantly, as I'm sure many of you are aware, so at the time we wanted to scrutinize that a little more closely. But it's also an ongoing conversation, something we need to revisit as time goes on to make sure that the product and the company still align with what we consider important in making our decisions. In my final slide I just wanted to talk a little bit about how we are looking at the impact of the tool and its uptake on campus. It hasn't been that much time yet, so we are still asking ourselves the best way to go about assessing the tool. Some of these are maybe more traditional metrics that you might be used to, but we are also thinking about things like: is user retention important in this environment? I don't think that's something we talk about a lot with our other library tools.
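As a rough illustration of the kind of pipeline described above (a hypothetical sketch, not Kineas's actual implementation), a recommender can distill input text into a topical query and build a search against the public OpenAlex works API; the stopword list and keyword cap here are stand-ins for real keyphrase extraction:

```python
from urllib.parse import urlencode
import re

# Minimal stopword list for the sketch; a production tool would use
# proper keyphrase extraction or text embeddings instead.
STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "are",
             "for", "on", "how", "do", "does"}

def distill_keywords(text, limit=8):
    """Reduce free text to a short topical query string."""
    words = re.findall(r"[a-z]+", text.lower())
    seen, out = set(), []
    for w in words:
        # Keep original order, drop stopwords and duplicates.
        if w not in STOPWORDS and w not in seen:
            seen.add(w)
            out.append(w)
    return " ".join(out[:limit])

def openalex_query_url(text, per_page=5):
    """Build a search URL for the public OpenAlex works endpoint."""
    params = {"search": distill_keywords(text), "per-page": per_page}
    return "https://api.openalex.org/works?" + urlencode(params)

url = openalex_query_url("How do large language models affect academic writing?")
print(url)
```

Fetching that URL returns JSON whose `results` entries carry titles, DOIs, and topic metadata, which is the kind of article-level information a plugin could surface next to a document.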
But I can talk about that graph in more detail if you have questions, because I admit it's not very easy to read. We were encouraged to see that our total number of unique users started to tick up at the beginning of the fall semester, and traffic to our link resolver also went up in the fall semester. So we are happy with its adoption on campus thus far. That's about all I have time for, but I'm happy to answer questions.

Good morning everyone. Thanks so much, Jolene. I'm going to be talking about the AI and Information Literacy Canvas module that we created at the University of Maryland, College Park. I'll start with a bit of context about my institution and the impetus behind this module. The University of Maryland, College Park is the flagship institution of the University System of Maryland, an R1 state school with a sizable student population. I work in the Teaching and Learning Services unit at the University of Maryland, which primarily focuses on first-year education across disciplines. Our library instruction includes a focus on source evaluation activities and other information literacy concepts. After the release of ChatGPT and the explosion of interest in AI that followed, there was a lot of uncertainty around AI among faculty and librarians at Maryland, as there was across the country. There were concerns around plagiarism, concerns about what this meant for curricula and education as a whole, and specifically what the effect would be on our campus. In addition to this uncertainty, we saw information literacy gaps around AI illustrated as the spring semester went on. Like many other institutions, UMD Libraries received a large influx of requests for books and articles that did not exist: the result of students and faculty using generative AI to create a bibliography, not realizing it was possible for AI to generate non-existent material. So how would we address these issues for a large user population?
In partnership with the Teaching and Learning Transformation Center, which is our on-campus pedagogy and teacher training center, we decided to create a module in Canvas, our LMS, that could be integrated into any course across campus via Canvas Commons. We also created a parallel LibGuide so any student could access these resources even if their instructor chose not to include the module in their course space. So what would this module entail? Through internal conversations we came up with a few goals. First, obviously, we wanted to address our previously mentioned information literacy gaps. We were seeing that our users were automatically trusting AI output without fact-checking, and this was showing up in student assignments as well as in our ILL requests. We also wanted to move away from the plagiarism issue that had dominated the conversation so far and reframe it towards helping students become informed users of new technology. Rather than being scrutinized and under suspicion from instructors, we hoped to invite students into a more open learning space to learn about a technology that will become an increasingly critical part of the information systems they navigate every day. In a similar vein, we wanted the project to reach a broad variety of students without alienating them, including both AI users and non-AI users; we hoped to create a resource that neither group would reflexively dismiss. We wanted it to be easy to integrate and widely applicable across disciplines, and we wanted to focus on practical skills for students, not bound to any particular AI tool. We wanted to make it as evergreen as we could, so we focused on information literacy skills that can be applied to multiple AI tools even as the field rapidly changes. Applying these principles, we had to decide what key skills and information to include in the module.
We had conversations with faculty members and students, compiled frequently asked questions from users, and experimented extensively with generative AI tools, using them in ways that align with how we've seen first-years do research. We were able to use existing resources for online source evaluation and concepts like lateral reading, modified for an AI context, and we decided to limit the scope to university research to ensure the content was as relevant as possible for our undergraduate users. With all of that in mind, the final module ended up having four sections. The first section is a dive into how AI tools work on a basic level. We have a video explaining the mechanics of generative AI and giving some examples of AI tools. We also have another video going into some of the conversations being had around bias, labor, and privacy in AI. We include examples of these issues and screenshots of some of our conversations with ChatGPT and Bing AI. We thought it was important to include these aspects of AI, since they are also fundamental parts of how these tools work. In the next section we talk about assessing content for accuracy. This is where we introduce many of the concrete skills that I discussed earlier. We have a roundup of some errors made by text-based AI, and we demonstrate some lateral reading exercises, with videos showing a full fact check of a response given by ChatGPT and a quiz giving students a chance to practice the skills on their own. We also have a section on citing correctly, going over some of the MLA, APA, and Chicago styles that have started to be set for AI-generated content. And finally, we have a page about exploring further resources, including a roundup of AI tools, the DALL·E prompt book, and some other suggestions for ways to experiment with integrating AI tools into your workflow. We're happy to have received a positive response both on campus and from other institutions.
Our next step is to do a more in-depth survey of the module and how it is being used. We'll be updating the module periodically, and our next goal is to talk about strategies for analyzing AI content that you yourself haven't generated. I want to give a quick special thanks to my co-authors, Mona Thompson and Daria Yaco, as well as the Institute for Trustworthy AI in Law and Society, who created the videos I mentioned earlier. If you want to explore the module yourself, you can go to bit.ly.com, and if you're interested in integrating this into your institution or one of your courses, please contact me at bshell1 at umd.edu. Thank you so much, and I'm going to turn it over now to Leo.

Hello everyone. I'm Leo Lo, Dean of the Libraries at the University of New Mexico. After ChatGPT came out, I was one of those people who just talked nonstop about it to all my people, and I kept encouraging them to use it. I would tell them I use it all the time. But after a few months of that, I noticed there was a little bit of enthusiasm, quite a bit of skepticism, and just a lot of indifference. So I realized that just telling or encouraging people to use it is not enough, and I decided to develop a structured program to get people to use it. We call it the GPT-4 Exploration Program; that was really when GPT-4 first came out. We decided to give people a structured way to learn how to use it, so my talk is not really about a tool but about how to get people to use a new tool. I developed a 12-week program with basically three purposes. One: let's figure out how to leverage technology to help us do our work. We've always been asked to do more with less, which is impossible; maybe better technology could help a little bit. Second: to increase our AI literacy level. I think we just don't know enough, and we have a lot of misconceptions about the new technology.
So one thing to help with that is to just learn a little more about the technology, not just using it but learning about its different parts. That's what I wanted to do with this program, along with a little bit of cultural change: we know that change is coming, so let's get into that kind of mindset. So that's what this program is about. It's 12 weeks, starting with two weeks of introductory prep: we asked participants to read up on the technology, think about an individual project to apply the tool to, and attend an AI bootcamp for libraries. Then, for the next eight weeks, they documented everything they had done, and we got together every other week, almost like a community of practice, to share progress, lessons learned, challenges, and all of that. That was great; we loved that. It got people talking. In the final two weeks we wrapped things up to prepare to share the experience with the rest of the college, then the university, and hopefully a wider audience, so people can learn from it. We asked for 10 volunteers and asked them to tell us what they wanted to do with ChatGPT or GPT-4. We wanted a mix of skill levels, people from different areas, and different enthusiasm levels as well; some were like, "this is kind of weird," and some were just really into it. There were a range of individual projects: one person used it for data plans, one person from the university press used it to synthesize a lot of text, some worked on cataloging metadata, and one used AI to generate FAQs to compare against human-generated FAQs and see how they fared. My assistant loved it; she now has her own AI assistant to take minutes for her, so that was great.
So we did a little pre-program and post-program survey, asking how familiar participants were with these AI tools and about their self-rated AI literacy level. You can see they were below the midpoint at the beginning, and afterwards all of those levels increased to above the midpoint. We also asked them to share some of the challenges. Technical limitations came up, and prompt engineering came up quite a bit: knowing how to use it well, how to communicate with these tools. But they all said they gained that skill set during the program and felt more confident using the tools. And this was heartwarming for me: they liked the program itself and rated it pretty highly, so we're thinking about how we can improve it for future iterations. Here are some of the qualitative comments: "This program changed AI from a threat into a collaborator." "I gained confidence in using AI to enhance my daily work rather than replace it." "The freedom to experiment made AI less intimidating." As for key learnings: hands-on experimentation increased their comfort with AI, and prompt practice built critical skills. That's an important point. A lot of people think that AI will replace critical thinking; I think that by learning how to prompt, you can actually teach, increase, and enhance your critical thinking skills. And tailored projects amplified engagement. A lot of times at the beginning, when I said, "oh, use it," they had no idea what to use it for, so having individual projects, something they wanted to use the program for, actually helped them want to use it. But there were also some challenges. Data privacy was the one thing that came up every single time: what can they put in there, what can they upload to these programs? Even though ChatGPT and GPT-4 have a setting you can use to tell them not to train on your data, people were not sure.
And prompt engineering: difficult but essential. They recognized how important it is, but also that it's not easy to become really good at it. And they all recognized that AI lacked subject matter expertise, at least at that point; I know different companies are working on that. So that's my presentation. I have the structured plans for this program; contact me and I'll share them with you. Thank you.

Thanks, Leo. So again, I'm Elias Sok, the associate dean for teaching, learning and research at Clemson University, and here is a little bit of context and background information about Clemson. Clemson University is a public land-grant research university in Clemson, South Carolina, founded in 1889. Clemson is the second largest university in the state by enrollment; current enrollment is close to 29,000 students, 80% of them undergraduates. At Cooper Library, our main library, on good days we get between 8,000 and 10,000 students. So why did we start with an AI tool and subscription? It was a combination of opportunities and challenges. This year Clemson launched a new strategic plan, Clemson Elevate, with three priorities, the first one being to deliver the number one student experience. In the library, our mission is to provide services and resources that will meet the needs and expectations of students and faculty. But expectations change. These days, many of our students come with the expectation of a custom experience and just-in-time assistance. Also, as was mentioned earlier, during the spring semester we saw an increase in the number of questions regarding references or citations, in many cases references for publications that simply did not exist. So what did we do? Well, we talked to friends and colleagues who were working on similar initiatives. This included informal and roundtable conversations, including some at CNI in Denver in the spring, as well as some Zoom meetings.
By the way, that's how we got connected with Jolene. We also talked to a couple of vendors, asked for demos, and later arranged for a trial over the summer. After a successful summer trial, we recommended paying for a one-year subscription, and we created a research guide that included information about the tool, Scite, and instructions on how to use it. For the trial, June 20th to July 20th, we got feedback from faculty and graduate students representing five colleges and 11 departments. On a one-to-five scale, faculty and grad students rated the tool at 4.57. We started a subscription on September 1st, but because of some accessibility requirements in the creation of the research guide, the official launch did not happen until October 1st. The graphic on this slide is a quick comparison between the summer trial and the overall engagement over the last four months. While unique user accounts only grew three times, from 94 to 270, the assistant queries feature, which is similar to the ChatGPT query box, grew almost 19 times, from 120 to more than 2,000 queries. Searches and report views also grew two and three times, respectively. So what's next? What are we going to do to continue this? Well, we will continue to work on marketing and promotional activities, especially at the start of the spring semester. We are planning several presentations, one for the department chairs, a group of 100 or so chairs in a room, who I think would be interested to hear about the benefits of a tool like this. Our team will also present at a Clemson conference in January titled Teaching in the Age of AI. Finally, because things will continue to change (for instance, Scite was recently acquired by Research Solutions), we will keep an eye on the future development of AI tools for academic writing and research. And with that, we have about four minutes for questions.

Question for the panelists.
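As a quick sanity check on the growth figures quoted in the Clemson talk (using the rounded numbers stated, with 2,000 as a lower bound for assistant queries), the growth factors work out roughly as described:

```python
def growth_factor(before, after):
    """How many times a usage count grew between two periods."""
    return round(after / before, 1)

# Figures quoted in the talk: summer trial vs. first four months.
users = growth_factor(94, 270)      # unique user accounts: roughly 3x
queries = growth_factor(120, 2000)  # assistant queries, lower bound; the
                                    # actual count above 2,000 is what puts
                                    # this near the "almost 19 times" quoted
print(users, queries)
```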
Devin Savage from Illinois Institute of Technology. I understand you can't see anything up there, so I'm over here. I actually had a question about Kineas, which I was also very fond of, and which we have trialed at my institution as well. One of the things I noticed about Kineas is that, though it draws from an open index, they've restricted the number of articles they actually pull from, based on English language and some other criteria. It sounds funny to say they're "only" pulling about 50 million articles, which is great, but it is a fraction of what most of our institutions have access to. I wondered if you had any comments about the scope, or the future, or any interaction you've had about that.

Great question. That's something we did have conversations about when we were looking at the tool, going down the route of having a trial, and thinking about how we could identify gaps in the coverage of what Kineas is recommending. And like you said, it's only 50 million articles; how do you even begin to approach that problem space? I do think it's important for us, especially since we aren't focused on one academic area; we're trying to be as broad in terms of coverage as we can possibly be. I don't know if I have a full answer, but I think there was an update recently where they added more articles to their index, so that's encouraging. And it's my understanding that the technology is such that they can be pretty agile in what they can add to their index and to the tool, which is also encouraging. So it's a space we're trying to keep an eye on, thinking about how, when we're designing our assessment, we can ask these questions in a way that's effective and gets at the root of what we're interested in. So thanks for asking that.

That was really helpful. My question is for Leo. Leo, I was wondering a couple of things. Did you provide subscriptions for staff?
Or did you tell them which tools to use?

Yes, we did provide that, and we asked them to use GPT-4; that was the most popular one at the time. However, we encountered quite a few issues with the subscription. Apparently they didn't like us using one P-card for multiple accounts; I guess they wanted to charge us for the enterprise version. That was an issue we couldn't really figure out how to deal with; somebody got kicked off for a while, and all that. So that's something to pay attention to.

A quick follow-up question. In my institution, I've been trying to get my colleagues to embrace exploring these tools, and several of them are adamant that they do not want to do that until they can be assured that it's completely private. They're also very concerned about helping any company that is perpetuating bias, using biased training sets, or committing copyright violations. I was wondering if you had staff who were similarly apprehensive about using these tools, and what you've been able to do, if anything, or how you've negotiated those conversations.

Yeah, that's a tough question. We do have those mindsets as well, and I think it's a common mindset for a lot of us to want to wait until something is perfect to use it. Things may never get to that point; look at the internet, right? So what I try to tell them is: right now, rules are being set at the national level, the business level, every level, and for me, they haven't been set yet. So we actually have a voice, some kind of influence, in that. The only way to have that influence is to learn more about it, so that we can say something with informed thinking. And the only way to learn is to try it out, in those safe situations, for example. So that's what I would say to them.

Thank you. Good morning. One quick question for all four of you, I think.
Do you have a sense of which academic departments for faculty, or which majors for students, are most likely to use your resources for AI?

For the summer trial that we did at Clemson, more than 50% came from STEM disciplines. Now, this was over the summer, so primarily grad students, right? We are eager to learn about the demographics we might see for the fall semester, and that's coming.

I'll say that information science was the discipline where we saw the biggest uptake. We saw a lot of people downloading from STEM disciplines, but we did see some from the English department as well, probably because of the focus on information literacy, which was a concept they were more comfortable with, and that kind of angle on it. But we really want to encourage people even from non-STEM disciplines to get hands-on with the tool and get some experience using it, because just ignoring it is not a safe strategy. Thank you.

And I'll just say, from Carnegie Mellon's perspective, we don't know exactly who is using the tool, at least in terms of Kineas, but we do have a list of names. So we could cross-reference that data at some point to try to put that picture together a little and understand it. I do think that in many ways undergrads would be the primary demographic we'd target with the tool, because it's designed to help them search differently, search better. But if you have been searching for your entire career and finding the same results, then I could see a later-career professional or faculty member having success using tools like this for more of the serendipitous discovery it's supposed to help with. So I could see it being broadly useful across the board for us.

I don't have any data, but it seems like it's more at the individual level than the unit or department level.
So it seems like it's the mindset of individual faculty rather than the department. And you had a varying range of mindsets; you mentioned skepticism to enthusiasm.

That's right. So even within a discipline there can be a range as well. It's really down to the individuals, I think, at this point.

And you had 10 people.

That's just my college, but I'm leading this at the university level, so I can see it from different disciplines as well.

And the data that we got from the trial at Clemson was based on feedback that participants completed voluntarily, because Scite doesn't keep track of any of that. So for the 1,000 users right now, there's no way for us to keep track of who they are.

I think we're about over time. So thank you, everyone. Enjoy the rest of the day.