So my name is Liam Bollingham, and I'm Assistant Director here in Library and Cultural Services at the University of Essex. As such I'm in a leadership role, so this is my perspective, and my colleagues will have different perspectives. I'm really thinking about the skills of our staff body, how we can prepare for AI, and how we can provide the best possible student experience. So let's introduce our other panellists. Beth, if you could go first please.

Sure. Hi, I'm Beth. I'm a Content Delivery Coordinator within Library and Cultural Services at the University of Essex. The main part of my role is working with module supervisors to help create reading lists, and I also work with students on interlibrary loans, so it's all about requesting resources. I'm particularly interested in how AI may change the landscape of searching for resources, and what tasks can, and even should, be automated in my role.

Lovely, thank you. Una, please.

Hello, I'm Una. I'm an Academic Support Librarian for postgraduate taught students within Library and Cultural Services at the University of Essex, so I have a very student-facing role. I deliver most of the information literacy teaching, specifically at postgrad level but also to our students generally, working through both embedded sessions within departments and our open workshop programme.

Thank you. So, like I say, let's move on to our three sections. But before we do, I'll just set the scene a little. I know you've already heard a lot about AI. As we see it, generative AI tools based on large language models are having a profound effect on universities and research libraries: on the way people find information, do research and produce academic content. They're bringing fundamental changes to these, and to concepts such as authorship and plagiarism, which we'll look at. Academic integrity is very important here, of course, and I feel we're perhaps still in a Wild West period, where we've had an initial round of policy from universities, but that could soon fall out of date and become less relevant as the game moves on. So my co-presenters and I think that skills are really crucial here, and that librarians, other library staff and learners can adapt and take opportunities, or possibly be left behind by market forces. Let's see. Okay, Una, please go ahead with the first section.

It's my job to share the slides, so I will get doing that. Yes, I really want to start by making sure we're all on the same page when we talk about this. Some people may be familiar with the idea of prompt engineering, and that's where I'd like to begin. There are lots of different definitions out there, but it generally boils down to optimising or fine-tuning prompts, especially when you're working with generative AI, to achieve more relevant outputs. Generative AI, for anyone not familiar with that term yet, is generally any kind of AI tool that creates new content based on its training data, with parameters that decide how it generates that content. So it's generating something new from what's in its training data, but not necessarily new knowledge; it's recreating information based on what's in its training data.
And what we really want to get across in this session is that there's been a lot of talk about how generative AI, and AI generally, is going to change the skills of users, whether that's students, staff or researchers, but not as much focus on us as information professionals, as librarians. So what we're hoping to do with this session is talk about some of the potential issues, some of the skills we might want to learn as librarians, why we should care about this, and what we can do about it. Maybe you can move on to the next slide.

I'm going to come on to the issues in the slides, but I wanted to start with a rather worrying trend we've seen: there are a lot of AI tools out there and they're developing very quickly, with lots and lots of new tools coming out. That's where this idea of prompt engineering seems to have really picked up; there's been a huge corporate push towards new job roles for prompt engineers, going for quite high salaries. And what we're seeing with that — these 'GPT hacks', prompt libraries, paid AI models, prompt engineers — points towards a monetisation of information retrieval. That's where I think we come in as librarians, because when you look at what prompt engineering is, how it works, and how you can optimise these prompts and work with these tools, it's actually a very similar skill to traditional librarianship and information literacy skills. I'm not the first to make that point; there are plenty of people out there who've made this connection and talked about it. But that's where we have this opportunity to come in and become experts, to take charge of this emerging landscape, where we can help students and ensure they've got access to the best information. I see libraries as places of knowledge for everyone, places where anybody can come and access information without having to pay for it and without having to have a specific mandate — almost like equity of access to information. And I think this is why our focus at Essex especially has been on free tools. We're going to talk a little later in the session about the digital divide and the differences in students' skills, and focus back more on students. But that's especially why we have focused on free tools: to make sure we're not creating bigger gaps between students who have the funds to pay for premium tools and subscriptions and those who don't. A lot of traditional information literacy skills are available to anyone, and you don't necessarily need access to a premium paid tool to be able to use the skills of prompt engineering — things like optimising prompts and making them better. I think this is where it's really tricky to provide guidance on ethical AI use, because it's such an emerging field: prompt engineers haven't been around for very long, whereas librarians have, and a lot of the skills are very similar. But there are also differences in the ways these tools work, and we still don't have a great understanding of things such as copyright and how that works with AI and a lot of these tools.
And how we can support student users — because, as Liam pointed out, we've had this initial round of policy, but a lot of these tools are still developing and things are still changing very quickly. So I think it's quite a difficult area in which to say authoritatively to students: this is what you can do, this is how these tools work, and this is where you can go to get the most up-to-date information, because that changes so quickly. I find I spend a lot of time trying to keep up to date with new developments in AI — a lot of time reading papers, but also just watching videos on places like YouTube and TikTok, because that's where our students are. Those are the places our students are getting this information from, whether that's good or bad; that's where they are and that's where our audiences are. I see the tools that come out of those videos and discussions and then go and try to experiment with them, but that's a lot of work, and no matter how much time I spend on it, I feel like I'm still a step behind. Whenever I run an AI tools session, a student will come and say: what about this tool, I've used this tool, is it a good one to use? And it will almost always be something I've never even heard of and haven't had a chance to try. So I think it's really quite tough to stay up to date with these tools. But again, that's where I think we can come in and help cut down some of that information overload, by focusing on the skills rather than the tools themselves — helping students use their information literacy and digital skills to cut through the noise, apply their information retrieval and critical thinking skills, and find ways to use the information they get from AI tools ethically, and to use these tools as ethically as possible. There are lots of things we don't necessarily have control over, things like environmental impact, but there are ways we can mitigate those impacts and help students focus on the skills and on using these tools thoughtfully.

So maybe we'll move on to the next slide and talk about some of the questions. We've got questions like: how do we engineer prompts that reduce hallucinations? This is something that has been talked about a lot — how these tools tend to hallucinate answers. Most of you may have seen or heard of things like ChatGPT generating fake references, or bringing in poor-quality references. So are there ways we can write prompts, and teach students to write prompts, that reduce these hallucinations? Is there an intrinsic way of writing those prompts? I don't have a strict answer to that. I know there are ways to get more helpful outputs from these tools, and we generally teach our students three main ways of writing better prompts: be concise, be logical and be clear. Be really clear about what you want these tools to do, and be concise about the way you write it, because the more complex you get with these prompts, and the more unnecessary information you include, the more places the tool has to get confused. It's almost like talking to a person: if you give them loads and loads of information, they're not going to pick out your key message. So it's often more helpful to write shorter prompts — get an answer, write another short prompt, get another short answer, then tweak and tweak and keep the conversation going that way, rather than writing one long prompt that gets you one long answer you then have to sift through. I feel it's also easier for our students to figure out whether the answer is good or not that way, because they're evaluating one small chunk at a time.
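As a rough illustration of that short-prompt, iterate-and-refine loop, here is a minimal sketch in Python. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` set in the environment; the model name and the example prompts are placeholders, not a recommendation of any particular tool or workflow.

```python
# A minimal sketch of iterative, short-prompt conversation rather than
# one long prompt. Assumes `pip install openai` and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()
messages = []  # running conversation: short prompt, short answer, tweak, repeat

def ask(prompt: str) -> str:
    """Send one short, clear prompt and keep the exchange in the history."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# Each step is small enough for a student to evaluate before moving on.
print(ask("List five possible search keywords for 'digital divide in UK higher education'."))
print(ask("Keep only the two most specific keywords and suggest one synonym for each."))
print(ask("Turn those into a single Boolean search string."))
```

The design point is simply that each turn produces a chunk small enough to check, which is the evaluation habit described above.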
But if anybody else has other approaches they've taken, or other ways they teach students, please do share and do ask questions. This is your chance to come in and talk to us — and Liam and Beth, if you have anything to add, feel free to jump in as well.

While people think of things and stick their hands up to join us, I would just echo a couple of things. One of the things I'm really pleased Una does in these sessions is having the humility to ask the students: what do you use? What do you find? I think that's just such a good idea, and it's not something I'd have thought of myself, so I'm really glad to see that bears fruit. And in terms of reducing hallucinations — wouldn't it be great if, when it generates content and references for you, you could say: please run these against CrossRef, or our library discovery layer, or something like that, just as a verification step? If the prompting was that easy, it would be great.

There's a comment in the chat from someone who was in this morning's session, where we talked about the crossover between AI literacy and information literacy. They pointed out that their group today was talking about 'prompt gurus' — people who put lots of time into learning how to use generative AI years ago and have now graduated into being paid lots and lots of money for these newer tools that have come out.

And that's exactly what I was getting at in the intro — seeing these crowds of people, partly the same crypto and NFT folk, who've now almost created this monetisation of information retrieval. There are countless websites where you can buy access to a prompt library, or subscribe to a newsletter for hacks on how to write prompts for ChatGPT, that sort of thing. That's why I think we have almost a responsibility to figure out how these tools work and teach our students to do the same, so they don't fall into these traps of spending lots of money with people who don't necessarily have their best interests in mind, in this very corporate approach to information retrieval. So in this landscape we not only have prompt engineers, we have prompt gurus who are in it for the monetisation.

Okay, and we have a question: do students come with pre-existing knowledge of how to write a good prompt? In my experience that varies a lot. You get certain students who've played a lot with these tools, know what they're doing and can write prompts. They might just be there wanting that extra bit of information about what they can and cannot do, because a lot of the sessions I run are focused on the ethical side of the tools, so they might already have an idea of how to write good prompts.
They want to understand what they can and cannot do from an academic point of view. But I would say the large majority come with no clue: they may have gone into ChatGPT once or twice, written a couple of prompts, not got very far, and got a little frustrated, not really knowing what they're doing. Then they come to these sessions hoping to get an idea of how they can make use of it, because they've either been told by their lecturers to give it a go and experiment, or they're otherwise worried or stressed. I think a lot of this boils down to confidence — how confident they feel in their digital skills, their writing skills, their academic skills. A lot of them come without a very strong idea of how to write good prompts. They do tend to come with an idea of what they want from these tools, which is helpful for us to work with. And they come with a lot of preconceived ideas about what they can and cannot do, which is also interesting: sometimes they'll say, well, if I just use it to write and then edit it, isn't that the same as paraphrasing? And we have to explain: well, not really, because you didn't read the original and you didn't write the original. That can be a very tough conversation to have with some of our students.

While we're waiting for people to either join us or ask a few more questions in the chat, just to give everyone a bit of context — and I might have missed this — what are the titles of some of the workshops you run that relate to AI?

So the main one I run is called Using AI Tools Ethically. In the first part of the session we cover our central university guidance on using AI — what students can and cannot do, what constitutes an academic offence, that sort of thing. We look at writing prompts, and at how tools like ChatGPT can hallucinate, so we show them a couple of example questions where these tools get things wrong. The usual one I run is 'Who is the Prime Minister of the UK?', because 80% of the time the free version of ChatGPT answers 'Boris Johnson'. And then I usually show them that if you create references using a generative AI tool, most of the time they either don't exist or they're just not very good quality — not something you'd be able to use in an academic essay or piece of coursework. Then in the second part of the session we look at other tools. Depending on numbers, we either get different people to look at different tools and feed back on what they think the best feature is, or sometimes I just do a quick demonstration of a couple of different tools and ask them what they think and how they might use them. If we're doing an embedded session, we do the first part on the ethics side — how they can use these tools ethically — and how they can reference their AI use as well. That's one of the other things we cover in these sessions: how to reference the tools they use and what format they should use. And at what point they should reference these tools — because a lot of the time, if they're just asking it for keywords, it's really the same as sitting down with pen and paper and trying to find some keywords, as somebody put it in the first session this morning; you're still picking out which keywords you use. But then there are things you might want to reference and talk about, because maybe you're doing something a bit more complicated, or something you might end up using more substantially in your work.
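Picking up Liam's earlier 'verification step' idea alongside Una's fake-references demonstration: here is a minimal sketch that checks AI-generated citations against the public CrossRef REST API (api.crossref.org), using the `requests` library. The matching is deliberately crude and illustrative — a real checker would need a human to compare the returned record against the citation, since the API will often return a "nearest" match.

```python
# Minimal sketch: check AI-generated references against CrossRef.
# Assumes `pip install requests`; the matching here is deliberately crude.
import requests

def crossref_lookup(citation):
    """Return the top CrossRef match for a free-text citation, or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

generated_refs = [
    "Eaton, S.E. (2021) Plagiarism in Higher Education: Tackling Tough Topics in Academic Integrity",
    "Smith, J. (2022) A plausible-sounding paper title that may not exist",
]

for ref in generated_refs:
    match = crossref_lookup(ref)
    if match:
        title = match.get("title", ["<no title>"])[0]
        # A human still has to judge whether this is actually the cited work.
        print(f"Nearest CrossRef record: {title} (DOI: {match.get('DOI')}) for: {ref}")
    else:
        print(f"No CrossRef record found — treat as a likely hallucination: {ref}")
```

The same pattern would work against a library discovery layer, as Liam suggests, if it exposes a search API.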
Gotcha. Okay, thank you very much. Martin was noting that he doesn't think the audience can raise a hand — that's an option, so I assume colleagues at RLUK are on that one. I might be able to fix it, but we'll see; if not, we'll make do with the chat and we'll make it work. Okay — Martin says it's been added now. Great.

Peter's got a question. He says there's a strong focus on ChatGPT in discussions around AI, but which other AI engines do you train people in?

So we usually look at Perplexity and You.com. You.com is an AI search engine, broadly similar to Bing's Copilot — basically a chatbot attached to a search engine. So we usually look at a contrast between ChatGPT, Perplexity and YouChat. Then some of the other tools we look at in the second part of the sessions are not generative AI tools as such. We look at image generation; we look at ResearchRabbit, which is basically a citation chaining and visualisation tool; tools like Elicit, which is for literature searching; Rayyan, which is a systematic reviews tool; and some other tools of that kind. I'd love to do something around tools like Scite or Consensus, but a lot of those are not free, or they're very much credit-based, and I've been trying to keep the focus on free tools so that anyone who comes to these sessions can get something out of them. There's nothing worse than coming to a session, being told 'look at this amazing tool that can do all these amazing things, you just have to use it carefully', and then finding out it costs 20 pounds a month. So that's another difficulty with a lot of these tools: the best ones, or the most powerful ones, are not free.

Before we move on to Beth's section, is there anything you would like to ask the audience?

Oh, I did have some questions. This one is more generally around AI: how do people feel about using AI themselves — in, for example, your personal life, versus in your professional capacity, versus teaching students about it? Do you feel there's a difference in the way you might use AI whether you're using it as an individual, as a professional, or when you're teaching students about it? It would be interesting to know people's thoughts on the different roles, the different hats you might wear, and how that might change the way you use AI.

That's a really good question, and while people are thinking about answering it, please do throw answers in the chat or raise your hand — all those options would be great. Me personally, in my personal life — well, I'm very happy here, but
if I were applying for other jobs, I might want to use certain academic-leaning AI tools to help me with the topic for a presentation, or for considering some of the questions. So that might be an example from my personal life. I use AI tools all the time in my personal life anyway — if I'm trying to get somewhere in my car, Google Maps and so on — but in terms of these kinds of tools, that's the one that springs to mind for me. Let's see what people are answering.

Okay, so our colleagues over in the Essex watch party — hiya — like to use Elicit to get to know a few texts. That's one of my favourites. And Peter's asking whether the panel has members who actually pay a fair amount of money each month for pro-level access. So there's that digital divide — is that happening a lot, to the panel's knowledge?

A lot of the students who come to the sessions I run use tools like Grammarly — that's probably the biggest one people seem to be subscribing to. ChatGPT is the other one people pay for access to; that seems to happen a lot more as well. And I should say first that I've got GPT-4 myself, because I pay for the premium access. So when I show students the examples in 3.5 and they go to the paid version, they can see they get different answers, because it can do more things. So I think it's becoming more and more common. I've not yet seen a lot of students who subscribe to lots of different tools; it seems more that the students I've spoken to, or who've come to our sessions, have each picked a tool that they think is most helpful for them.

And this is the other question that's just come in from the Essex watch party, which I was going to raise: we've been asked by academics whether we would actually subscribe to a paid tool to bridge that gap — to give students that equity of access. So that would be quite interesting. The difficulty, or the interesting part, of that is: which tool? What's going to be most helpful for everyone, and what has the least capacity to be misused? Because if we subscribed to a generative AI tool, we'd effectively be telling our students to go and use it. If they have access to the premium paid version, do we then have to put limitations on how they can use it, and be really clear about that? If there were a stand-out ethical tool — one that reduced the impact on the environment, was very transparent, very responsible, maybe not profit-driven — something like that would be very interesting for us.

One final thought before we move on to the next section. I've been toying with an idea, because I've seen these prompt libraries turning up — paid databases of prompts you can use for different types of task — and I've been toying with the idea of effectively creating a free one for our students: our own prompt library. And the other question, if anybody has any thoughts on it, is this — I'm in two minds about it, for a big reason. On one hand, it could help bridge this digital divide we're about to talk about, by giving all the students access to the same prompts. You could almost say: these are the prompts you can use, and if you use other prompts then you have to explain them, talk about them in your methodology, or reference the tool that you've used.
We could build those approved prompts in a way that doesn't give students answers, but gives them starting points in their work that they might want to tweak — ways of using these tools that comply with academic policy. It could help students whose first language isn't English, or who might struggle with the digital skills of using these tools. But then, on the other hand, is that taking away the learning — learning to use these tools in ways that are ethical and that engage their critical thinking skills? Are we taking away the criticality of using these tools, of figuring out for themselves whether what they're doing is ethical or not? If we give them the answers and say, here are the prompts for you to use, are we taking that learning away from them?

Okay, let's move us on to Beth and her section then.

Okay, sure. Yeah, so I've got a few slides — I think Una is going to share them. I just wanted to expand a little on the idea of librarians as prompt engineers, and whether it's something students actually need. We've gone over the difficulties students face when they use AI, so maybe as librarians we can bridge that gap. But I mainly wanted to focus on a recent survey that Kortext conducted — next slide if you don't mind, thank you. They recently conducted this survey to get a general overview of what students' thoughts are on AI and whether it's something they're using; obviously we've seen evidence in our own roles that they definitely are. It was an interesting survey: they polled 1,250 UK undergraduate students. A disclaimer on this: that is obviously quite a small number considering how many students there are in the UK alone who might be using AI, but I think it can give a general overview of attitudes towards AI. Next slide, please.

So I've just pulled out a few numbers from this survey. In particular, it found that 53% of the students polled have used generative AI to help them with assessments. That's only just over half, but it could show that this is on the rise — maybe it will creep up each time they conduct this survey. And there's a question of whether it's only this number because of views on AI — whether a negative view is holding people back from using it. Of the students who have used AI for any purpose, including reasons not related to their studies, 37% have used it to enhance and edit their writing, for example with tools like Grammarly or Notion. I've used Notion myself in my personal life, for ideas for trips and that sort of thing, so it counts that kind of use too. And 30% have used it to generate text, for example with ChatGPT. Of the students who've used AI to generate text, 13% have used it for assessments specifically, but Kortext noted that they typically edit the content before submitting, with only 5% of these students saying they use AI-generated text without editing it personally. So that's quite a small number, particularly considering only 1,250 students were polled. And it doesn't specify the amount of editing, or lack of editing, that goes on — are students using AI to write entire essays, or are they just getting some help with wording, maybe getting some keywords, as Una has said?
So it's definitely a grey area. AI use can be perceived negatively in some quarters — this view that students might be using it to generate an entire assignment, for example — but this poll maybe suggests that isn't the case at all, and that most people want to stay independent while using AI as a tool. It seems inevitable that students will use any new tools out there to help with their studies, but not necessarily in an unethical way. Instead, I think this survey shows that students are most likely to use AI to assist themselves: summarising points, prompting their own thought processes, coming up with templates they can work from. Okay, could you move on to the next slide please?

The survey also looked at how students view their institutions' attitudes to AI. 30% of students agree or strongly agree that institutions should provide AI tools — so it's good that Una is working with students in this capacity; we're trying to stay ahead of the curve here. Only 9% say that institutions currently do provide these tools or this advice, which is very, very low. And only 22% of students say they're satisfied with the support they've received on AI. Again, that is a low number, and it does suggest that maybe we can be doing more, particularly as librarians, to help with this — because, as I said, it's kind of inevitable that AI will become part of our everyday lives. So what are the ways we can mitigate any issues with that? How can we educate people? Okay, next slide please. Thank you.

So I think it can be argued that it's in the best interests of universities, and more specifically libraries, to develop clear policies on AI and teach students how to use it effectively, providing tools to prevent this digital divide we've touched upon. In the survey the digital divide does seem quite minor in some respects: for example, 58% of students in the fifth quintile use AI, as opposed to 51% of students in the first quintile. That's a small gap, but it's worth considering — and again, to reiterate, this was such a small number of students polled that the gap could be larger. And I think you do definitely see the difference between free AI and paid-for AI in the kinds of features the tools provide. I had a look at some of the recent DigiFest webinars, and one of the presentations was about Poe; the presenter showed all these amazing things Poe can do for free. So there are definitely free tools out there that are amazing and can do so many things — but obviously, behind the paywall, you don't know if it's even better. So it's definitely something we need to look at. A lot of the hesitance students have about AI, from what I've noticed in this survey, seems to come from a lack of clarity or guidance from universities. As I say, only a small number of students said they receive guidance or tools from their universities. Some of the survey respondents noted that they're told simply not to use AI at all, and that's it; they've been given the impression that AI can only be used unethically — for plagiarism, for example. So there are definitely students who need a bit more guidance to show that these tools exist, they're out there in the world, and we would rather show them the best way to use them ethically.
We've also seen this hesitancy from module supervisors and lecturers. Speaking from my own experience, we had a module supervisor bring this up recently. They'd seen a blog post sent out by one of our suppliers about AI, and they highlighted a concern about the depiction of ChatGPT in particular drawing its knowledge bank from so-called open sources. They noted that these sources are still under copyright and would need to be referenced correctly if used. So again, we can come back to our workshops, where we can teach students how best to reference ideas drawn from anything online, and how to work with those sources. The module supervisor in this story also raised the tendency to hallucinate sources. So, as Una said, maybe as librarians-as-prompt-engineers we can work to find the best prompts that reduce these hallucinations. There's also the idea of training AI on specific content so that it will only work with that. I believe the University of Luxembourg recently started working with an AI tool trained on their own collection, so that if a student wanted to use it almost as a bot to look for a reference, it would only bring up things present in their collection, which could then be referenced. So maybe that's one way we could do it, and again we can work as librarians to create those prompts. As Una says, perhaps this is a gap librarians can bridge: we can provide tools and guidance on how to use AI ethically, helping with studies in a productive way that still has students learning and ultimately keeps them independent. We can also open the conversation with universities themselves to try to get that clear guidance, since from the survey that does seem to be what students want. And just to finish up, I think having librarians as prompt engineers could also formalise or legitimise the use of AI, and maybe we can create a safe space within libraries to explore it. We can put those parameters in; we can work with students rather than restricting them. But I'd be interested to hear other people's thoughts on this, so maybe we can move into answering a couple of questions, or hear people's thoughts on the student experience as well.

Thank you very much, Beth — and that was a nice invitation, I suppose, for more students-as-partners work as well. I really like this prospect of working with students. Could you stop the slide share please, Una? Fabulous. Okay, so Jackie posted a question, which I'll read out: 'I wonder how many students use generated assignments or papers in lieu of being able to access past papers, or not knowing whether they have access to them. Lots of students who approach us just want a comparison paper as a starting point, so maybe it's easier for them in some cases to use software than to access an old undergrad or MA thesis.'

Yeah, that's a really good point actually. Maybe that's on us — perhaps we need to advertise those tools they could use without turning to AI with a bit more clarity. I feel like AI in general is still a bit of an unknown for a lot of people, so maybe they turn to it because they think there aren't other avenues. But there is maybe this sort of shame around using it as well.
So yeah, I think it's definitely a good point that maybe we need to explore the different things students can use before turning to AI specifically. We liked that point and have thought about it. I have seen that you can create versions of different GPTs, and somebody had created one for the British Library's EThOS, based just on the metadata — it didn't go into the full text — but you could query the GPT and it would give you back content scraped (I don't know whether with permission, or whether that was an ethical thing to do) from the metadata associated with EThOS.

If anybody would like to join us at this virtual table, there is space — raise your hand — otherwise, if you prefer the chat, that's fine. While people think, I was going to jump in on that idea of a safe space and students exploring AI, and this fear, maybe, of not knowing what other avenues there are. I went to the ILG (Information Literacy Group) roundtable on information literacy and AI, and somebody there talked about how they use AI in very structured ways, because sometimes students just get told, 'go and experiment with it' — but if they don't have the knowledge or the confidence or the skills to use these tools, how can they really experiment if they don't know where to start? So having these really structured safe spaces, like you said, with very specific things students can do and try out, to see which tools they like, what they don't like and what works for them — that kind of non-judgemental space where they can try things in a structured way with an instructor present who can help. And that's where it's important that we also have the skills to be able to help them, even if we don't know everything — if we're willing to experiment as well and have these structured workshops on hand, that can be really helpful for our students.

Definitely, yeah. A couple of questions in the chat — I'll read Wendy's here: for those of us looking to develop our prompt engineering skills, are there any resources you've learned from or would recommend? Una or Beth, please pick that one up.

So, I've got a couple that I used when I was getting started. There are a couple of LinkedIn Learning courses I found quite helpful. A lot of them don't really get into the ethics of prompt engineering or of using these tools, but they were quite good at explaining how the tools work and how to write prompts that optimise your results. I can't remember the exact name, but I'm sure I can find a link during the session and post it in the chat, and if not we can share it afterwards. Beyond the LinkedIn Learning course, I also went to a webinar called something like 'Writing CLEAR Prompts'. It focused on this idea of CLEAR — an acronym where C was for concise, L for logical, E for explicit, A for adaptive and R for reflective. They showed different ways of writing different kinds of prompts, and really demonstrated how if you write a prompt one way you get one response, and if you write it another way you get a different response — you could really see which was more effective for the kind of thing you were trying to do. I'll share that one in the links as well; I've got a link somewhere, so I'll put it in the chat.
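To make the CLEAR idea concrete, here is a small illustrative sketch contrasting a vague prompt with one loosely structured along those lines. The wording is our own invention, not taken from the webinar Una mentions; the comments flag which CLEAR element each part is aiming at.

```python
# Illustrative only: a vague prompt versus one loosely structured on the
# CLEAR idea (Concise, Logical, Explicit, Adaptive, Reflective).
vague_prompt = "Tell me about the digital divide and AI and students and stuff."

clear_prompt = (
    # Concise: one task, no padding.
    "Summarise how paid AI tools might widen the digital divide for UK "
    "undergraduates. "
    # Logical: steps in the order you want them done.
    "First list three mechanisms, then give one mitigation for each. "
    # Explicit: format, length and audience spelled out.
    "Use bullet points, no more than 120 words, written for library staff. "
    # Adaptive: invite refinement rather than one monolithic answer.
    "If anything is ambiguous, ask me one clarifying question first. "
    # Reflective: ask the model to check its own output.
    "Finally, note any claims that would need a source before we reuse them."
)
print(clear_prompt)
```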
There are definitely resources out there — that webinar recording is free, and if you've got access to LinkedIn Learning, all the LinkedIn courses are available as well.

Thank you. I've done the LinkedIn Learning course on prompt engineering, and I remember the trainer telling us that when you're generating images, if you add the words 'digital art' to the end of your prompt, he found you get better results. Just little titbits like that really surprised me. Okay, things have picked up in the chat — let me see. An RLUK colleague comments that AI also feels very monolithic, and that discussions like these, highlighting the experience and the tools, really help unpack it, rather than being focused on 'what's the best tool'. Agreed — and agreed regarding safe spaces, that's a really good concept. Peter has said that Perplexity uses a RAG model — I'll confess I don't know what that is — to actually improve prompts by assessing verified data sources. Is it any better at showing sources without hallucinations, in the panel's experience? Well, my experience is just not there.

I'd have to agree — I'm not too familiar with that, unfortunately.

And that's fine, because I know I'll learn things from this too, so thank you, Peter.

I'll confess I don't know what the RAG model is either — I should probably know. But what I have found is that Perplexity is better at finding sources that exist. It doesn't hallucinate as much, but the quality of the sources is not great. In this morning's session we were doing quite a lot of interactive work with generative AI, and Perplexity used a lot of sources like blog posts and essay mills, and it would even reference things like Reddit. So even though all of these existed, it was mostly just recycling the content — it wasn't generating new ways of saying things so much as summarising what was in those sources, and the sources were coming from places like essay mills and blog posts. Whereas when we used Copilot, for example, it actually brought in academic papers and university websites — but then somebody else used Copilot and got essay mills and blog posts, so it really, really varied. With some of these tools you can tell them where you want the information to come from: with ChatGPT, for example, you can tell it to only use government sources, or .ac.uk websites, and it will draw on those sorts of sources. But you have to know how to do that — having those tips and tricks, and figuring out how to phrase it in a way that doesn't limit things too much.
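For anyone in the same position as the panel on RAG (retrieval-augmented generation, as Peter clarifies shortly): the core idea is to retrieve passages from a trusted corpus first, then ask the model to answer only from those passages. Here is a deliberately tiny, self-contained sketch. The crude keyword retrieval and in-memory 'collection' are stand-ins for a real index — for instance a library catalogue, along the lines of the collection-scoped bot Beth mentioned.

```python
# A toy retrieval-augmented generation (RAG) pipeline: retrieve from a
# trusted corpus, then ground the prompt in only what was retrieved.
# The "collection" below is a stand-in for a real index or catalogue.

COLLECTION = {
    "doc1": "The library's open workshop programme covers ethical AI use.",
    "doc2": "Interlibrary loans let students request items we do not hold.",
    "doc3": "Reading lists are built with module supervisors each term.",
}

def retrieve(query, k=2):
    """Crude keyword-overlap retrieval; real systems use embeddings/search."""
    q_terms = set(query.lower().split())
    scored = sorted(
        COLLECTION.values(),
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query):
    """Instruct the model to answer only from the retrieved passages."""
    passages = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer the question using ONLY the sources below. "
        "If they are not sufficient, say so.\n"
        f"Sources:\n{passages}\n\nQuestion: {query}"
    )

print(build_grounded_prompt("How do students request items the library does not hold?"))
```

Grounding in a known corpus is why RAG tools tend to cite sources that exist; as Una observes, though, it does nothing to guarantee those sources are any good.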
Okay, James asks everybody: has anyone made the new Scopus AI tool available for their students? We haven't, but please do reply to James in the chat on that one. Let's see — Susan, who I used to work with (hi Susan!), asks: do you think the need for safe spaces to explore AI also extends to staff? Have you had any experience of working with academic staff and library colleagues, and if so, have their responses and feedback been similar to what you've heard from students?

I would say we do need those spaces for staff as well, because they're the ones in the classrooms with the students, the ones who may be working with them in their seminars. Students might bring these things up, so maybe staff need that space too, to explore and to know what's best for their students. To be honest, I think a lot of staff feedback hasn't been the most positive so far, and I think a lot of that is down to concern about academic integrity, which is understandable. So yes, I would say those spaces are definitely important too — I don't know if anyone else thinks the same.

I agree, but I'll probably move us on in the interest of the session — but agreed. Peter has just updated us on what RAG is: apparently it's retrieval-augmented generation. Thank you very much; the chat's now come in on that, which is great. What we'll do now is move to our final section — we have permission to run over a little if we need to, so there will be time for a decent chat. Beth, you're going to share the slides for me, aren't you?

Okay, so for the final part I'm going to look towards the future, which is very, very risky in such a fast-paced environment, but I'll take just one aspect of it: considering plagiarism for a moment. There's a fantastic model — please move us on — that I've seen and talked to colleagues about, and I wanted to get your thoughts on it: Sarah Elaine Eaton's idea of post-plagiarism. She summarises the points in the book mentioned at the top there. It's a really helpful model; I'll go through some of its points, and we'll also look at a live example as a way to really think about it. So, to summarise what's going on here — the six tenets. The first one — and this was written, I should note, at the end of 2020; for me, Eaton is a kind of AI Nostradamus, able to see quite far into the future, and I've been really impressed with the foresight on show — is hybrid human-AI writing becoming normal. This idea of hybrid writing was a really interesting concept: the human doing most of the work, but augmented by help from the generative AI. Eaton argues that trying to determine where the human ends and the artificial intelligence begins is pointless and futile. So she effectively predicted how a lot of people were initially very interested in detection tools and how we've already moved on from that: there is much less conversation now about getting a percentage score to detect whether something was written by generative AI. I would argue we're starting to move towards this already. She also talks about human creativity being enhanced by, not threatened by, AI. Something I've been thinking about during this conference is that these tools are really effective when you play to their strengths — getting them to generate keywords for you, putting something through particular models for systematic searching, things where there's lots of training data; those are strengths. When we talk about creativity, I think this is something humans are personally much better at than AI tools. So I'm not fully on board with this one, but the point that creativity can be boosted in the right environment probably makes sense.
Then there's the idea of overcoming language barriers, which is something we probably don't talk about much at these conferences, because most of us are native English speakers and English is the dominant language of academia, so we're probably not even thinking about it as much. It's very interesting: anecdotally, the people I know whose first language is not English have been particularly keen on using AI tools — Grammarly, but also throwing emails into ChatGPT and similar tools and asking, does this sound okay? Can you soften the tone here? It saves time, apart from anything else; it's not always just about language learning or language ability. Then there's the idea that humans can relinquish control but not responsibility — and relinquishing control scares me; it's not something I like. But anecdotally, in my team we generate meeting notes through Claude.ai: we take the Zoom transcript, make sure we're not including anything sensitive — we vet the transcript first — then give it to Claude, and Claude summarises it for us, which saves people time. I don't know the back end of Claude's algorithm or anything like that, which is why we don't give it anything sensitive or personal. But I fully agree that humans need to keep the responsibility here. Then attribution remaining important — again, that's a massive area for us in libraries, that's academic integrity, and it's really, really key; she's not debating that. And then the idea of historical definitions of plagiarism no longer applying. What she's saying is not that we throw out the idea of plagiarism in a future landscape of AI tools, but that, as we understand it now, it needs to be transcended. It's not a matter of the goalposts shifting and us having to adapt — we're playing a different sport. There's another dimension to this: what plagiarism is, or is thought to be, in 50 years, Eaton would argue, is going to change.

Okay, so let's look at an example that was shared recently on social media, and think about some of these tenets. Here we go. This paper was all over X (Twitter), along with a lot of commentary. Let me explain what's happening: we've got a journal article in Surfaces and Interfaces. The abstract reads fine, the title is fine — obviously not my area, I don't understand this stuff — but the introduction starts with the words 'Certainly, here is a possible introduction for your topic'. So clearly the authors, who are based at a university in Beijing, had apparently generated the introduction using ChatGPT or a similar tool, and then what follows. People have been sceptical about the validity and quality of the work as a result. On X, a lot of the commentary has been: look at this, I can't believe it — I couldn't believe it was true, but it is — and how has this got past the peer reviewers and the editors? On LinkedIn, from what I've seen, there's been a bit more nuance and a few more probing questions. And I'll personally say I have a bit of sympathy for the authors here; I don't think it's helpful to just say, 'Oh look, plagiarism — ring the bell.'
I think we have to start taking on some of what Eaton is saying and move on a bit. Could you click the slide forward once? That should bring up an animation — there we go. This is a comment I saw on LinkedIn while looking through the commentary on this. So okay, yes: plagiarism, suspicious. The authors have contravened Elsevier's policy — they've not acknowledged that they used ChatGPT or a similar tool in the sections towards the end — so there is obviously wrongdoing here. But what this person asks is: what about conscientious authors who've done quality scientific work in the main body of the paper, and perhaps the abstract, and then used an LLM to save some time on 'unnecessary writing'? I wouldn't use the phrase 'unnecessary writing' personally, but I understand the point: if I'm writing a long article and then need to write the abstract and a bit of introduction just for context-setting, why not use a tool to help with that kind of thing, feeding it what you've written so far? People could think that way, and I can fully understand it. And when your first language isn't English, and the norms are pushing you to write in a language that isn't the one you use every day at work — all the more reason. So I think some of the things we looked at with Eaton are already applying here: we've got the hybrid writing, we've got the barrier of translation coming down. I personally do have certain sympathies, though of course the authors should follow the guidelines: Elsevier's policy says it's fine to use these tools for readability, but what they should have done is acknowledge that they'd used ChatGPT or a similar tool. I suppose the question then is: would people have questioned the quality of the writing if the authors had been open about doing that? That's something we may well see soon — we did see that initial batch of people acknowledging ChatGPT as a co-author, and certain publishers really not being keen on that. But yes, this is maybe a working example of some of this stuff happening in practice. So let's stop sharing at that point and have a chat about it. Okay — co-panellists, while people are thinking, any thoughts on plagiarism and Eaton's arguments?

Yeah — it's not something I tend to cover with students; I feel like it can get really confusing for them. But when I've had these conversations with staff, it's been quite an interesting resource to bring in. A lot of the time they have concerns around some of the tenets and some of the points. The one I've really been trying to focus on is the one about relinquishing control but not responsibility, and that's still a message we share with students about the ethical, transparent use of AI. We point out that they are encouraged to use these tools, and that this is potentially something they'll be using when they go and get jobs, or go into further study and do PhDs. Once they start specialising a bit, they might be looking at ways of automating certain tasks, or be asked to do that at work.
But that's where it's really crucial for them to understand that even though they might be using these AI tools, they're still responsible for the content they produce. Even though it might be generated, they're still responsible for checking its quality and its factual accuracy, and for ensuring that what they submit is high quality and doesn't contain errors, biases or mistakes. That, to me, is what this tenet is really about: even if you're not generating the content, you're still checking it, because you're responsible for it. You're effectively the AI supervisor.

Yeah, that's really nicely expressed. Sometimes I think of it as us moving from being just authors to being authors and editors when we use machines in this kind of way — and that's another skill set, which brings us back onto the topic quite nicely. Okay, so final call: anybody who wants to put their hand up is welcome to; we can run over a little and that's fine. But assuming not, we'll just dive into the chat. So, a question here for Una: how is this work being received in your institution?

Well, we're only just starting. I ran the very first iteration of the workshop I'm aiming at staff yesterday, and it was really, really well received, though with quite a lot of concern about the AI detection side of things. Any time I have a conversation with an academic, they always tend to say: this is amazing, looks really cool, great work — but what about detection? How do we check whether students have done it the way you're teaching them to? Is there a way for us to detect AI writing? And generally the answer is always no — and they know it's no. But the work itself seems to be quite well received, so we're going to be putting it on our central staff training database; it's going to be part of the strategic initiative for training staff, which is going to be really awesome, and hopefully somewhere we can start these discussions with academics. There are some who are very, very against it, some who are very, very pro, and then the large majority, I would say, are cautiously pro — they have these concerns, these slight fears, but actually they really want to learn how to use it, and they recognise that their students are using these tools. I'm hoping to convert the people who are really against it, maybe rein in some of the people who are really pro, and make sure we're all giving the students the same message. That's what I'm really hoping this will be about — and that's what a lot of these academics seem to want it to be about as well: a message they can share with their students. Because a lot of them just don't know; they haven't got the time, the resources or the knowledge to go and experiment and spend the same time on it as we have, because they're teaching. They're talking to their students, but they might not have up-to-date knowledge of where their students are or what tools they're using. So in that way we can give them something they can then talk to their students about, and that's what these academics seem to be coming to us for.
They really just want something they can give to their students and say: this is what the library has given us; here are the resources; this is the message; these are the things you can do; these are the things you can't do; if you want to know more, go to the library. That's the way I'm hoping to position this, and the sessions seem to be getting quite nice attendance — any time we put AI in the title of anything it seems to attract lots of people, which is an easy way to get engagement.

Yeah. And what you're describing there speaks to a lot of what's come up over the previous days of the conference: the opportunity for libraries to take a lead, with others following, and for us to influence colleagues very positively — I think you've just illustrated that nicely. Susan mentioned in the chat the research funders' policy group and their joint statement on the use of generative AI, and how it doesn't condemn the use of AI in grant funding, but requires acknowledgement and advises responsible use — so that seems quite in line with Eaton and her arguments. Okay, before we close down, I just want to ask my fellow panellists: were there any other questions you wanted to put to the audience?

I think we've covered most of what I had to ask; I'm just having a quick look through the chat in case there's anything we missed or didn't pick up.

I'd love to know whether anyone's institution has subscribed to any AI tools — I think maybe people haven't, but does anyone get the sense any institution is close?

Probably. I would think there are places where these conversations are happening, and what it may boil down to is which tool — because the thing we've been talking about a lot is how quickly this field is changing. If you subscribe to a tool, what if it becomes obsolete because something more powerful becomes available free, or slightly cheaper? I think that's maybe what has stopped places from pulling the trigger and actually subscribing to a tool.

Not that I know of, either — and there are also institutional licences to think about; you might not be able to purchase them on behalf of your institution. Yeah. Okay, so some very nice comments here, thank you. Laura asks who else within institutions is doing this work — is it the sole responsibility of libraries? Over at Cambridge it seems to be devolved to digital technologists and computing officers, at least in the humanities. I think we seem to be, not sole leaders, but taking a lead on it, thanks to the work of Una and others. Am I forgetting anyone? I mean, we have a central AI group that brings in more of the university office-holders and some IT and organisational development colleagues, but a lot of the work done directly with students is mostly through us. There are individual academics who might have personal experience or knowledge of the tools, but most of the teaching, and the direction for teaching, is really coming from us — though we've got a central working group looking at more policy-level and strategic-level approaches.

There's a question about prompt library schemas and classification standards. Not that I know of, to be honest — it's on my list of things to go and look at, because I've been toying with this idea of a prompt library, in which case we would need some kind of schema or metadata for these prompts, to be able to actually use them and find them. But not that I know of; if anybody does know of any, please do share.
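As a thought experiment on what such a schema might look like, here is one hypothetical shape for a prompt-library record. The field names are invented for illustration, not any existing classification standard; a real scheme would presumably borrow from established metadata practice (Dublin Core-style elements, controlled vocabularies for task type, and so on).

```python
# Hypothetical metadata record for one entry in a student-facing prompt
# library. Field names are invented for illustration; a real schema would
# draw on established standards and controlled vocabularies.
prompt_record = {
    "id": "essex-prompts-0001",                 # stable identifier
    "title": "Generate candidate search keywords",
    "prompt_text": (
        "Suggest eight search keywords, with UK/US spelling variants, "
        "for the following essay question: {essay_question}"
    ),
    "placeholders": ["essay_question"],         # slots the student fills in
    "task_type": "search-strategy",             # from a controlled vocabulary
    "permitted_use": "ideas-and-keywords-only", # maps to academic-integrity policy
    "requires_acknowledgement": False,          # whether use must be referenced
    "tested_on": ["ChatGPT (free tier)", "Copilot"],
    "last_reviewed": "2024-03-01",              # prompts go stale as tools change
    "steward": "Library and Cultural Services",
}
```

Fields like `permitted_use` and `requires_acknowledgement` would be one way of encoding the 'these are the prompts you're allowed to use' policy idea Una raises, though how much of that belongs in metadata versus guidance is exactly the open question.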
Yeah, we'll start to wind down now — but yes, Susan made a point about Copilot as well. That seems to be the product that's available, something you can simply turn on, so that's probably step one for many institutions. It kind of comes full circle, doesn't it? We all have Microsoft operating systems on our computers because they were there first, on a mass scale — it's history repeating itself.