My name is William Nixon and I'm the Deputy Executive Director of RLUK, and I'm delighted to be chairing this session today. Following on from the fantastic keynotes earlier from Renee and Natasha, in this next session we're going to hear from three libraries on the roles they're playing with AI and how they're innovatively engaging with it: exploring the role for research libraries in tackling equity and in using it in their professional practice, and, as Kate Robson-Brown's keynote address yesterday commented, how we as research libraries can play a role in the wider cultural conversation about AI. So I'm delighted to introduce our first two speakers, Jenny Blake and Bonnie McGill, two members of the University of Manchester Library teaching team, who have outlined a reimagined information literacy framework that aligns with the challenges and opportunities presented by AI. This includes a new conceptual framework which interrogates ideas of information and advocates for a sector-wide philosophy of search. So over to you both, take it away. Hi everyone, good afternoon. We're really glad to be here with you and quite excited to discuss our thoughts around AI and search and the role of research libraries and information literacy. We wanted to do a brief introduction first. So I'm Jenny Blake, head of teaching and learning development in the library. Bonnie? Hi, I'm Bonnie McGill. I'm a learning developer, working specifically on AI at the moment in the library with Jenny. So before we get started, we want to point out that we've designed this whole presentation as a conversation. We want to talk about how it's really necessary for the research library approach to this to be emergent, to be discursive, and to carve out space for libraries to take advantage of their own expertise and understanding.
We really want to emphasize that there's a lot to be done here, but that libraries are one of the places that definitely should be doing it. Yeah, and as Jenny was saying, a lot of that conversation is really going to ground the framework in which we're thinking about discursive practice. So just to give you a little bit of a sense of what we're doing: we're going to start off thinking about ethics. From ethics we'll move on to thinking about information literacy and the types of critical thinking being asked of us, particularly now in light of AI. From this we'll transition into digital divides and think about what this really means, not just in terms of information and access, but moving beyond access: what do we do with the information when we get there? And then we'll foreground, as William highlighted at the beginning, the philosophy of search, that big thing at the end. And we're really looking at those areas that libraries have expertise in and agency over, and perhaps some areas where we have a tendency not to push forward where we do have that expertise, or where we're not used to exercising the agency that we might have. And with that in mind, we wanted to go ahead and get started: so, ethics. Okay, so in view of that idea of a discursive framework, what we need to be thinking about is perhaps shifting the emphasis and changing the practice. So rather than thinking that we know what ethics is, we want to think about this idea that ethics is a question and not an answer. And as you can see from the quote there, it links to work by colleagues on how we frame AI in our current discourse; they were thinking specifically about metaphors. But I think what this research particularly demonstrates is how we formulate AI, how we're thinking about AI, really changes our understanding both of generative AI and of its outputs.
So we need to be remembering that AI is still a tool. And particularly, again, the idea that AI is not an answer: we shouldn't just go to it for a unilateral answer that we don't question and engage with as a type of questioning. So we want to be thinking about that idea of what information is. Rather than thinking 'I already know what information is', let's start thinking about information in particular instances. And what we're trying to emphasize here is that the ethical questions will be never-ending; we won't be able to take a single sector-wide stance for or against, because we might be for in this case but against in that. So we have to start embedding the critical literacy, and the critical information literacy particularly, that libraries are actually really good at, but perhaps only within our own conversations with each other. We need to start pushing that out more broadly and almost demanding that our universities and the wider higher education sector keep listening. We want to ask people to ask themselves what information literacy is in this new age of generative AI. And with that in mind, we have the CILIP definition from 2018, which you can all read. But for those of you who aren't looking at the screen, the definition in that document is 'the ability to think critically and make balanced judgements about any information we find and use'. And we wanted to emphasize that while this definition is robust, and libraries themselves have a long history of considering information literacy in all of its different forms, we want to use generative AI as an opportunity to push information literacy almost out of the library sphere without ceding any kind of control or agency, without giving it away. We think it's really important in this new age of generative AI that we push information literacy as something that cannot be one-shotted, for those of you who do teaching in research libraries, into a module, right?
Supported with just one session, or done via a tool itself by enhancing it with AI. We want information literacy to become the backbone and the heart of reflections on research and reflections on search itself, so that it really takes its place as the key element in research that it is. Bonnie? Yeah, so drawing off what Jenny was saying there, what we're really looking at is expanding the scope of our understandings of information literacy, which I think is really exciting. And it gives us, as Jenny was saying, a whole new way of thinking about what information is. And I think the brilliant thing about AI is that it's asking us to question research practices that we've got used to but that I think we can improve on. And I think we're in a fantastic position as libraries to do exactly that. So what I'm going to take you through next is this reconceptualization of information literacy, and to talk to you about formations of information. We've left the CILIP definition up there because it's going to be the grounding for what we're doing, but we're going to start thinking about how we can move it on, how we can develop it in light of AI. So one of the issues we see around generative AI, which we can begin to address with perhaps new research practices, is that AI and its outputs break the context and the content of information. So for example, ChatGPT doesn't cite its sources, and sometimes it makes them up. We're very used to looking at, say, a journal article: it has an author, and we can perhaps go and look at the author and see what else they're researching, and there'll be a list of references there. So there's a sense of credibility, and also the fact that an author is situated somewhere. And we're quite used to ideas of situated knowledges, quoting Donna Haraway there if you're interested.
But the problem here is that generative AI isn't necessarily linked to an idea of authorship. So what we're advocating is a framework which actually builds an awareness of the context, and that context is the AI, the technology which is producing the information. So look back to that idea of judgements of information from the CILIP definition. As Jenny was saying, this isn't now about assuming we already know what information is; rather, we want to shift the emphasis to it being an ongoing conversation: what is information in this particular moment and in this particular instance? So we're moving away from the idea that we have concrete answers about information, and really thinking about a framework of thinking that responds to instances, and particularly one that's iterative; I keep coming back to this. This isn't a static framework that we're thinking about anymore. And as you can see from the middle quote from Lee, the idea here is that information is produced by the technologies: there is in a sense no separation between the context and the content. We need to be thinking about the framework within which these informations (and I say 'informations' quite deliberately, because we're thinking about plurality here) are being produced, and the meanings, another plural, which are being given to those informations because of the technologies within which they're embedded. So these pluralities are really, really crucial. It's not that we've made a final judgement. We want to extend the definition that CILIP has given us, a fantastic grounding, to think about what it means to have a formation of information and to consider that in the practice of research. So we're not thinking about the output, the information, as separate from the technology, but rather thinking about it within that wider framework.
So within that, what's really important to note and be thinking about is ideas of the digital divide. With that particularly in mind, we wanted to focus on this idea that often we look for the obvious problem, right? And we have a history across the sector, not just in libraries in any way, shape or form, of addressing the point of failure that we see. So in this case, it might be a visual representation of someone who does not have access, whether because they don't have the technology or they don't have funding to acquire the app, whatever it is. We tend then to layer resource and effort onto that point of failure instead of walking back to where the point of pressure might be creating that issue. And so what Bonnie and I would like to advocate, inside this iterative, emergent framework for a philosophy of search, is that access isn't sufficient. We're talking about agency, and that has a lot to do with one's ability to contextualize, right? To contextualize and understand the information as it's presented to you. And what we're saying now, in terms of generative AI, is that that understanding needs to move beyond 'can I judge this? Is this sufficiently rigorous? Can I tell who wrote it?' to a realization that when that connection is broken, other things must be brought into play. In addition to which, it's essential that libraries are at the forefront of demanding, where possible, and especially where it intersects with our core business, that whenever AI or tools like generative AI are brought in to enhance anything, we advocate quite firmly for the ability of anyone to access and then use them. That there isn't this kind of gap between purchase and use, right, where libraries are expected to somehow magically make it happen, but that it should be built into the tools. And if it isn't, we should demand that it is. Yeah, so I'm thinking about what happens when someone has entered that digital space.
How do they start to think about that information and what they have encountered there? So what we're really advocating for is a rethinking of research within the research space. And as you can see from that top quote there from Columbia, there's a real connection between the user and the computer, and this is all embedded within a discourse of power and agency as well. So, on the penultimate point there: as hosts of information, we have a responsibility. This is bound up with the idea of ethics: being a host of information is not a neutral position, and neither is research. But this also means, as Jenny was saying, that we have the ability to advocate for a new concept of how we're researching and what we're thinking about. And as libraries, we can be the leaders in this. So we want to be thinking about what originality in research is. A lot of this is about questioning current practices, and we can start questioning the AI as part of what we're practicing as well. So it's not that there's a separation between the framework and the thing that we're looking at; these work together in conjunction. And as Jenny was mentioning earlier, it's not the thing that happens in one particular module. This is the practice of research that we're advocating. So going right back to the beginning, this is quite a fundamental change of practice that we're thinking about here. We want to be thinking about the structures within which information is formed. Libraries already have a huge amount of expertise in the sector. We're already at the forefront of thinking about information. We're leaders in research, helping students to produce new research and new research practices and to think critically. So as Jenny was saying earlier, we have the ability to use that power and not concede ground to tech companies, but rather to advocate for something else.
And what we want to call for, really, is a philosophy of search. So I'm going to pass over to Jenny for the dramatic slide. So, we realize that you've been promised a fully formed framework and a fully formed philosophy. But if you were paying attention at the beginning, when we said there are no answers, only questions, we'd like to end with the idea that we're not done yet, and neither are you. What we'd really like to do is do this together. We know it needs to happen. We are aware every day of the implications, not only for how students are learning, but also for how researchers are researching, for how back-end processes are being formatted inside our own spaces, and how the digital divide is increasing without change in some places. We know it needs to happen in libraries. We are the places with this understanding. We are the places with the history of critical information literacy. We are the places, at least at the University of Manchester, where everyone's card works, right? We'll stop. We're here for everyone. And we know it needs to start now. So what Bonnie and I would like to do, reminding you that we started with the fact that there are only questions and no answers, is ask you to join us. We want to ask you to join us at things like this and advocate, as the keynotes here have and as I'm sure our follow-up speakers will. We want you to advocate with your students and fellow researchers. And most of all, we want to network together, because we feel that a philosophy of search that comes from the sector is one that can shift practice. And once we start shifting practice, we'll shift culture. And once we shift culture, then we'll be able to speak with one voice and make the kind of changes that we know are necessary to ensure equitable, innovative and original research continues. So Bonnie and I would like to thank you. We'll hang about for questions, clearly. And we hope you're having a great time so far. Thank you. Thank you both very much.
Fantastic way to throw down the AI gauntlet, and a little bit of a provocation, which I think we'll definitely be picking up in the Q&A. I just wanted to move on to our next speaker, Anna, to pick up on some of her experiences in working with ChatGPT. So could I hand over to you, Anna, to introduce yourself and take things on. Thank you very much. So hi everyone, and thank you for joining our session today. And thank you as well to Jenny and Bonnie for that super interesting presentation. So my name, as it says on the slide, is Anna. I'm a librarian at McGill University's law library in Montreal, Quebec, where it is currently minus 10 degrees Celsius, feels like minus 20. So happy spring. I swear it's not always like this, but also maybe don't visit Canada in March. So today I am presenting on behalf of myself and our head librarian here at the law library, Sandy Hervieux. Last fall, we co-presented a workshop to our law students on how to use ChatGPT both ethically and effectively. So today I'm going to tell you our little story of that workshop: why we did it, how we developed it and the student response. Oops, sorry, I'm going the wrong way. There we go. So for our overview, I'm going to start with some background on our faculty, what we were seeing and why we felt this workshop was so important. And then I'm going to cover the content a little bit to give you an idea of what worked for our students, because we really did have a great response. It was one of those warm fuzzy librarian moments where you really feel like you had an impact, so let's share that. So to give you some background: since last winter, when ChatGPT made its big splash, we noticed a few things. From instructors in particular, we noticed a lot of apprehension.
We found this came primarily from a lack of understanding of AI's limitations. They weren't using it, and they were scared of it, and honestly, fair enough: I'm enough of a sci-fi nerd to not be completely comfortable with AI. But when you actually use it and see what it can and can't do, I find it becomes a lot less scary. However, because instructors often have a more philosophical or theoretical apprehension, they don't understand the practicality of it, and they either ignore it or they outright ban its use in their classes. We think this mentality puts students at a disadvantage. Students are using this as a research tool, and as was discussed in the last presentation, whether we like it or not, it falls within a librarian's domain and responsibility to address it. So this is what Sandy and I did. And what we saw was a huge disparity in how students were using AI. Some of them didn't use it at all. Some were using it in those very problematic ways that instructors were worried about, although not getting away with it, because they didn't understand what a bad job AI was doing on their assignments. Others were actually using it effectively: they were doing their work and their research, they were citing good sources, and they were mostly using ChatGPT kind of like a glorified Grammarly. So while we were happy to see some students had already figured out how to use AI (ChatGPT specifically is what they're using), we found this wide range of applications to be a major equity concern. Some students were benefiting because of their prior use, knowledge and familiarity with AI tools, education they'd had, or it being addressed by previous instructors, while others were left behind. And that created a huge disparity and a major equity concern. So to give you a little bit more background on how we gathered this student feedback: I just gave you a bunch of information, so how did we get that feedback?
To be honest, this was all done through informal interactions. We talk a lot with our student employees at the library, several of whom are law students, and they are often open and candid about their experiences. And also, I personally do over 200 student research consultations every year. So I would actively recommend a student use ChatGPT, in an ethical way of course. This would put the student at ease, so they'd feel like they weren't doing something wrong, and it would spark a discussion about how they were using it and how to use it, and it also provided really great insight for the workshop we were developing. The other important component to building this workshop was gathering examples. We did not want another workshop on how generative AI works or its ethical problems, although of course we did touch on these; we wanted a practical workshop that showed students how the ethical use of ChatGPT was also the most effective. So we played around with it, we used ChatGPT a lot, so that we could build a repertoire of examples to show the students. We could show them how it enhanced the clarity of awkward writing, or how to phrase your prompts for the best response, and also just how badly ChatGPT writes on its own if left to its own devices with little instruction. And of course, we really underlined why you should never trust citations from ChatGPT. Now, moving on to the workshop, I'll talk a bit about the actual content. This was our outline. The intro parts were very brief and high-level so that we could focus on examples and activities. We started out with a survey, which reflected a lot of what we had already gleaned informally: they were using it to edit writing, create outlines, explain concepts and do translations, which are all mostly great uses. Obviously there's some grey area, but these are the kinds of uses we recommend if done correctly.
The major issues we saw here were really in finding case law and writing papers, which we'll touch on in a minute. We also gave them a free-text option where they could put in other uses; they put in some other great ideas like writing emails and other personal uses, which were mostly unproblematic, although 'friendship' is potentially a concern, but more one for our student wellness department. And we did find this quite enlightening and useful information. So for our first example, we wanted to talk about citations, because this is a major issue for all disciplines. We addressed the issue of finding case law, and for this we had a great example of a lawyer who relied on ChatGPT to find case law and turned in a brief with fake citations. This is a really impactful example for law students, because lawyers can be punished very severely, or lose their license, for something like this. So that really resonated with them. Next we wanted to do an activity, so we did a prompt-building activity. This again was very impactful and very targeted to our specific audience, as we do have a bilingual program, and legal translation can be very complicated, specifically because in Quebec we often use terms that don't align with either the rest of Canada or other French-speaking countries. So I had them compare translations of a legal term into French using both Google Translate and ChatGPT. And whereas Google always gave them the same response, ChatGPT gave varying responses based on how you ask the question. This sparked a really engaged conversation about how everyone asked their questions and how to build the most effective prompt. So, another example, which was really fun and very succinct: this one showed ChatGPT's writing style, its issues with understanding nuance, and again how to build an effective prompt.
So I used this example from another class I did, where I was giving students a quiz and I wanted them to locate a particular EU document entitled 'Our World, Our Dignity, Our Future'. I had to come up with wrong answers for a multiple-choice quiz I was giving them, and this seemed like a good opportunity to play with ChatGPT. But as you'll see, it took my question very literally and tried to give me responses that were synonymous with what I gave it. This of course ended up sounding very awkward and unnatural, giving things like 'earthly heritage' and 'personal honor'. So I clarified: I asked for something that 'sounds the same', by which a human would understand that I want something with the same vibe, and ChatGPT interpreted that as me wanting answers that were phonetically similar. Obviously not what I was looking for, and I would never have been asked back to this class if I had used those answers, which I really hope you're all trying to pronounce right now. So I finally wrote an effective prompt providing context and a more exact description, which gave me usable answers, and probably took more time than just writing them myself, but it was worth the laugh. So finally, we did an activity where we asked them to find a book on a specific legal topic. And some of them still used ChatGPT, even after we had explained that it makes up fake citations. This activity really seemed to hit home with them. There is something about getting an answer wrong that really makes overachieving law students pay attention, and it really reinforced the importance of getting them to do these activities, so that they're not just sitting back with information going in and out. They are participating and getting something wrong, and that's how they learn best. So it was a really successful workshop and we got really awesome feedback from it.
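Incidentally, the lesson of that prompt-building activity (explicit context and constraints narrow the model's interpretation) can be sketched as a tiny helper. This is purely illustrative; `build_prompt` and its fields are hypothetical names, not anything from the workshop itself:

```python
def build_prompt(task, context="", constraints=None):
    """Assemble a structured prompt: context first, then the task,
    then explicit constraints, leaving the model less room to guess."""
    parts = []
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {task}")
    for c in constraints or []:
        parts.append(f"Constraint: {c}")
    return "\n".join(parts)

# A bare prompt invites literal or phonetic misreadings...
vague = build_prompt("Suggest wrong answers for a quiz question.")

# ...while context and constraints pin down what 'wrong answers' means.
precise = build_prompt(
    "Suggest three plausible but incorrect document titles for a multiple-choice quiz.",
    context="The correct answer is the EU document 'Our World, Our Dignity, Our Future'.",
    constraints=[
        "Match the rhetorical style of EU policy document titles",
        "Do not paraphrase or rhyme with the correct title",
    ],
)
```

The same shape works whether the prompt is typed into a chat window or sent through an API.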
So, just in conclusion, we wanted to stress how much this really is an equity issue, and that librarians do play an important role in bridging that equity gap and teaching ethical and effective uses of AI tools. It's also so important to use examples and activities that are catered to your audience and really resonate with them. And, yeah, that's it. Thank you so much, and don't hesitate to reach out to myself or Sandy if you'd like to hear more. Those were quite astonishing tongue twisters it seemed to have made for you, and a lovely demonstration of the power of prompts, so thank you very much. And last but not least, our final speaker for this session, and indeed for the day, is Andrew from the University of Edinburgh. The exponential increase in the volume of research papers has meant that scanning the existing literature before starting research is becoming impossible, and I think Andrew is going to explore how AI can help find and present relevant information in a fraction of the time. So over to you, Andrew. Thank you very much. So I'm going to talk about systematic or literature reviews and how AI can be used to dramatically cut down the amount of non-intelligent work that's required in order to do a review. I'm just trying to get the slides right. So this comes out of work we started at the University of Edinburgh. I work for a part of the university called EDINA, which offers services to all UK universities, things like Digimap for geoscience. So we're always interested in ways that we can enhance educational systems, and we work with the library a great deal. The project we worked on initially was for the Bill and Melinda Gates Foundation, who were interested in animal health data which would allow them to make evidence-based interventions in sub-Saharan Africa, in the poorer regions that they were most interested in, because of the basis of the economy and the viability of life there.
Both depend on the health of the animal herds at the lowest level. And so what they were looking for was, say, areas where there's an endemic situation of brucellosis in cattle and a low availability of the brucellosis vaccine. That kind of data is available in institutional libraries; you can find it on Google Scholar, PubMed and so on. And a trained vet researcher would probably be able to answer that question after about three months or more of looking through institutional papers. Obviously, the Gates Foundation has a number of partners all looking to make interventions in that region, and it just wasn't practical in any sense to take three to nine months to answer every question that came up. So what they did was bring in a couple of PhDs from a mathematics department here at the University of Edinburgh and ask them whether there was anything they could do with this. The PhDs came up with a few scripts they could run on a laptop, which basically took the existing manual data that the vets had come up with, in the sense of the searches they'd done already, used that to train a machine learning app, and then asked it the queries based on the same data. This was quite successful, and it took the basic work down from three months to three weeks, and then EDINA came in to productionise it, and we're getting down to more like three days. Now, one of the interesting questions put to one of the PhDs was: well, we've been looking at cattle so far, could we look at goats? And they said yes, and anything else. Depending on the training data, the point was, you could use a system like this for any kind of literature or systematic review. So I then went around the University looking for reviews.
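As a rough illustration of that screening step, here is a minimal relevance classifier in plain Python. It is a naive Bayes sketch over invented toy data, not the scripts the PhDs actually wrote, which would have used proper machine learning tooling:

```python
import math
from collections import Counter

def train(labeled):
    """labeled: (title_and_abstract, is_relevant) pairs from earlier manual screening."""
    counts = {True: Counter(), False: Counter()}
    totals = Counter()
    for text, label in labeled:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def score(text, model):
    """Naive Bayes log-odds that a paper is relevant; positive means 'probably keep'."""
    counts, totals = model
    vocab = len(set(counts[True]) | set(counts[False])) + 1
    logodds = math.log((totals[True] + 1) / (totals[False] + 1))
    for w in text.lower().split():
        p_rel = (counts[True][w] + 1) / (sum(counts[True].values()) + vocab)
        p_irr = (counts[False][w] + 1) / (sum(counts[False].values()) + vocab)
        logodds += math.log(p_rel / p_irr)
    return logodds

# Toy training data standing in for the vets' earlier manual searches.
model = train([
    ("brucellosis prevalence in cattle herds", True),
    ("brucellosis vaccine coverage in goats", True),
    ("urban water quality survey", False),
    ("road traffic accident statistics", False),
])
keep = score("brucellosis vaccine availability for cattle", model)
drop = score("road traffic survey methods", model)
```

Swapping the training pairs is all it takes to move from cattle to goats, or to any other review topic, which is exactly the flexibility described above.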
I'm not an academic, so I don't do any research myself; I needed to find somebody who was about to start a literature or systematic review and see whether we could help them. I ended up, strangely, going from veterinary science to engineering, specifically microfluidics within engineering, where they were about to start a review. Now, their challenges were completely different. The vets were looking through thousands of papers, trying to find relevant data, extract it and put it into a data visualization. The engineering researchers in the microfluidics department were in more of a niche subject, where they essentially know the papers they're interested in: it's maybe 40 or 50 papers that they'll immediately know are going to be relevant. But in order to answer the research question they're trying to, they need to read those papers quite carefully, and that takes quite considerable time, because there are lots of equations and the equations have substitutions within the text that need to be made. So it's very close reading. They also do a lot more, but shorter, reviews, so the scale of those is about 200 hours per review rather than the months and months the vets were spending, but quite a number of them are being done, and across the entire engineering department you've got hundreds of people. So that's hundreds of reviews every year. The approach we used was to convert the PDFs involved into LaTeX, using a maths interpreter called Mathpix, and then have a chat interface, basically using ChatGPT but tuned very carefully: we essentially told ChatGPT it couldn't make anything up and that all the text it brought back had to come from an actual paper. So it was basically just a chat interface that allowed engineers to ask questions like 'how is the Reynolds number applied in these papers?', and it would bring back all the instances of the Reynolds number in that particular application.
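The 'answer only from the papers' constraint can be approximated even without a language model: retrieve sentences verbatim and cite their source. A toy sketch, with invented paper IDs and text, not the actual Mathpix-and-ChatGPT pipeline:

```python
import re

def grounded_lookup(papers, term):
    """Return (paper_id, sentence) for every sentence mentioning `term`.
    Everything returned is verbatim from a source paper, so nothing is made up."""
    hits = []
    for paper_id, latex_text in papers.items():
        # Naive sentence split; a real pipeline would parse the LaTeX properly.
        for sentence in re.split(r"(?<=[.!?])\s+", latex_text):
            if term.lower() in sentence.lower():
                hits.append((paper_id, sentence.strip()))
    return hits

papers = {  # stand-ins for Mathpix-converted papers
    "smith2021": r"We compute the Reynolds number as $Re = \rho v L / \mu$. Flow is laminar for $Re < 2300$.",
    "lee2022": "The channel geometry is rectangular. Mixing is diffusion-limited.",
}
hits = grounded_lookup(papers, "Reynolds")
```

A language model then only has to summarise or navigate these grounded snippets rather than generate facts of its own.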
And the result of just that fairly simple interface was again about a 90% reduction in review time; in other words, much the same as we saw with the vets. And this allowed the engineers, because review time was cut to such a short period, to start thinking about doing longitudinal or meta studies, because it made their lives so much easier and it was so much quicker. Now, we were obviously thinking about this as a service, so for us the cost-benefit is quite important. We came up with some rough numbers: 450 reviews a year, at about 200 hours each, and we're eliminating 90% of that, which is quite a considerable saving; just in PhD and early career researchers' time, we reckoned it to be about a million a year in time saved. So we decided to start looking at that more seriously, as not just a service that we could offer to university members, but beyond. Now, once we started to see the implications of the power of the AI systems we could put in place, which were fairly straightforward (I mean, we're not talking about proper research into AI, we're not expanding the field, we're using existing AI techniques that we know work), that brought us to the academic support librarians here, and we were introduced to a hypochlorous acid study, a medical study. This is run by a couple of surgeons who do cleft palate repairs on children in this country; however, their interest and their research topic was the problem of being able to do this, and that kind of surgery, in low and middle income countries, where the antiseptic conditions required for that kind of surgery are very difficult to obtain. Hypochlorous acid is an acid used for antiseptic purposes by the body itself; it breaks down into completely neutral substances, making for very easy cleanup, which is usually the challenge in providing antiseptics in those settings.
The study was probably going to fail because of the nature of the literature they needed to go through. They were used to doing medical reviews, but it turned out that hypochlorous acid is very, very rarely used in the West in surgical situations, because we use much stronger antiseptics and we have the facilities to clean them up; where it is used is much more in veterinary scenarios, in food preparation, in environmental cleanups. So they had to look at a very wide body of literature — and the implication is that anyone doing cross-disciplinary literature reviews is going to really struggle: they basically had tens of thousands of papers to look at. However, by the time we met them they had managed to do, I think, 4,000 papers themselves manually, just title and abstract reviews, and that gave us an enormous amount of training data that allowed us to do what they'd originally intended, which is to get through those 10,000 papers and break them up into categories fairly quickly and easily. So, yeah, we definitely rescued the review there. And it obviously has major implications for the real world: if they can get that review complete, the study can complete, and then that research can move forward. Working on the hypochlorous acid study, it became clear that there are three main techniques we need to look at. Classification: this is a machine learning classifier approach. It's computationally quite cheap and can be run on a laptop. All you need is a few hundred manual examples of relevant or not relevant, or of a categorization, and you can train a machine learning model to find that. That's what we did for the vets, and it's what we would use in the medical applications as well. Then there's an information extraction step: this is basically named entity recognition, and you essentially have to annotate a number of papers with the key terms — I'll give a quick example of that in a moment.
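The classification step Andrew outlines — a few hundred manually screened title/abstract examples training a cheap, laptop-scale classifier — could look something like the sketch below. This is a generic illustration using scikit-learn, not the actual Edinburgh pipeline; the example texts and model choice are my own assumptions.

```python
# Illustrative relevance classifier for title/abstract screening --
# a generic sketch, NOT the actual pipeline described in the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Manually screened examples (in practice, a few hundred of these).
train_texts = [
    "Prevalence of Fasciola gigantica in cattle in Uganda ...",
    "Microfluidic channel design under low Reynolds number flow ...",
]
train_labels = [1, 0]  # 1 = relevant to this review, 0 = not relevant

# TF-IDF features plus a linear classifier: computationally cheap,
# easily trained on a laptop, as described above.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Screening an unseen paper; predict_proba supplies the confidence
# score used later in the human feedback loop.
proba = model.predict_proba(["Fasciola infection rates in Ugandan cattle herds"])[0]
```

The same recipe scales from a few hundred labeled examples to screening tens of thousands of papers, which is the economics that makes the approach attractive.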
And I think all of our applications are interested in the idea of language models, but only in the chat interface sense of taking existing papers and tying the language model to those papers, so it can't answer from anything else and can't make anything up — but it can take you to the key points in those papers. This is just a very simple real-world example from the hypochlorous acid study. The tags in green are the ones provided by a human; the model predictions are in blue; and the model entropy just tells you how confident the AI is — and this is key: we need to make everything explainable and reproducible. So one of the things we want the AI to do is highlight when it's not too sure, because that's what we then use in a feedback loop, so that the least confident answers from the AI can be examined by the humans and confirmed or changed, and that will actually improve the AI as we go along. This is the named entity recognition I mentioned. So, if you imagine, at some point a human has gone in and identified Fasciola gigantica and fasciolosis as disease terms, identified Uganda as a region, cattle as a species — this is the training element. And what it does for you, once the AI gets hold of it, is that if you look at the sixth line down, it knows that in "the present study, 29..." and so on, the number 29 is the sample size of Fasciola flukes collected in that study, and once trained it can simply pull out that number for you and put it into a spreadsheet. That's the key difference from, say, just doing a Google search: it's actually going to go out, find the key terms you've trained it on, and then pull out, from thousands of papers, just the key data you actually need. Now, a number of speakers have talked about the dangers of generative AIs, and that's something we're very, very aware of and something we specifically tie down.
There's a creativity element, or parameter, to anything you use with ChatGPT or other language models, and that can be tuned to zero so it no longer creates anything — this addresses the hallucination problem. Also, we're not looking for it to do summarization, which is also problematic: it's a really difficult area within AI to accurately summarize text. So what we do instead is use LangChain to tie it to very specific documents, and then the chat interface is just normal human language, making enquiries about the contents of a paper. So the key thing for me, trying to think about how we would create a service based on this, is that cost-benefit analysis. I'm sure that at least 50% — but in our practice 90% — of the work can be eliminated in a systematic literature review, where it's repetitive and spread over a considerable time. There is a sort of cost of entry: you're still going to need to do some manual analysis and provide training data to the AI at the outset. So the study has to be of a particular scale: you've got to have significant repetitive work that you're not really having to think about, and the AI is very capable of taking that over. I've claimed there's improved accuracy. I haven't got a great deal of citations on that, because it's not a well-studied field yet, but I can say that we use something called the Kappa test, which some of you may be familiar with — it was originally intended to compare different humans doing evaluations for a review — and what we do is make sure the AI agrees with either human involved in doing the manual part of the review as much as the two humans agree with each other. That's one of the key things we make sure of, so that we've got confidence in what we're doing. Now, the question — and this is where things get quite interesting — is the impact of this if we were able to provide these tools on a wider scale, as in for all researchers.
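The Kappa test referred to is Cohen's kappa, a chance-corrected agreement statistic; scikit-learn ships an implementation. The sketch below uses hypothetical screening decisions to show the acceptance criterion described in the talk — the AI should agree with each human about as well as the two humans agree with each other.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical screening decisions on ten papers (1 = relevant, 0 = not).
human_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
human_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
ai      = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

# Cohen's kappa: 1.0 = perfect agreement, 0.0 = no better than chance.
kappa_humans = cohen_kappa_score(human_a, human_b)
kappa_ai_a = cohen_kappa_score(ai, human_a)

# Acceptance criterion from the talk: the AI-vs-human kappa should be
# comparable to the human-vs-human kappa before the AI is trusted.
```

Raw percent agreement overstates reliability when one label dominates (as it does in screening, where most papers are irrelevant), which is why kappa, not accuracy, is the standard check in systematic review methodology.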
Now, it's not mandated that you do a systematic literature review for every piece of research — apart from the NIHR, which does mandate it. And it's increasingly becoming the case, certainly within medicine and some of the wider fields like that, that you do need to confirm you've actually looked at the literature and you understand whether or not your research is practical and whether it's been done before. But we would expect that, as the literature expands and there are more examples of people doing something again, or pursuing an approach that's already been shown to be a poor direction, there's going to be more and more emphasis on "have you checked the literature?" — and then people going, "well, there are thousands of papers to read; how do I do that?". And if you just have some knowledge of your field and can do a few hundred checks, then immediately you can pump that through an AI and go up to thousands more. The impact of this is that — and this is true for all areas where reviews are mandated — the research funding you get tends to involve about 50% of it being spent on the review itself, before you do the research. If we could bring that figure down to a few days of work, then what you would do, of course, is your systematic literature review before you apply for funding, and you would therefore have a much better chance of actually being successful in your funding application. And you're also going to do much better research, because the quality is going to be underpinned by the fact that you've already confirmed you've got the right approach and the literature backs you up. And that is me; thank you very much. — Thank you very much, Andrew.
And I think, yeah, that figure of one million pounds in savings is quite eye-watering, and it's really interesting to see something more nuanced — we've been talking a lot today about black boxes, but that feels like a very scholarly and nuanced black box that you've been using. So what I want to do is invite Jenny, Bonnie and Anna to come on screen, and let's open up some of that conversation. We've got some questions in the chat, but I'm going to exercise chair's privilege and start with Bonnie and Jenny. I liked your framework — you threw down that gauntlet — and I think the question is: how do we, as a community, how do you envisage taking forward some of that discussion, that framework around the philosophy of search, which you pitched? — I assumed you were going to let them answer the questions first, so I wasn't ready to answer anything right away. One of the things I should say is that there's a really important book to me called Emergent Strategy, by adrienne maree brown, and she talks about something called seeding the small. The idea there is that it's little cumulative steps that make the difference, and also this idea of the agency of the small group to make a change. So what I would advocate for is for people like Andrew and Anna, and everyone who's clearly innovating in their spaces, to ensure two things. One: articulate that overarching philosophy of search — the thing that should sit on top of these efforts — so that we know why, not just how, we're approaching things the way we are, and that we're talking to each other about that why. Think back to how generative AI has impacted us throughout this last — it's not been that long, but throughout this last period of time.
One of the things I started talking about right away was: well, if we're going to make everyone declare whether they've used generative AI, and talk about whether it was a collaborator, why don't we then push against this idea of the isolated lone genius — the person who thought of it all, the one person on the article? Why don't we advocate for a change that not only says "yeah, I got some help from ChatGPT", but also acknowledges that, historically, their wives wrote it up? This is an opportunity for libraries to put out there what we know about research and research practice, and to use generative AI as a lever — because everyone's quite excited about it — to get in there and question things, and have a bit of a taller soapbox than we perhaps usually do. But I'll hand over to Bonnie. — Jenny, I think you said most of the things I would have said; there's not a lot to add. But certainly that idea that change is small and incremental, and, as Jenny was saying, the collaborative element with the students as well: talking about why it is that we're doing particular things. I think there are a lot of black boxes around research, particularly when first years are reading those research papers and they look really glossy and brilliant — and there are those kinds of concerns about research practices, as Jenny was saying, the lone genius in a room, an idea that has spread and been handed down in such a way. So I think we should be thinking about how we present ourselves as researchers, and also about the idea we mentioned earlier in the talk that research is an iterative practice; it's not static. And we should always be thinking, as Jenny was saying, about how we are reading information — because a lot of the time, particularly once we've learned and acquired a particular type of knowledge, it becomes almost unconscious.
We don't think about what it is that we're reading; we're just doing the reading. And I think for people, particularly for students — who are going to be the next generation of researchers — it matters to be thinking about how it is that we're reading and what it is that we're reading as information, because language is not a neutral space. And we're producing information and we'll be consuming information. So I think, as Jenny was saying, starting on a really granular level, particularly with the students, and being open about our research practices and what it is that we're doing, is certainly a way to start. — No, I think that's great. And I think that really leans into some of the other discussions we've had during the conference around attribution, and also, for us at RLUK, thinking about the role of libraries as partners and pioneers in the research process, where we can build that capacity and capability around research. Jenny, you've answered it again in the chat: that's adrienne maree brown's Emergent Strategy — that will be going on people's reading lists, I'm sure, not long after this. Thank you very much. All right, so I'm going to go to some of the other Q&A, and there's one for you, Anna: a comment that your workshop sounds utterly amazing, which I wholly concur with, and the question is, do you think that same type of workshop, or something similar, would be successful for library staff? Have you possibly tried it on library staff? — Yeah, I mean, for librarians at McGill we've had a couple of AI sessions, but we haven't done it for our whole staff, so that is actually a really interesting idea. But I think, in general, this is something that everyone benefits from.
I've been to a lot of online or in-person sessions where the room is asked, "have you used ChatGPT at all?", and I am shocked how few people have actually used it. I'd say maybe about half of the people in a room full of librarians will never have used ChatGPT — and this is as recent as last fall. And I was bewildered by that. It doesn't mean you're using it for your research; it doesn't mean you're using it inappropriately; but just out of curiosity, how could you not get in there? And I think a lot of it does come back to what I touched on at the beginning about the response we saw from instructors, and even some other librarians, where there's this fear around it, and this over-assumption of what it can do and how advanced it is — which is potentially something we should be concerned about for the future, but not this current iteration, if you actually use it. And I'm sure anyone here can attest: if you just let it write an assignment for you, if you let it write a paper, it does a terrible job. It's not replacing human critical thinking. And so I think there's a real key in stressing that in these kinds of workshops for staff and for instructors. And we do in fact have — not for library staff, but for law faculty — another session in April, where we can get to some of these points with the instructors, with the hope that as they're teaching their classes they are reiterating these ideas, and not just telling their students not to use it, or coming in with a completely preconceived idea of what it is. But yes, I think that would be a great idea. — Yeah, and you make an interesting point there: I think possibly the less successful strategy is the one which says "don't use this at all", rather than the more co-partnering approach, some of which you demonstrated in your workshop.
And I think that really brings it home. I wonder as well if there's sometimes that slightly foundational question about which flavor of ChatGPT, or how colleagues access it — I mean, does McGill have a license? Do you have a license, or are you just using the free version? — No, we're just using ChatGPT 3 right now. And that is something we're also really concerned about. For future iterations of this workshop we'd perhaps do it as a two-parter, or have another session where we can dig more into comparing different iterations and different tools, especially within law — legal databases — because they have some of the most funding compared to a lot of research databases out there. There's a lot of money in law, and so they're producing some very advanced tools very quickly. And this actually creates another huge equity concern, not just for places like McGill, where we're subscribing to all these databases that are using these tools and will generally get access — but what about the small law firms out there that can't afford these tools? Then we're creating the same equity issue I talked about seeing here with the students, just replicated out in the world, with lawyers representing people who can only afford certain lawyers, creating an even bigger disparity. Which I think is, again, a place where librarians have a role: to work together, to work with open AI platforms, and really strive to make those as good and usable as they can be, and to teach those to students. And something else I talk about with students a lot is: right now you have access to Westlaw and Lexis and all of these really expensive databases, but you don't know where you're going to end up.
You don't know what they're going to be subscribing to, and this applies to AI tools along with other fancy legal databases: you need to learn the free version, you need to learn the open version, and those versions need to be good and usable — or else it just creates this huge equity concern. — Yeah, thank you very much. There's a follow-up there, which I think you've partly answered: have you run the workshop again, or do you plan to run it again? — Yes, absolutely, we're definitely going to do that workshop again next fall, and we're hoping to do it as a two-parter, because we had a huge turnout — 50 students from a pretty small faculty; we have about 600 undergraduate students, so to get 50 students is pretty good for a library workshop. And the student engagement was really enthusiastic; we got so much positive response from it. So I would love to do another one where we lean even more into different tools, and also more practical uses — maybe get them to workshop some of their own writing and their own use cases. I would love to do that, and again, next month we're doing a session with the faculty. — And actually that was the follow-up question: have you found academic staff or faculty attitudes changing towards the use of AI, and are you working with them on AI use? — So, obviously the workshop next month, but also in a one-on-one capacity: I go into a lot of classes and do one-offs in different law classes, and a lot of those instructors will have term papers and will be expressing concerns. And again, it really comes from this lack of understanding of what these tools are capable of.
And then, after going over it with the instructor, they'll say, "oh, can you cover this in your session when you come to my class? Can you please talk about this and give examples?". They're actually very open and enthusiastic once they have that understanding. And some of them are really scared of it, but I don't think I've had anybody who hasn't come around at all. On the other hand, I have also had law professors who are just saying, "oh, can I just get ChatGPT to do my lit review for me?" — which, as we've seen today, maybe has some potential, but at the moment just asking ChatGPT to write a lit review on seditious conspiracy in America is not really going to work out too well. — Fantastic, thank you very much. I'm just going to move on — we might come back to you, Anna — but Andrew, you've elicited some interesting questions as well. One of them is: do you think there are any concerns that postgraduates or early-career researchers are missing out on developing critical synthesis skills by outsourcing some of these tasks to AI? Is there a value in doing the close reading for researcher development, and how can those skills be maintained alongside the efficiencies gained by the tools? And a shout-out to the Bristol watch party. — So this has come up in our conversations with senior staff, and there's this idea that it's really educational to just throw these thousands of papers at early-career researchers, postgrads and PhD students. But the reality is that those students are simply going away, using Google, getting the best hits they can, then sifting through them, and it's a case of "is this paper relevant or not?". Titles and abstracts get glanced at, probably while you're on the bus in, and marked relevant or not relevant — these are real accounts from people we've talked to.
That's not really educational in the sense of improving their knowledge of the subject. The interesting bit comes when you find a paper where the title or abstract suggests it might be interesting and you have to think about it. And the interesting part is that the AI is also going to have to "think" about it, in the sense that it's not going to be too confident in its answer — relevant or not relevant, or the categorization. That means, essentially, when we're doing this sifting, rather than a researcher looking through thousands of papers, wasting a lot of time to get down to the ones they might learn from — the ones where they have to do some critical thinking — all they need to do is look at what the AI has come back with. They can say: these ones are obviously relevant, these ones are obviously not relevant, and then there are the low-confidence results. So with the iterative system we set up, we always return the low-confidence ones, because it will improve the AI if you tell it what you actually think — but it also puts right in front of the researcher the learning opportunities, the critical thinking, without them having to sift through thousands. — So I'm going to combine a couple of questions. One where a colleague has said that their eyes are now so wide: they are currently a health librarian — what steps should they take to learn these skills, or what skills should they be developing now? And perhaps related to that, there was a question about how it was built — was it written in Python? So that whole question around skills: how did you do it? — Yeah, so Python is the preferred language; it's got lots of machine learning modules.
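The triage Andrew describes — auto-accept the obviously relevant, auto-reject the obviously irrelevant, and return the low-confidence middle to a human — can be sketched in a few lines. The thresholds and function name here are illustrative assumptions, not values from the actual system.

```python
def triage(papers, probs, low=0.35, high=0.65):
    """Split screening predictions into auto-accept, auto-reject, and
    low-confidence piles. The low-confidence pile goes to a human
    reviewer, and their decisions become new training data -- the
    active-learning feedback loop described above. `probs` are model
    probabilities that each paper is relevant; thresholds are
    illustrative, not the real system's values."""
    relevant, irrelevant, needs_review = [], [], []
    for paper, p in zip(papers, probs):
        if p >= high:
            relevant.append(paper)
        elif p <= low:
            irrelevant.append(paper)
        else:
            needs_review.append(paper)
    return relevant, irrelevant, needs_review

keep, drop, review = triage(["p1", "p2", "p3", "p4"], [0.92, 0.05, 0.5, 0.71])
```

The point of the design is that the human only ever sees the `needs_review` pile — exactly the papers that genuinely require critical thinking, which is Andrew's answer to the deskilling concern.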
You can train yourself up in AI — I have encountered quite a few members of staff who basically taught themselves enough AI to do this — but there's always the problem that a little knowledge can be quite dangerous. One of the things other speakers have of course talked about is the problem of introducing bias and so on without realizing it. The important thing — and this is obviously required when you're publishing — is having a strict methodology and following it in order to avoid bias, making sure you implement it through what you're doing manually, and then using that manual data to train the AI. It's a problematic area for me, because I don't think it's practical that people with strong knowledge in their own subject area should have to teach themselves AI; we should be making this much more approachable and easier to use. That's what we're looking at here at the University of Edinburgh: what tools could we provide that would make it straightforward, so that you just put in your data, the AI gives you some answers, you can look at the low-confidence results, and you can do some sampling on the other results — a process where you can actually have confidence in what the AI is returning. But this is still something we're working on; it's a lot easier for us to deliver a chat interface that just allows you to enquire about papers. The really powerful uses, though, are things like getting rid of that problem of thousands of papers — and of course, across every field, the literature is increasing exponentially. This problem is not going away; it's getting worse, and AI is probably the only way we can address it. — Sorry, was there another follow-up there? I think somebody was asking about—
Yeah, so I think, in terms of the answer to the skills question: that sort of service or approach you're providing doesn't replace discipline knowledge — the health librarian or the law librarian would still need to bring that discipline knowledge to constructing the prompts, thinking about running the searches, and working with the ECRs. But that would be different from having to write the Python and build the service. — Yeah. And just to say, obviously this is all run within the IT group of the university, and I have students working for me from the informatics department to do this. If you have a strong AI department at your university, that is a model you too could follow, and what I would suggest is that there should be nascent groups being set up within universities that have that combination of skills. It's just that I'm concerned, of course, that this shuts out other universities which don't have strong AI or informatics departments. So that's why what we're looking at is: can we set something up that we could offer to universities, based on our own experiences? But that's still in quite rapid development. There's a lot of power in it, but it's all about the user interface — what can we provide to people without having to provide consultancy, which is essentially what we're doing internally at the moment. I think familiarity with AI and what it can do is obviously important for everyone, so that they can advise people properly.
You've certainly really opened eyes today, and I think kick-started further some of that discussion — and that also plays into some of the earlier themes around equity, because perhaps not everyone, as you say, will have those kinds of AI departments and so on. — I'll just follow up with one last thing, which is the problem of black boxes. Obviously, our concern when we're talking to researchers is that they're going to publish a paper and they need to explain their protocol and how they executed it — on finding papers and so on — and they have to be completely open about the fact they've used AI there. Now, there are lots of commercial services out there; I think people have mentioned Rayyan and a few others. The problem with all of those is that they can't offer a commercial service unless they protect their IP — which is to say, you can't see how they made the decision. We do want to run a service that is paid for, because we have to cover our own costs, but our focus is: what can we do that will be completely open — open source code throughout — and something that any peer reviewer could then look at and repeat? This has a wider implication, I guess, because if you're a peer reviewer looking at a paper where someone's used some AI to strip the literature down to just the papers they're actually interested in for that piece of research, you might want to confirm that or look for bias in it — at which point you need to know something about AI yourself. So it opens up as many questions as it answers, but I think there's no real choice in future: we will need to be somewhat familiar with AI, what it's capable of, and some of the dangers within it. — Well, I think that's absolutely right, and thank you.
Thank you, Andrew, as well — yes, there was a comment around Rayyan. And I'm just going to put this back to you for a second, Anna; one of the questions, given your survey and the work you've done talking with your students: do you think there are any concerns with students using ChatGPT for grammar or writing skills — that it might hinder their learning or encourage over-reliance? — I mean, I just think it's too far gone. Can anybody spell anymore? We're all using spell check, and Microsoft Word has grammar checks built into it. That may be a philosophical discussion to have, but I don't think there's any turning back from it, and I don't think ChatGPT is even the forerunner in those types of tools; in a way it's just making them more accessible. — If it's okay with Anna and everyone, I'll chip in. We get this question a lot as well; we run the academic skills support for the whole university out of the library at Manchester. And what we're approaching this with is that we want to give our students metacognitive tools to understand what they're outsourcing. So, like Anna, I'm not worried about a grammar check — again, that ship has sailed. However, I want them to understand what they are doing when they ask a tool to do something for them. If they're asking a tool to make their flashcards, for example, that's problematic, because you're giving away a lot of the learning in that case for the sake of an efficiency that perhaps isn't what you want. The AI support we're building for our students asks a series of reflective questions, and that highlights two things. One: we're trying to help our current students make the right decisions. But the other thing is a lot like how we sometimes approach search.
We have not necessarily done the work we need to do in the sector to teach students how they learn; we've just taught them strategies and tools to learn. So when they're presented with a tool that black-boxes part of that process, they don't really know what the implications are of using it. A lot of what we're discussing now is not so much the gen AI, but realistically: did you know that when you write flashcards out, the learning happens in the creation? Then you can practically light them on fire — you've done a lot of the learning already. So if something else does it for you, that's a real problem. It's not so much a worry about the tool as a note to ourselves about needing to intensify our approach: making sure students and researchers — going back to the idea that we need to know what we're doing with search — know the reasons why, not just the process, so that they can react appropriately to this kind of tool. — Thanks very much, Jenny. Well, we've got just time for a couple more minutes, and I'm going to go over time slightly because there are still some really interesting questions in the mix. So I'm going to open this up to all of you — I suspect this might be for Andrew: what strategies might we use to prevent gen AI from citing nonsense articles in predatory journals as references? — Basically, when we're using ChatGPT, we are very wary. Essentially, because I work with AI PhDs who are doing their own research, they are looking at summarization all the time themselves and they know it's really, really hard. So when they apply something like ChatGPT, they say: don't rely on its capabilities to generate knowledge. When it first came out, one of the PhDs I asked about it — because I was quite concerned that we were missing an opportunity — said the problem with the language models is that they have no idea of signal, that's to say, truth.
So what you see from ChatGPT is a view of the world that is basically based on the contents of the Internet in 2021, and it's then been beaten every time it answered a question wrongly. It's like a puppy: it doesn't understand why the answer is wrong, it just knows not to give that answer. So, in a brutal way, I'm just trying to suggest you can't trust ChatGPT; it will make stuff up, it just generates language. And that's useful. I've spent some of my own time writing novels, and there's the challenge of sitting at a blank piece of paper. ChatGPT is really good at solving that problem: give yourself a prompt to kick off your article, your paper, whatever, just to generate the first key points, and it gets you started; then you edit it and create and start adding to it, and that's good. In practical terms, if you teach yourself just a bit of Python, you can do things like integrate ChatGPT with LangChain: you specify to LangChain the documents you're interested in, and you turn down the creativity, the temperature, which is one of the options if you take a look. Then it can only answer from those documents, so it can't make anything up; so there are options there. But if you're looking to generate language, just use it as a starting point. It will literally make anything up; nothing it says necessarily refers to the real world, because it is just making language. It's not conscious in any way. And I do read with amusement the articles telling us how the whole world is going to end because the AI is going to come along and take over and kill us, and I think: this is not intelligence, this is just language. That's the key thing to remember about these models.

Thanks very much, Andrew. I've got another, more practical question for you, which is about the conversion of articles from PDF into LaTeX.
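The pattern Andrew describes, using LangChain to restrict ChatGPT to a specified set of documents with the creativity turned down, boils down to retrieve-then-constrain: find the relevant passages, then prompt the model to answer only from them. A minimal pure-Python sketch of that idea follows; the word-overlap scoring and the document strings are invented for illustration, and this is not the actual LangChain API, which wires these steps together with proper retrievers and an LLM call.

```python
# Toy sketch of the "answer only from these documents" pattern.
# Scoring is deliberately crude; a real setup would use embeddings
# via LangChain's retrievers, plus a model call with temperature ~0.

def score(query: str, passage: str) -> int:
    """Crude relevance: count of words the query and passage share."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model: it may answer only from the retrieved context."""
    context = "\n\n".join(retrieve(query, passages))
    return (
        "Answer using ONLY the context below. If the answer is not "
        "there, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Hypothetical documents, purely for illustration:
docs = [
    "The library launched its AI study-support service in 2023.",
    "Photosynthesis converts light energy into chemical energy.",
]
prompt = build_prompt("When did the library launch its AI service?", docs)
# `prompt` would then be sent to the model with the creativity
# (temperature) option turned right down.
```

Because the irrelevant passage never reaches the prompt, and the instruction forbids answering outside the context, the model has far less room to make things up, which is exactly the constraint Andrew is after.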
How did you handle any of the rights issues in terms of that, or in translating it into a product or service? And a follow-up question that someone is asking: when do you think you might be able to offer your services to other institutions who could be interested?

On the rights issue: we already have access to the PDFs, and they're all public. For the conversion we use an app called Mathpix, just because it's really good at converting PDFs, and we're finding that all of the easily available conversions to LaTeX tend to be pretty poor. But anyway, it's not an unsolvable technical problem, just a rather difficult one. When we talk about things like offering the service, and what it would be based on, we've still got to work out how that would work, because, like I say, we want to make everything publicly available, but at that point, in theory, anyone could repeat what we're doing and then make it a black box, and so on. So there are lots of concerns around that. Sorry, what was the second question?

The second question was about the rough timescale for offering it to other institutions.

Certainly within the next year or so we should have some first candidates: we have chat-based work in engineering and finance, the ability to enquire into documents. The only thing I'll say there is that ChatGPT is quite expensive for large volumes of documents, so we know we can only handle a handful without it becoming prohibitive. However, at the same time, because we've got great AI people here, we can develop our own models, which are much more specialised and cheaper and can run on local hardware and so on. So that's the direction we're going in. But we have to make it a practical service that we can offer at volume before we can get it out in front of other universities. I think we might have some interested other UK members; I am actually interested, if people want to contact me.
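Andrew's point that ChatGPT becomes prohibitive over large document volumes is ultimately a linear-scaling problem: cost grows with total tokens processed. A back-of-the-envelope sketch, using the common rough heuristic of about four characters per token for English text; the per-token price must come from the provider's current price list, and the value used below is purely illustrative, not a real rate.

```python
# Rough cost estimate for running a chat model over a document corpus.
# The ~4 chars/token figure is a rule of thumb for English text, and
# the price below is an illustrative placeholder, not an actual rate.

def estimate_cost(documents: list[str], price_per_1k_tokens: float) -> float:
    """Approximate cost of feeding every document through the model once."""
    approx_tokens = sum(len(d) for d in documents) / 4  # ~4 chars/token
    return approx_tokens / 1000 * price_per_1k_tokens

# Hypothetical corpus: 500 papers of ~50,000 characters each.
corpus = ["x" * 50_000] * 500
print(f"~${estimate_cost(corpus, 0.01):,.2f} per full pass over the corpus")
```

Even at a small per-token price, every full pass over the corpus incurs the whole cost again, which is why a cheaper specialised local model becomes attractive at volume.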
I'm quite happy to talk about the possibilities, and I'm interested in what other people are doing, because I'm a little concerned that we're looking at just this university and the way research is done here, and it may be different at other universities. So there's also the element of what literature reviews and systematic reviews look like at other universities, and in other countries, as well. So yes, very interested in other approaches.

One very quick follow-up, and then I've got one last question for you all; you've still got time before you can stop for your dinner, or lunch. Was that the enterprise ChatGPT you were using for your activity, or the publicly available one?

Yes, we do have access to the enterprise one, and that's what we have for the moment. One of the interesting things about the more powerful language models, like the enterprise version, is that they're multimodal, that's to say not just text: they can handle, to some degree, images and things. That bears on one of the challenges we see across a lot of research papers: we've got people actually transcribing data out of graphs, because that's the way it's represented in the paper and you need to get the data out, and it's not separately recorded. Which is crazy; you shouldn't be clicking on individual points just to get the numbers. So that's of interest to us, but at the same time we don't want to pay for the privilege long term, so we're looking at other approaches for that, and it's something we've actually got Master's students working on, with proposals and things like that.

And I'm afraid that in an AI arms race there are probably going to be all manner of workshop opportunities, for Andrew as well as for you, Jenny and Bonnie, as this continues to move things forward.
One final closing question from colleagues in the audience was around skills. Thinking about those systematic review skills: should they sit within the library? Should we be moving towards a new type of systems and data librarian who can also work with Python and build GUIs? Or do we need more blending between IT professionals and library staff and teams? I'm going to open that up to the floor for your final thoughts on what's been a really provocative and fantastic session.

Historically this was originally considered part of the library, because it was to do with data; I work for the chief librarian. That may be true in other places as well, but yes, I've taught the academic support librarians to understand some basic coding, we support them, and that's sort of where this originates from.

Any other thoughts or comments about where some of those skills or needs sit? Jenny?

I think we don't want to conflate the ability to build a tool to do a thing with the expertise needed to create the thing; there's a lot going on in generative AI. I live in a dual humanities-and-physics household, so I'm quite familiar with Python and LaTeX and things like that and see their value, but I also see the inherent value in someone who doesn't use a tool that automates something really thinking about how that something works. Where I would get concerned is where that idea of the contextualisation of information leaves, and it becomes about automation and efficiency. Not that that's what Andrew is doing; I'm quite curious about what Andrew is doing. But the idea of libraries leading, which Bonnie and I wanted to push for, is almost to protect the integrity of the research, so that it doesn't get turned into something where eventually someone will say, yeah, I don't know how to do that any more; that just gets done by this thing.
A little bit like, you know, wondering how an escalator works, or something. So I think it's just really, really important that we don't conflate the tool we're using to do the thing with the actual importance of breaking down the process, and of expertise in that process itself.

I would just add to that that my first library job was actually at McGill, as the digital collections librarian, and that role was very much the liaison between the library, the librarians, the rare books department, what researchers needed, and our application development team. I was the bridge: my role was to understand enough of the technology to communicate with the developers, but also to understand the user needs and communicate those to the tech team. And I think there's a place there for librarians to take on that really crucial role of being the human interface to the technology building, just by having enough understanding of it.

Thanks very much, and I think that's a fantastic point to finish on. We've seen comments this week about librarians with superpowers, and about the role librarians can have at the intersection, at the heart of universities; and I think librarians as bridges, as interfaces between the colleagues who have those technical abilities and the disciplinary academic experience, is a fantastic note to end on.