Well, we're going to have a bit of fun with this session. Um, it's been lovely having a quick chat with Elizabeth before we go on. Let me just tell you a little bit about her. Elizabeth Long is Sheridan Dean of University Libraries, Archives and Museums at Johns Hopkins University, where she oversees the five Sheridan libraries and coordinates services across the other libraries of the university. She also oversees the university's historic house museums, Homewood Museum and Evergreen Museum and Library. And I know that combination of cultural and research responsibilities is shared by many across UK and Irish universities, so it's a great set of connections. Elizabeth is going to be talking to us about the future challenges for academic libraries, and I'm really delighted to invite her to start her presentation now. Over to you, Elizabeth.

Thank you, Jess. I am so pleased to be here and to talk with all of you. So thank you for inviting me. So as Jess said, I want to talk about the future challenges for academic libraries, and let's see if I can get my slides going. OK. So academic libraries face a number of challenges as they adapt to the rapidly changing information landscape. Some of the key challenges they are likely to face in the future include digital transformation. Academic libraries will continue to need to invest in digital resources and services to meet the needs of their users in an increasingly online world. This includes providing access to electronic journals, databases and ebooks, as well as creating digital archives and providing online reference services. Data management. With the growing importance of data in research and scholarship, academic libraries will need to develop expertise in data management, including data curation, preservation and sharing. Libraries will also need to support researchers in complying with the data management and sharing requirements of funding agencies. Budget constraints.
Academic libraries will continue to face budget constraints, which may limit their ability to acquire new resources and services. Libraries will need to find innovative ways to provide access to resources and services while managing costs, including exploring new models of scholarly communication. Technological change. Technological change is happening at a rapid pace, and academic libraries will need to stay abreast of new developments in information technology, including advances in artificial intelligence, machine learning and the Internet of Things. And finally, information literacy. As the amount of information available continues to grow, academic libraries must help users develop the skills they need to navigate and evaluate this information. This requires ongoing investment in information literacy programs and resources, as well as collaboration with faculty to integrate information literacy into the curriculum.

So let me pause here and make a confession. You asked me to deliver a keynote talk based on my experience with digital scholarship and to reflect on how the academic library landscape is changing, but that isn't what you have been listening to. What you've heard so far is actually what ChatGPT had to say when I asked it about the future of libraries. So if you have not experienced the ChatGPT window: what you can do is ask it a question, which I did, what are the future challenges for academic libraries? And it writes you an answer, and that is what I have been reading to you. So, let's start again. I want to talk, actually, about authenticity and presence in a digital world and what challenges those present for academic libraries. I did this exercise to help us think about what authenticity is in an AI-driven world. There isn't anything in the text that ChatGPT produced for me that I couldn't have easily written myself. And in fact, it did a pretty good job at hitting many of the major themes I frequently think about.
I would claim it was rather simplistic, but my understanding is that had I worked further with the interface and provided more prompts, it could have actually done an even better job, and given that it's pulling from a broader and deeper body of literature than I could ever hope to read, maybe it could have said things I wouldn't have thought of. But would that have been acceptable? Is that what you wanted to hear? Certainly we don't think so when it comes to student papers. In fact, I know our faculty are talking a lot about how to teach in a world with ChatGPT, and we have an inherent understanding that using something like this to write a class paper is not acceptable, but is it technically plagiarism? And one of the things we've had a lot of discussions about is the fact that our academic policies may not be sufficient at this point to encompass this new technology. How do we even describe what is wrong with the use of it? On the other hand, financial powerhouses like Morgan Stanley are using AI and ChatGPT to organise their knowledge base to provide on-demand information for their financial analysts. The medical world is starting to experiment with AI to read X-rays and CT scans. So I do think it would be a mistake for us to just wring our hands, focusing only on cases like student papers and the use of it to get away with not writing something, and therefore try to characterise these kinds of tools as simply bad. But we do need to develop a much more sophisticated vocabulary for discussing the ethical use of AI. We need to think more about the sophisticated tools we need for detecting something that may have been generated through AI technology, and really think about what role it can and should be playing in our future world. So, of course, the cultural heritage sector has a deep history with evaluating authenticity in the print world.
What you see here is an example of a forged document purportedly by Martin Luther, but actually created in the 1890s. This comes from the Bibliotheca Fictiva, a collection that Johns Hopkins has of over 1,500 rare book and manuscript forgeries. This collection has facilitated research into the historical and cultural context of the phenomenon of faked objects. We know a lot about how to identify forgeries and do the analysis to understand whether something is or isn't authentic. And doing so draws on many elements of the item: we look at textual analysis, we examine the chain of provenance of objects, we do scientific analysis on the physical material. So what's the equivalent of that in the digital world? And what does that mean for libraries that are either managing research publications and data or at least advising researchers about best practices? I think it's only a matter of time, if it hasn't already happened to you yet, before libraries get drawn into helping with analysis on cases of potential research misconduct. And we also know that particular fields, for instance climate studies, are likely targets for data hacking, in which an attempt is made to alter data sets to further a particular agenda. So what does this mean for libraries? I've talked about several things so far: human versus AI-generated content, intentionally created fraudulent material, alterations to legitimate data by a third party that result in fraudulent data. These are all part of an even larger spectrum of issues around authenticity and factuality in a digital world. And AI is not the only issue here. The question of authenticity and transparency is going to be increasingly important when it comes to research data. How do I know that the data set I'm looking at is the same data set that the researcher produced? How do I know it hasn't been tampered with?
So I've created several different axes here to help talk about this spectrum that exists in the ways we can think about data and how it may or may not be fully authentic, or fully transparent about what it is and what its source is. The first one, the top arrow, is the way in which the data itself can be problematic. On the one hand, it could just be poorly managed: not well versioned, files are missing. This is the kind of thing that, as I understand it, can often be behind accusations of misconduct where what really turns out to be the case is not intentional fraud, but simply data that was not well managed. That can move along the spectrum to a much more severe case in which data is being falsified. Sometimes that might mean processes or equipment being manipulated to alter the outcome, or to favor a starting assumption, as opposed to letting the data itself speak to us and drawing a conclusion from it. Data can also be purposely omitted, again to lean towards one particular outcome. And then one could go all the way to the case of data that has been totally made up, that is fabricated: a study that was never done, or, say, patient data where there were no patients, completely falsified. So that's one spectrum along which there can be problems with how we understand data that we might be looking at. But nowadays, research is also often more than just recording a fact. Code gets used to process or analyze data. So that leads to a whole other axis related to that code. Many researchers repurpose snippets of other people's code and put that in their pipeline, and they may or may not fully understand what that snippet is doing or how secure it is.
They might be writing in languages that themselves have different levels of inherent security; for instance, there's been a lot of talk recently about the fact that C++ has been rated as one of the most insecure languages. And then finally, if you're using proprietary code from a vendor, you actually have no transparency into what it's doing. So how do you actually know what is happening in your pipeline of data if you have insecurity on the one hand or an actual black box on the other? And then, if we again look at the data itself, another axis of this spectrum is whether or not it's well described. Is there a clear data dictionary that defines all of the terms? Can someone else who comes in and looks at it understand and reuse that data set? Are there biases built into the data so that it isn't telling the story it's meant to tell? Are those biases intentional or unintentional? And finally, has the right statistical method been applied? Do we have a good-quality, well-described data set, but we're not actually applying the right statistical method to draw the conclusion that is being drawn? I mention all of these because we have a growing crisis of reproducibility, especially in our scientific literature, where people trying to take data and reproduce experiments are not getting the same results. All of these different things I've been mentioning can be reasons that might be happening. And we have to understand how to tease apart whether something is intentional or unintentional, and what we need to be doing to really make this a better environment for the use of data. This is something I believe will be increasing as, at least in the United States, we have growing requirements from our funding agencies that researchers share their data.
And that means it's going to be opened up for further scrutiny. That means we will have more and more people looking at things, and the small studies that have been done around reproducibility may really grow; we might start seeing them scaled up as well. And then, looking back to where I started with AI: how do we develop the necessary expertise to understand when it makes sense to use it, but also guard against its misuse; to understand how to evaluate whether someone has written something themselves or produced it through AI; and when is that OK and when is that not OK? So what role can libraries play in all of this? I think there are a lot of strategies we can be thinking about, and they address many of the different spectrums I've been talking about. The first is looking at infrastructure. As we build the kinds of systems that hold the papers and the data our researchers are producing, we need to be thinking about what those systems do to help with analysis. Can we do on electronic materials the forensic science we know how to do on print materials? Do we have provenance chains? Do we have versioning? Do we have checksums to make sure that the data has not itself changed? Do we have security-conscious networks and platforms? And when I think of this, I think of the fact that these are all things that I as a library professional need to think about. They are also not all things that I can do on my own. This is part of a larger infrastructure on our campuses that we need to be thinking of, particularly in relation to these really technical issues around the infrastructures we're building. But I think another really important role libraries can play in this, a place where we shine, is of course instruction.
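To make the checksum idea concrete: here is a minimal sketch in Python of how a repository might record a fixity digest when a data set is deposited and verify it later. The file name and contents are hypothetical, and real repository platforms typically run fixity checks as a managed service rather than as an ad hoc script.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_checksum(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks
    so that arbitrarily large data sets fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Record the checksum at deposit time...
data = Path(tempfile.mkdtemp()) / "dataset.csv"
data.write_text("id,value\n1,42\n")
recorded = sha256_checksum(data)

# ...and verify later that the file is bit-for-bit unchanged.
assert sha256_checksum(data) == recorded

# Any alteration, however small, produces a different digest,
# which is what flags possible tampering or corruption.
data.write_text("id,value\n1,43\n")
assert sha256_checksum(data) != recorded
```

A checksum only proves the bits have not changed since the digest was recorded; establishing that the recorded digest itself is trustworthy is the separate provenance-chain problem the talk describes.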
And, you know, we're doing a lot of this already, but I think we need to be doing even more, and thinking about what else: what are we doing, and what could we be doing better, in the kinds of instruction we offer? Around data and code, many of us have librarians who are working with faculty on data management best practices. They're teaching people how to avoid ending up in a situation in which, simply through not having good data management plans and practices, you are unable to actually show the data and the processes that went into what you've done. We also often teach statistical analysis best practices. We become real partners with our faculty in giving students the skills they need to understand how to actually use data and how to use a lot of these tools. Something newer, which I have seen less of but is a direction we need to be going, is teaching people about secure programming techniques and raising awareness around whether or not your code is secure. Are you writing in a way that leaves your memory open to exploitation? Many of us are teaching programming; we're teaching how to use basic Python and R and other tools. But are we teaching people how to do that in a secure way, and showing what kinds of techniques, in using those languages, make sure your code is secure? Open source software: a lot of our faculty are starting to develop open source software, and all of those questions I was asking apply. Do we know what it's doing? Are we using code from someone else, and do we actually fully understand it? Are we thinking about the way in which we are versioning what we're doing, or, again, how secure the software we're producing is?
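As one small illustration of the secure-programming awareness described above, here is a hypothetical Python sketch (not an example from the talk) of a pattern that comes up in research pipelines: parsing an untrusted string with `eval` versus safer alternatives.

```python
import ast
import json

untrusted = "[1, 2, 3]"  # e.g. a value arriving from a shared pipeline

# Insecure: eval() executes arbitrary Python, so a malicious input
# string could run any code on the analyst's machine.
# values = eval(untrusted)  # DON'T: works here, but executes anything

# Safer: ast.literal_eval accepts only Python literals
# (numbers, strings, lists, dicts, ...), never executable code.
values = ast.literal_eval(untrusted)
assert values == [1, 2, 3]

# Anything executable is rejected instead of being run.
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
except (ValueError, SyntaxError):
    print("rejected non-literal input")

# Safer still for data interchange: a declarative format like JSON,
# whose parser by design cannot execute code.
assert json.loads(untrusted) == [1, 2, 3]
```

The design point is the one the talk makes about repurposed snippets: a researcher who copies an `eval`-based parser into their pipeline may not realise what it can be made to do, whereas a literal or JSON parser bounds the risk by construction.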
Something we've done at Hopkins is actually establish the first open source software program office, at least in the US, in a university, and it's something the library is running. It's there to support the development of open source software and to help faculty in doing this and in thinking about how you do it the right way: what kind of infrastructure do you need, what kind of community do you need to develop? I think these are new areas that libraries could be moving into. This is something that is starting to really blossom in the US, and a lot of institutions are opening what we call OSPOs. And I have a feeling that you may be further along; certainly I think the EU is. I'd love to talk in the question period about where the UK is in doing that kind of support as well. And then again, looking at artificial intelligence, there are so many questions right now, and I think a lot of fear, about what role AI is going to play. I think we need to figure out how we skill ourselves to understand how to evaluate these tools, and then talk about how we instruct others to do so, and that is something we can really be doing in collaboration with our faculty. I know we've got faculty at Hopkins who are starting to run classes where they actually have students produce papers using things like ChatGPT and then analyse them, talking through: what did it do right? What did it do wrong? What biases are we seeing, based on the data set it might have been using to build the topic? And then I think libraries are also always deeply involved in data creation and data acquisition, helping our researchers acquire the data they need. But I think we need to be sure that we're also playing a role in helping give advice on how to evaluate the quality of that data.
We often know a lot about the corpora: often we're the ones who licensed them, or we're the ones who actually helped create a text corpus. We know a lot about what it includes and how accurate it is. Does it have problems, perhaps, with the OCR that was run, if it's a corpus made from older text? Can we help people understand whether there are things in that set that might actually affect the outcome? Will it answer the question you're trying to ask? Or are there things inherent in that data set that are going to bias the answer, or make it so you can't really draw the conclusions you think you might be able to draw, because you might not have realized that some information isn't in there, or isn't coded well, and therefore it's a messy data set that needs a lot of cleanup before you can do the analysis you want to do? Sorry, jumped ahead too far. So I think there are some opportunities here and some challenges here. I've just been talking about some of the opportunity we have in the traditional work we do in support of the use of data and the use of AI. But I want to turn that back on ourselves and ask: what role could AI play in library operations? And these are questions I have, more than things I have any answer to. Could the nature of cataloging be radically transformed? I mentioned earlier that Morgan Stanley is using ChatGPT to do the organisation and retrieval of information in the knowledge base of documents that their analysts and financial people use on a daily basis. Is that something that should make us think?
Could we provide a lot of data to something like this and have it manage the access to our materials or our catalogue? How might you extract subject headings using something like this, so that rather than doing things the way we always have, we really start looking at how to apply these tools in a new and different way? Same thing with research consultation. Could a first-stage research consultation be done by a chatbot? What would it look like to do that? And in a minute I'll get to the question of what would be the problem with trying to do that: where would it work, and where would it not? Could we think about a totally different way to approach a question and problem that many of us are facing, which is that we no longer allow browsing of our stacks, because our stacks are not all there on site in our library? We're moving things to off-site shelving. I'm in the middle of this right now at Hopkins, where we're redoing the library, and it's going to involve moving things off site, and staff and faculty and students are very unhappy about not being able to browse the stacks anymore. So I was talking to the dean of our engineering school the other day, and he was asking what it would take to actually create the ability to literally put on a VR headset and browse our stacks virtually. Could we think about things like that, different ways of accomplishing what we used to do in the physical world? Would we want to do that? Would it be worth it? Is there a problem in that it doesn't actually solve the full problem, because people wouldn't be able to pull the books off the shelf? Or could you actually create something in which you could literally pull the book off the shelf, if you had it in electronic form, and then look through it?
So, you know, so many interesting ideas, but how do we think about which ones are the right ones to start pursuing, and how do we go down the road of what it would look like to actually pilot anything like that? One of the concerns I have is that in an AI-dominated world, does anything that's not digital become completely invisible? And what does that mean for us? Do we need to think about getting everything digitized so that it doesn't become invisible? So if I go back to that question of research consultations: could they be done by a chatbot? Well, there are certain subject areas that have a lot of deep backfiles of materials that have never been digitized, and that would radically change the data set being used. And that becomes a problem: even if in theory it could be done well, if you can't feed it the right information, then it's not going to be able to do it well. So we could find that, if we wanted to explore those things, it would only be appropriate in certain fields where you could actually feed it all the data you'd want to inform that kind of initial consultation. And so I think it raises a lot of questions about, again, how we help people understand where this works well and where it doesn't. And if it's a world in which people are going to be moving towards it and trying to use it anyway, does that create an urgency that maybe hadn't been there before about getting more and more into digital form, so that these things can actually be fed the kind of data they need to do the job we would want them to do? So I've asked a lot of questions. I've provided some observations. But what I'm really hoping is that we can explore all of these.
And I'm more than happy to answer questions, but I would also love to hear comments or ideas or thoughts of yours, because I think there are so many new things here that we really need to be talking about them in a lot more depth. So I'd like to stop my formal presentation right now and move into what I hope will be a much more interactive session where we can talk about these things. And here I'd love to hear what's going on in the UK in some of these areas. So thank you.

Thank you so much, Elizabeth. That was really exciting. There are so many questions in my mind, too. We might just be pinging them back and forth to each other, though I know we'll quickly be getting some through the chat as well. I just wanted to thank you; there's so much to enjoy in what you have presented to us. One of the bits I just wanted to dip into: a great start and then a great pivot. That was a really great reveal, so thank you for that. You'd kind of cued up with me that something was going to shift, and I was thinking on your first set of slides, yes, we kind of know about digital transformation and data management, we know these are the challenges, and then you totally turned it around. It's brilliant. Well done. I also liked your next slide, which is the reminder that for centuries our profession has been dealing with some of these ethical issues around authenticity, around authority, around appraising the validity of different sources, and that we already have in our profession, albeit to build on, some of the critical skills, which we are continuing to apply, but apply in new ways, gathering new questions and kinds of solutions around them. I actually think that was a hugely generous framing, which allows a kind of reassurance, if you like, to us as professionals: that we have some of the tools already. We can build on them, but let's not be scared. Let's see how we can play with those and apply them in this world.
So a real thank you for that. I would like to just take chair's privileges, if I can, for a moment, because there are some really interesting connections you made, including, when you were talking about data, the increasing funder expectations around data and reproducibility in research, and the reminder that it's not simply a matter of making your stuff open, but also of the underlying ways in which we do that: following standards like the FAIR data standards, and the principles of open science, open research, open scholarship, whichever way you frame it, in order to bring a managed and authoritative set of information, if you like, or a set of information that can be reproduced and so on, to that landscape. And on that note, your final question, your final slide, was: in an AI-dominated world, do non-digital books matter? That was really interesting. We so rarely say non-digital; we normally say non-print, and it's defining it the other way around. It's kind of a nice trick, too. But I'm wondering about not only the non-digital, but also the stuff that isn't open access. Does that also give us a very strong argument for pushing even further forward, through funder and researcher and library agendas, for making content open, as long as we're adhering to those principles you reminded us of?

I think that's an excellent question. And you are spot on with that, because it doesn't help if it's in digital form if it is tied up in a form, either a format or a licence or whatever it is, that prevents us from being able to make it available in these kinds of ways; then it again becomes invisible.
It becomes not part of, you know, whether that be the kind of automated ChatGPT-type thing, or the kinds of things that are happening in the digital humanities, where there are a lot of really deep questions being asked by faculty and a lot of really thoughtful work going into how you do analysis and what that looks like. But if, again, you cannot give people the data set they really need to be able to do it in the subject area they want, they can have the best algorithm in the world and it's going to be really skewed. It's not going to provide what we want. So I think that's right. I think it means understanding the many ways in which open access is important, not just to the researcher at an institution that, for instance, can't afford a subscription to that journal and who simply wants to read that paper in the traditional way. Open access is hugely important for that. But we now have this whole other layer in which materials are being made available for doing all sorts of AI work on them. And if we can't get at them, that is really a problem for what this is going to mean for the things that come out of it. And I don't think AI will stop, and that means we will have really skewed information if we're not careful.

I would like to just keep talking to you, but I'm going to have to go to some of the questions from our audience, which is a real sadness to me, though there are some great questions. I'm going to pick out a few that have come through. And the first one is right at the heart of some of those ethical approaches, as a set of libraries and the kind of values-based organisations that we try to be. So this is from George Cronan, who writes: while I recognise that ChatGPT et cetera aren't going anywhere,
I'm curious about what ethical obligations we should consider when it comes to using these tools to teach, including, they go on to say, not just, as you say, the skewed information we might have and how to navigate it, but also concerns about who is building these tools, the potential abuse of poorly paid workers, and issues of inclusion and decolonisation which are inherent in the digital sphere, and the question of information in, information out, and the complexities of that. Anything you can help us to think through on that point?

I think that's a really good point. And my understanding is that there have been a lot of presentations at the conference so far about some of these ethical issues: how we are thinking about our own workers, how we are thinking about what the full production chain of things really is now, not just that end product but all of these elements of it. And I think this is simply another lens that it's important to look through, in terms of why transparency matters: why we need to be able to see into not just the technology of what something is doing, but how it was created, how it was produced, who was involved in producing it. So yes, I think these are all questions, and we need to understand how to start raising them. As equally important as the questions around what is being done with these tools are the questions around what went into creating them.

Thank you so much. That's a really great summary, and I want to come to a couple of other questions, but I was just going to reflect for a moment on your interest in what's happening in the UK space.
And I'm going to just say: I think there are some risks for all of us, but one of the outcomes of being now outside the EU, in the UK, is that we are also outside legislation that is moving quite fast within the EU context, some of which I think is reflected in the States, around regulation or frameworks for digital services and so on. And so I think this is a really vital conversation to be having in the UK at the minute: working out, as libraries, how do we adopt, embed and keep working within our international space, now we are outside some of those regulatory frameworks, or increasingly those thinking frameworks? But I'm going to come to a couple of questions here. The first is from my colleague Tracy Stanley in Cardiff, who asks a very abstract question: do we have a sense of what we're trying to achieve with AI? We might not yet, but do we have a sense? Do you have a sense, Elizabeth? Is it to improve user experience, or speed up processes, or save money? Would you say maybe all of those things and more? Is there an area that has particular priority for you at the moment, Elizabeth?

That is hard, in terms of asking me my own priority, because my first inclination, I have to say, would be to make the user experience better, because I feel so strongly that we have created a very fractured world for our researchers. I think we have been getting better at bringing together our many different threads, but because tools have been built separately, we have archival finding aids over here, we have book catalogs over there, we have our digital collections in a million different little specialised interfaces. And I think that is not what and how researchers think. They don't necessarily think about the format of the material, especially the original format of the material. They think about the topic they're interested in, and they want all sorts of things related to it.
And so from that perspective, if there are ways in which AI could help us bring together all of those things (and I do also recognise that there are really useful reasons why these objects aren't all of the same type, and therefore they're in different kinds of systems for a reason, because we describe them differently, etc.), then I think that would be a really important focus. But as a library dean, I'm also keenly aware of the fact that we have a lot of budget constraints. And so if there were ways in which we could think about this saving us money, and therefore allowing us to take that money and put it towards other things that we know are really important and that we should be spending time on, that also feels like a really important priority to me. I do think one of my concerns is how we move any of this forward, because we aren't big business. We don't have the millions of dollars to invest in applying the technology to our libraries, and we are not necessarily the target market for it. And so I think this raises an important question: are there possibilities to be working with our own researchers, and getting them interested in library problems as part of what they might want to be looking into? Thinking particularly of our computer science faculty, can we get them to be the ones who look at how you might apply this to our operations? Because I don't think it's going to happen otherwise; it's not going to be something that a company invests in, or what they'll do is they'll want to sell it to us for so much money that it won't save us anything in the end. So I think that's really the challenge that we have here: I don't know that we're the ones who are quite ready and going to be able to take advantage of what could be the good aspects of all of this. Elizabeth, that's a really great answer. 
And there are a couple of things going through my mind, including the fact that for a number of years now at the University of Cambridge, where I work, we have been embedding some of our research support activity, for instance to support data curation, right inside research projects, so into the lab, or the equivalent of the lab. And that is one side: taking our expertise back to the bench, if you like, and embedding that data curation alongside. Great. But what strikes me, exactly as you said, is how do we get that journey going the other way? How do we get the kind of AI fellowships, if you like, working at the heart of our teams, working on discovery, so they are thinking beyond that fragmented culture, which you're so right about? I mean, we have brilliant colleagues, I'm sure at Hopkins, certainly at Cambridge and across our community, who are so great at thinking about discovery and the tools for cataloging; many of them are great innovators. But the way we've been administratively organised has meant that they have done that in channels, and that is no longer the way we need to work. So what does it take to challenge those ways of thinking and ways of working, and who do we bring into our spaces? You know, I've worked in libraries all my career and I'm passionate about them, but we do like to feel like we're the ones who have to have the answers. There's something here, isn't there, which is why you're giving us questions: actually, we need help. Who is going to help us? And speaking of AI fellowships, I'm just going to come to a question from Masud Khokhar, who I think many colleagues will know is the incoming chair of RLUK as of six p.m. today. So great to hear from you, Masud, on this call, as the University Librarian at Leeds. Masud asks: Elizabeth, huge thank you for these provocations, they are wonderful. 
Historically, there have been several technological shifts, and they've raised similar questions and themes of insecurity in the academic community and in libraries. We can go all the way back to the introduction of calculators, to smartphones, to tools for grammar checking such as Grammarly, and to things like reading list software, which academics felt would be spoon-feeding. All of these caused similar kinds of shifts. Do you think we should feel differently, less sceptical, less insecure, about these and simply embrace tools like ChatGPT, or is it that evolution will happen and it will become normalised? I think that is the heart of the question in my mind, because I think you are absolutely right. I think this happens whenever new technology comes along: that fear that we won't know how to do math, or we won't know how to spell, or we won't know how to write cursive (and I think students actually don't know how to write cursive anymore). What are we losing when we lose those things? If you think of that as a spectrum, at one point it seemed horrible to imagine having your software correct your grammar for you; you should know that yourself. But you could also say that when we don't have to think about that as much, we are freeing up our brains to think about other things; same with calculators. So should we be thinking about ChatGPT this way? I think one of the things that feels different about it is that what it is potentially replacing feels like much higher-level, skilled work than many of these other things have seemed in the past; they seemed a little more routine. And I think that is where there is a little bit more of that concern about what this means. And I think what these tools are doing is making us have to think about what true creativity is. What is it that the human continues to bring to the table? 
And can we learn how to ask the right questions, and think about how these tools can be useful, but also be sceptical of them? Because the other thing we have a long history of, throughout the centuries, is the eager adoption of what's new without at all understanding its challenges, or going too far with it. So how do we strike that balance: understanding how to put it to good use, but also understanding what it can't do? I think there's a huge amount of potential in having this really push human thinking, to understand what true creativity is and what the human being is able to do that the computer is not. That would be the exciting outcome of all of this. So fascinating. I'm reminded of a conversation that's been happening in my part of the academy amongst teaching and learning staff, working with the librarians of our group, thinking of their core role in information literacy, the new tools that you described, and the new areas of teaching built on that. And there's been a strong reaction, though I certainly don't mean knee-jerk, because there are real implications, as you say, for plagiarism, and implications for assessment in teaching and learning, which are rightly getting a lot of attention at the moment in UK universities, as they will be elsewhere. But there was also a fascinating comment from a colleague of mine, close to the education side, who said: what might the benefits be here in teaching and learning, in terms of this providing a head start, a short cut, rather like your opening slides, that gets you started beyond the blank page? And then you're building on that, because you've got a catalyst for where your own creativity, your own originality, comes from. And I found that a really helpful pivot towards how we begin to think of the benefits that come, and what those mean within the profession that we have. 
No, I think that's right. We've got a question from Kirsty Lingstadt, the librarian at the University of York, and it's a great question. Do these shifts lead to changes in what we collect, from what we have traditionally collected, books, to digital resources, to data and software, which then shape what we do differently as well? I do think that shift is already happening, regardless of AI; it is definitely happening from several perspectives. One is that the way in which libraries are getting involved in data is both analogous to what we have always done and different from it. I often try to say to faculty that the reason it makes sense for the library to be in this space is that it has always been our role to collect the intellectual output of our faculty, and to preserve that and to share that and make it available. So in many ways managing their data, having that be part of what we collect, is totally what we've always been doing. On the other hand, it's not, because we are collecting something that is a stage in their research, and that is not what we have always done, or we've done it for a much smaller number of our faculty, right? We do collect our faculty's papers, and sometimes those have included data. At the University of Chicago, where I used to be, we had the Fermi papers, with microfilms of material from all of his lab, and so that was really looking at that data. But we did not have that for the hundreds of people whose articles report on the results of data they have collected, but not the data itself. So I think we are getting involved in earlier stages of the research process than we had before. I think we're also seeing, because we have people interested in this, libraries starting to collect in digital areas that we've never done a lot of before. 
So when you think of archiving Twitter streams, or the web archiving that we're doing, we're getting involved in a whole set of materials that we have not always handled very well. And I would make the analogy to the collection of ephemera: some libraries have certain collections with a focus on that, but there are huge amounts of material in the print world that never got strongly collected. And the equivalent exists right now. Websites, especially during election times, change: they go up, they carry all sorts of information, and when things are over they often come down, and you lose all of that if we have not been thinking about how we do web capture of those kinds of material. And I'm seeing a lot of interest in this kind of thing. So I think we are starting to see a transformation of the types of things libraries are thinking about collecting. But this question of, as you start thinking about AI, what is or isn't skewed about the data sets going into it, should make us think a lot about what it is we haven't been collecting that maybe we need to be. Yeah, that's really great. And I'm thinking there's a parallel to George Cronin's earlier question around a different kind of inclusivity here, because we know that movements, particularly in the US, like Documenting the Now, in response to activism, to protests, and also to community action groups, required such radically different approaches from archivists and curatorial staff: to respond now, not wait for something to become an authorised, historically written and approved version. That also becomes part of the toolkit, doesn't it, in terms of collecting the many voices and using the tools for a very different set of collections and preservation commitments. 
I wanted to come, if I may, to a sort of final pair of questions, one that came in the chat rather than the Q&A, from our colleague William Nixon, who's on the executive team for RLUK. He asks, and you can challenge the question: how can libraries remain independent, or can they, should they, in this space, in the face of an overwhelming corporate scale and metaverse, as it were, in which we are influential but not global players? That's an excellent question. And I think it harkens back to what I was saying earlier about the cost of a lot of this. I think that's right; I don't want to say we're not global players, but there are many ways in which we aren't. And I think it is extremely important for us to think about what roles we can play and how to have a voice. You were mentioning earlier issues around legislation that relates to this, and policies, both at our own institutions and governmental policies. And I think there is a lot of scope for libraries to think about how they have a voice in these things. This is moving in a slightly different direction, but in the US there is a case being argued right now around controlled digital lending: the Internet Archive case. And at its heart, we are talking about things like the definition of ownership, what it means to own an object, which has been the way in which libraries have operated for centuries. And we have corporate interests that are really trying to change that, and change those definitions. And it's important that we find a way to have a voice in this space, or the corporate voice will prevail. They know how to be loud. They know how to try to assert rights that they have not had, but would like to have, and to find places to interject to try to make that happen. 
And so I think it's extremely important that we figure out how to be strong voices, and how to find allies around the world who can help. And again, getting back to that: how do we work together and become stronger voices, rather than working individually and being lost in the noise? Really well said. I know there's a piece of work at the minute, in response to one of the UN calls, about digital regulation, the digital space, and the open, equitable possibilities that come from a digital commons, which we hope for, though that's not quite how the digital space is being operated at the moment. And I think looking out for those opportunities that also come from the likes of IFLA, for how we can feed in, feels really, really valuable. And I'm going to give a sort of closing question on that, which is about the kinds of partnerships you think we could be looking for, because we can't do this all ourselves, not any one library, not any set of libraries. What kind of partnerships might we look for to explore this space, participate in development, but do it in the grain of the faculties which we support? Right, right. So let me start small and say that I think the first step is a need to think about partnerships across our own institutions; this is not a library-only thing, and it needs to be approached campus-wide. So thinking simply about the repositories we build: who else on campus has expertise? When I talked about some of the security issues and such, you get into networking, you can get into all sorts of things that really are not our strong suit. Who else on campus can be that partner? So I think that's step number one: make it a campus-wide problem and a campus-wide solution, not a library solution. Then I think we have to look to each other: where and how can we be looking to other institutions, other cultural heritage organisations, other universities? And I think we need to do that within our own countries. 
But we also need to be doing that, for us, across the pond. It's why I was so happy to be asked to be a part of this RLUK event, because we need to be thinking about this globally, not just nationally, though we have infrastructures nationally that help us have these conversations. So this is why I think of it as an onion, where you start at your own institution and move out, and move out, and move out. But I think we should always be doing it with an eye towards not continuing to build individual solutions, but really thinking about how, at the least, we're building the standards that let all of our solutions interoperate. Then I think we should really think about how we do things in an open source environment, so that what we're doing has transparency. And there are a lot of interesting new ways in which open source is being embraced and used by corporations as well. I think sometimes these could be places where we could start having conversations even with corporations, and that can seem unlikely. But I think there actually is space for us to move into being a player in a new and different way, and we should be thinking about that as well, because it's a way to start having that voice and that influence where we currently really don't have any purchase. Brilliant. I love that closing provocation, to make it a campus-wide problem and to be looking at those partners, corporate and otherwise. And let's face it, some of these solutions and companies are spinning out of our universities, so there are people in that circle who are a source of information, a source of knowledge, to partner with. I am going to close with a huge thank you to Elizabeth. This has been so stimulating. I do wish we were talking over a cup of coffee and could continue the conversation. 
And we've kept our audience with us, Elizabeth, which at this time of day in the UK, towards the end of the working day, is not always easy, so thank you so much. We've been absolutely delighted. And if I may, I'm going to end with a provocation, as chair's privilege, to RLUK. We've been talking within the International Alliance of Research Library Associations, which includes ARL and CARL and CAUL and LIBER and RLUK, about a future global event and conversation around the future of publishing, which is clearly something we're all talking about. It seems to me that a parallel session exploring this space could be part of that kind of convening we do through our research libraries internationally, just to open up our experiences and our thinking to each other. And if that's one of the outcomes of this great keynote, Elizabeth, which has really stimulated our thinking, that would be a great outcome.