The report written for CILIP really opened my eyes to a lot of things going on across the whole information profession, which is a really broad area of activity, and in this presentation I want to share some of the insights from that work. I'm aware that Sandy and Amanda, in the previous talks in this series, already said quite a lot about artificial intelligence, so I'll focus on things they didn't cover and try to say something slightly different, perhaps more provocative. I'd be delighted if people read the report, and if you do and want to send me comments, that would be fantastic. If you don't know us, the Information School is one of the main providers of education to information professionals like librarians in the UK; we teach information systems, information management, librarianship and data science. When I was doing the study for CILIP, I noticed a pattern: the more technical interviewees didn't really like the term artificial intelligence, because it's rather vague, it's used in multiple ways, and there's a lot of hype around it. But from my point of view, I really like the term, precisely because it's so controversial and has excited so much interest over time. I've been to exactly zero films about machine learning, but we've all seen lots of films about artificial intelligence, and it's asking questions about the nature of humanity, isn't it? There's also a really interesting project looking at the different ways that different cultures talk about artificial intelligence. So, starting with definitions: I'm not looking for one definition. I'm quite excited that there are multiple definitions and no real agreement. In the report, I try to say that a lot of this stuff is pretty familiar.
We might not have called it artificial intelligence, but a lot of the work done in research libraries in the UK around text and data mining, supporting digital humanities, training people in data literacy, all the work around learning analytics, and also developing better data about users, call that library analytics if you like: all of that, to me, is part of the artificial intelligence story. So we already know quite a lot about artificial intelligence; let's not start from the assumption that this is a completely blank sheet of paper. Also, in our daily lives, in ordinary knowledge work, we're encountering artificial intelligence functionality: auto-suggest, auto-correct, tools that correct our grammar, tools making recommendations and ranking searches, even something like Zoom with the captioning it does on talks like this. That's using types of artificial intelligence. Similarly, things like auto-summarisation. And what really excites me is the greatly improved quality of translation tools, meaning we can access knowledge written in other languages. These are all, I would say, applications of artificial intelligence. The reason these new tools are possible is really that Google, Amazon and similar companies have huge quantities of data; that's what's enabling them. So we've been living with artificial intelligence in our everyday ways of searching for quite a few years, and we shouldn't forget that history; there's a lot about artificial intelligence we know already. At the same time, quite a lot of the interviewees in the project saw much artificial intelligence talk as basically hype, particularly, say, in the legal sector. Some of the really big law firms in the City of London are latching on and saying, well, we're doing artificial intelligence, but it seemed they weren't doing very much that was new.
The technologies weren't particularly mature, and it was more about making the law firms look cutting edge by latching onto the latest technical trend. So again, there's a lot of hype around artificial intelligence. I think that's part of what makes it interesting, but we can have a role in reminding people that a lot of it is hype, and that we're often talking about things we could do quite well before anyway. Potentially there are some exciting possibilities in the future, but some of it is just hype. And then I'm really interested in some quite critical perspectives from critical librarianship on artificial intelligence. There's a very good article by Mirza and Seale that looks at the way a focus on technologies, which we often have in the library world, we write lots of articles about technology, can perpetuate certain myths and ideologies. One is technological determinism: the idea that everything is going to change because of technology, there's going to be a transformation, and really we can't do anything about it. Another is technological solutionism, the belief that very complex social problems, in our society and globally, can be solved by technologies alone. We've probably encountered that many times, like the belief behind the One Laptop Per Child project that simply providing children with computers could solve our problems. So ideas like artificial intelligence, which are getting so much hype and media coverage at the moment, are ideological: they make a statement about the role of technology, and we might be quite cautious about how we relate to them. Mirza and Seale are really critical of this idea of a technology-driven future.
They see it as a mask for white masculine dominance, portraying certain changes as rational, neutral progress when they are not necessarily so, and they contrast that with the more human-centred values of our profession: a concern with service, emotion, care work. So while I think AI is a really interesting topic, it is also a very powerful technocratic discourse. As librarians I think we should be trying to use it, but we should be using it with care. Again, the work of Kate Crawford over a number of years has really shown that while we might see artificial intelligence as something rather immaterial, the reality of how, say, Google operates using AI is, first of all, that it appropriates and exploits our data as users; it exploits cheap click-worker labour in the Global South; it's based on extractive industry. We had a previous talk in this series from Federica Lucivero about the environmental impact of the digital, and indeed the nature of AI, the way it is trained by going over and over training data, does have very high energy demands, so there is real concern about its environmental impact. AI is an industrial complex, and we shouldn't lose sight of the fact that it's not just some immaterial digital reality. That means, in appraising AI and the digital turn in general, we have to think about their environmental and societal impact as well. And then the third layer, of course, is that there's been a massive wave of concern about the ethics of AI. One study, even back in 2019, found nearly 100 different statements from governments and technology companies about how to do AI ethically. We're probably aware of issues such as bias: we know that the AI industry itself is mostly run by males, the data it uses is biased, and the algorithms are biased.
So we have to think about the potential for the reproduction of bias through AI. There's also an acute issue of explainability and accountability, because of the way machine learning works it's hard to say why a particular result has come about. And if we can't say that, how can we really explain how a decision has been made? There's a risk of de-responsibilisation as well: who is accountable if the AI makes a mistake? There are issues of privacy and consent, and even of safety and security. So there are some quite complex ethical issues around AI, and there's been a storm of interest and some very interesting writing on this. IFLA's statement on AI, which is well worth reading, focuses particularly on the impact on free expression, and the chilling effects that surveillance through artificial intelligence might have. There's always a question about AI of how it affects our human agency. And more directly, perhaps, there's the power of the big tech companies who are developing this technology and rolling it out to different areas of application: how can our national governments control these very powerful companies? I think there's a really interesting debate to be had about who is responsible for thinking about what ethical AI is. Often it's portrayed as a technical dilemma for developers, but really it's a societal challenge, because the implications of AI affect many stakeholders, directly and indirectly. And there's also a debate about what we are trying to do here. Are we just trying to avoid harm, to have a neutral effect on society? Or are we trying to use computing for social good? If it's the latter, what's our concept of how, say, inequality is reproduced through inequalities in representation and data and artificial intelligence? How does that work? Do we have a sophisticated sociological grasp of how inequality occurs?
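The explainability point can be made concrete with a toy sketch. For a simple linear scorer (everything below, the feature names, the weights, the lending-decision scenario, is hypothetical and purely illustrative) we can decompose a decision into per-feature contributions. That is exactly what is hard to do with a complex machine learning model, which is where the accountability question bites.

```python
# Illustrative sketch: why simple linear models are "explainable".
# A toy lending-style decision with per-feature contributions.
# Feature names, weights and the bias are all hypothetical.

WEIGHTS = {"years_membership": 0.6, "overdue_items": -1.2, "visits_per_month": 0.3}
BIAS = -0.5

def score(features):
    """Linear score: each feature's contribution is simply weight * value."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = BIAS + sum(contributions.values())
    return total, contributions

user = {"years_membership": 3, "overdue_items": 2, "visits_per_month": 4}
total, contribs = score(user)
decision = "approve" if total > 0 else "refer to a human"

print(f"score={total:.1f} -> {decision}")
# Show why: every contribution is visible and auditable.
for name, c in sorted(contribs.items(), key=lambda kv: kv[1]):
    print(f"  {name}: {c:+.1f}")
```

With a deep network there is no such clean per-feature decomposition of a single decision, so "why did the system refuse this person?" has no straightforward answer.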
I know that RLUK is saying that part of the digital shift agenda is addressing inequality. But I think we've got to think about the origin of inequality: how does it work, how does it reproduce itself? That's not necessarily something we always think about, particularly perhaps in the academic library sector. So our ethical position on this becomes really important. Those are some quite critical perspectives on artificial intelligence, and they ask us some really difficult questions about how we relate to this technology. Coming at the issue of what artificial intelligence is from another angle, I've been developing some work that tries to integrate across a lot of silos of literature and ask: what's going to be the impact of artificial intelligence on higher education as a whole? There's been a lot of talk over quite a few years about artificial intelligence in learning, things like intelligent tutoring systems, which essentially adapt learning materials to people's learning style, or even their mood on a particular day. There are automatic writing evaluation tools such as you might find in Turnitin or Grammarly, which is always being advertised to me whenever I log on; these use artificial intelligence to help people write. There's a lot of interest at the moment: you'll have noticed that Jisc has just put out a call for institutions interested in using chatbots, probably not in teaching but more in administration, for keeping in touch with students. We can also see a wide range of applications in research. There's the idea of the robot scientist, a machine that can perform many, many experiments in a short period of time, but can also intelligently vary the parameters of those experiments, so it's doing a huge amount of research in a semi-independent way.
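Administrative chatbots of the kind just mentioned don't have to be sophisticated to be useful. A minimal rule-based sketch, with entirely hypothetical intents and canned answers, shows the basic pattern-to-response mechanism that simple deployments still rely on:

```python
# Toy rule-based "library assistant": match a query against intent
# patterns and return a canned answer. All intents and answers here
# are made up for illustration; real chatbot platforms add intent
# classification models, dialogue state, and handover to human staff.
import re

RULES = [
    (r"\b(opening|open|hours)\b", "We're open 9am-9pm on weekdays and 10am-5pm at weekends."),
    (r"\b(renew|renewal)\b", "You can renew loans online or at the front desk."),
    (r"\b(wifi|wi-fi|internet)\b", "Ask at the desk for a guest wi-fi voucher."),
]

def answer(query):
    """Return the first matching canned answer, or a fallback."""
    q = query.lower()
    for pattern, response in RULES:
        if re.search(pattern, q):
            return response
    return "Sorry, I don't know - let me pass you to a librarian."

print(answer("What are your opening hours?"))
print(answer("How do I renew a book?"))
print(answer("Tell me about quantum physics"))
```

The fallback line matters as much as the rules: a chatbot that hands unknown queries to a human is doing administration, not replacing the service.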
In 2019, Springer published what they say is the first academic text basically written by artificial intelligence, a book summarising the literature about a particular type of battery. And there's the idea of the smart campus: institutions are really interested in monitoring behaviour in classrooms, maybe prompting people to move, things like that. So in this work, published this year, I was trying to look at the wide range of applications that might happen across higher education. There's also some quite interesting research that asks how likely these things really are to be adopted. Because, a bit like the digital shift, really going into artificial intelligence is potentially such a big shift in mindset about learning that institutions will struggle to make that change politically, and there are cultural barriers to implementing these systems too. So what is actually going to happen around academic libraries? Some of these things could become really big, and then libraries will have to participate. How do we fit into this wider picture of how AI might be applied in higher education? That's the broader framing. Looking specifically at the library field, in the report I try to itemise a wide range of things that might be deemed artificial intelligence and that could potentially be used in our field. We could be using chatbots or voice assistants to talk to our users. We could be analysing data about users through things like sentiment analysis; it's not that cutting edge to do that if we want to look at user responses to services on social media. There are applications of a very simple technology, robotic process automation, to routine tasks in libraries. And there's the importance of AI as it becomes part of the employability of the whole student population.
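As a hint of why sentiment analysis over user feedback is "not that cutting edge", here is a minimal lexicon-based sketch. The word lists and the feedback strings are illustrative inventions; real services would use trained models, but the counting principle is the same:

```python
# Minimal lexicon-based sentiment analysis of user feedback.
# The two word lists are tiny and hypothetical, just to show the idea:
# count positive words, count negative words, compare.

POSITIVE = {"helpful", "great", "friendly", "quick", "love"}
NEGATIVE = {"slow", "confusing", "broken", "unhelpful", "noisy"}

def sentiment(text):
    """Classify text as positive/negative/neutral by lexicon word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

feedback = [
    "The staff were friendly and helpful",
    "The catalogue search is slow and confusing",
    "Opening hours are fine",
]
for comment in feedback:
    print(f"{sentiment(comment):>8}: {comment}")
```

Even a sketch like this, run over a stream of social media mentions, gives a rough temperature reading of user response to a service; the hard work in practice is handling negation, sarcasm and context.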
I think we've got a role to play there in teaching people data and AI literacy across the curriculum. We might be managing libraries as smart spaces. Of course, the most obvious application, particularly in the context of research libraries, is in knowledge discovery. That could be searching the literature, which is now so vast that on one medical condition there may be half a million published papers that no one can read in full, so analysing the literature and making new types of discovery within it is going to need AI. Or it could be analysing data, be that data from our collections and special collections, or from government publications, or social media data, whatever it is. Focusing on knowledge discovery: I remember that we did a report with Stephen Pinfield and Sophie Rutter in 2017 about the future of academic libraries, and one of the interviewees said something to the effect that they thought we might be entering a golden age for libraries. It strikes me, looking at this area of knowledge discovery, that AI offers us so many exciting possibilities. More and more of the world's content is born digital or is being digitised. We're getting more accurate translation for more languages, breaking the hold of English as the only language in which to write scientific works, which I think is a big barrier to engaging with other ways of thinking across the world. We could be using AI for fact checking; we can use AI for auto-summarisation of text to enable us to read more quickly. We can take the idea of adaptivity or personalisation that I talked about in relation to intelligent tutoring systems and apply it to reading content as well. Ultimately, it seems to me, the direction we're moving in is towards distant reading tools, where we can browse whole library collections in an immersive way. That is the kind of end point, I suppose, for AI in our context, and it's such an exciting possibility.
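Auto-summarisation, in its simplest extractive form, can also be sketched in a few lines: score each sentence by the frequency of its words across the document and keep the top ones. This is a toy illustration of the principle, not how production tools work (those are typically neural, and often abstractive rather than extractive):

```python
# Toy extractive summariser: rank sentences by average word frequency
# across the whole document, keep the top n, preserve original order.
# The example document is invented for illustration.
from collections import Counter
import re

def summarise(text, n_sentences=1):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        # Average frequency, so long sentences aren't automatically favoured.
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Emit the chosen sentences in their original reading order.
    return " ".join(s for s in sentences if s in top)

doc = ("Libraries hold vast digital collections. "
       "No one can read half a million papers on one condition. "
       "Summarisation tools let researchers triage the literature quickly.")
print(summarise(doc))
```

The gap between this sketch and a tool a researcher would trust for triaging half a million papers is exactly where the interesting library questions (provenance, bias, coverage) come in.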
And maybe beyond that there are new ways of discovering and encountering knowledge that we haven't had before. Equally, of course, there are many risks. I'm aware of the time, so I won't go through each of these points. But in particular, in the area of knowledge discovery, there are problems around the intensification of biases in our collections: our historic collections are very valuable, but they were created in a society whose values now seem somewhat problematic. How do we deal with that? And how do we deal with the way that AI, although it opens up a wider range of languages we can understand, can also exacerbate inequalities? We already know that Global South scholars haven't got the access to data that we hoped would be achieved by the open data and open access movements. Is AI going to privilege researchers in places like the UK and USA over researchers in the Global South? We've got to think about these quite big issues. In the area of knowledge discovery, just to draw the talk to a close, we've got quite a lot of choices when we drill down to today's services. I've been talking about quite futuristic things, but for today's services we can think about licensing proprietary systems, though that's very expensive. We can offer our special collections as data; maybe that's the best way for us to participate in artificial intelligence. It could be that we're more about procuring data and supporting data discovery. Or it could be about building communities that are using AI methods, or at least participating in such communities, with the excellent advice we can give on things like legal and copyright aspects, technical options, and discovering content.
Or maybe a big part of our information literacy and digital literacies work has got to be increasing data literacy and AI literacy. So we've got some big choices about how to position ourselves to contribute to the growing amount of activity in most institutions around artificial intelligence, across many disciplines: not just digital humanities and, obviously, computing, but many other subjects that are starting to use these methods. How do we get involved with them and support them? That's one of our big choices. But to end with a more visionary element: in an earlier paper, we talked about the paradigm of the intelligent library. We were thinking in terms of moving, as digital humanities scholars have argued for a while, from searching to find a text to read physically, to interacting with the full text of library collections. That might be the vision of what the future holds. Or maybe it's something slightly less grand, like the living systematic review, where you've defined a process by which you can refresh your systematic review. Probably there'll be a human in the loop at various stages, but I know the health sector is very interested in this idea of a systematic review that can be constantly maintained. Maybe that's the vision we have. Certainly, my overall message is to think in terms of how we can define a vision for artificial intelligence, rather than just passively responding to what other people define. Let's take hold of this agenda and define it. One of the things I was going to talk about is our role in all this, in terms of our skills as professional librarians. I think there are four things; let's focus on the positive.
First of all, there are many things we do in libraries that are highly relevant to constructing and supporting artificial intelligence and building its infrastructure: understanding metadata, understanding the importance of standards, understanding things like information governance, and our commitment to information literacy and to helping people really understand a complex information landscape. That's one thing we bring. The second is something I've been advocating for quite a long time with my students, Twidale and Nichols' concept of computational sense. That's not about writing code; we're not computer scientists, and we're not data scientists. Computational sense is a sense of what computing power can do for us, a vision of what the technology could do, and I think we have that. Then, going back to the issue of ethics: CILIP has a very good statement, I believe, of the values and ethics of the profession. In the academic sector we don't talk as much about the ethics of what we're doing. I think we need to do that a bit more, and think about articulating our core values more strongly, because we're not the only ones talking about data, everyone is talking about information in this area, but we can add a lot of value. And the fourth thing we offer, which is great, is that we are a profession that likes to share knowledge with each other; we have a lot of fora where we do that. So I think we should be very positive about what we can do for artificial intelligence and how we can help define it. I'll stop sharing there.
Thank you so much, Andrew, for a really invigorating and thought-provoking presentation, both in terms of what the possibilities and risks are, and what the opportunities are, but also which of those opportunities we need to pick and choose to begin with. That leads quite nicely to one of the first questions we have. You mentioned quite a few knowledge opportunities available through AI: special collections as data as the underlying foundation, but also data and digital literacies. I think you mentioned automated translation, linked with the decolonisation of research in the context of the Global South, and auto-summarisation of texts and audiobooks is coming through as well. Where would you start? Where would you want us to choose our position? That's a tricky question, because we're such a diverse audience, aren't we? But for research libraries, the collection as data is an obvious starting point. As a profession, we all have a really strong commitment to information literacy and empowering users, and I think we can really get involved in understanding AI ourselves and then helping other people to understand it. There's a really big market for that, because almost every subject in the university will increasingly want to use these techniques, and it's a big part of employability; it's not just computer scientists who need to engage with it. So I think we could have a really good role there too. And that links quite nicely into another question, which is about literacies and the role of the library in its civic mission. You mentioned the ethical issues associated with artificial intelligence. Do you see libraries having an increasing role in talking about the ethics of artificial intelligence not just within institutions, but beyond that, across our communities, as a civic role?
Yeah. If we're talking about public libraries, one of the people I spoke to for the report ran a makerspace, and it seems like that kind of unit, which we've got in academic libraries too, is a great way to help people engage with things like robotics and artificial intelligence in a very practical way. We can also try out things like voice assistants; I don't know if you've got one of these at home. Actually, I think they're pretty useless, but maybe we can adapt them to answer library queries, and one that doesn't just look everything up in Wikipedia would probably be quite good. Good point. There's a question about your mention of AI being used for fact checking: do you think AI can have a strong role in the destruction of the fake news culture? That's a very good question. A lot of the evidence at the moment is that, as the question implies, it's being used quite intensively to create fake information, deepfaked videos and so forth. But I think it's just as likely to be usable in the opposite direction. That's really down to commercial interests that are beyond our control. Our role is really to keep informing people and to promote criticality about information. That's what we've always been about, hasn't it: getting people to ask questions about the information they're getting. The best thing we can do is help people think like that. Thank you. There's also a question about the librarian's future role in licensing proprietary systems. One of the problems with those kinds of systems is the lack of transparency, and I think you mentioned in your talk the opaque nature of machine learning algorithms as black boxes as well.
Do you feel that librarians have a role in promoting transparency, and therefore specifically in challenging the licences on proprietary systems, so that we can actually use things like text and data mining and, basically, future-proof our efforts as well? Yeah. With these proprietary systems it's often for one publisher's set of works; it's not everything. So libraries have a really important role in informing people about the information landscape, the data landscape, so that they can really understand what the results they're getting back actually mean. For example, in the report I talk a lot about DAMA, the professional association for data managers, but their view of data is, I think, very objectivist. Librarians, by contrast, have an understanding of what I'm going to call by an archival term, provenance: the provenance of a collection, meaning you understand how it was created and the limits of what it can answer. We have to continue to be experts on content, data and information. To me, that's going to be pretty important for data scientists, who are often quite technical: they're good at applying the algorithms, but they don't really understand the way the data was created and the limits of that data. That's the most important role I see libraries playing. It's a really interesting point, because one of the things that's quite active in my mind is the role of librarians in curating that data in a way that is more diverse and well-rounded, with the sources and the provenance well explained as well. Because, as you mentioned, the algorithms are biased and the data is biased, so the outputs are naturally going to be biased; what can we do to reduce that, if not completely remove it?
There's another question asking your thoughts on the academic library community taking more control of AI in the wider debate: what would that process look like, and can we actually stand up against the tech giants? Well, the answer to that is possibly no, on our own. Google, Amazon and the like are going to have a lot of power through artificial intelligence, and we can't really stop that. But within our own sector, within education, we can have an influential voice to raise issues like the ethics of AI and the real meaning of data, its provenance and those things. Getting together as a community is one of the things that librarians do seem to be really good at: we get together, we make statements, we try to influence policy. To me, that's the kind of thing we can do as a profession that maintains quite a strong community. That's not true of many professions; one of the problems with the ethics of data science, for example, is that data science doesn't yet have the kind of professional organisation that we've had for a long time. There's also a question, and I can relate to this one, about collections as data as a foundation or building block for further AI work. Often the difficulty is where to start: how do you begin that journey, and how do you establish it as an important priority within your library? Do you have any advice or thoughts on that? My first thought is, obviously, projects. RLUK has very much favoured the idea of librarians being researchers, so getting into your digital humanities departments and talking about developing a project together, using data the library holds, seems like a great starting point. That is how we tend to start with these new things, isn't it: a project, through which we build up experience and knowledge. So I guess that is one way into it.
I did have something else to say, but I can't remember what it was; can we come back to that? Okay. Then there's another question on the role of the library in education. Over time, libraries have influenced the open research and open education debates and are now considered partners in those discussions. That kind of partnership still doesn't exist on the data science side: there are lots of courses on computer science with data science, or in engineering, but libraries are not thought of as contributors to that discussion, or as lecturing or adding their own views to that debate. Is there any way we can shift that, and is that through the data route or through the policy route? I definitely think there is. For example at Sheffield, and I'm not picking out Sheffield particularly, our school runs a data science course, and it's quite focused on technical skills, but I think there's definitely a place in that type of course for case studies. A lot of courses like the idea of case studies, so if you've got a case study of a library getting involved in artificial intelligence, I think that's something a lot of teachers would like to use. And that goes back to the previous question, and to the other thought I had about the collection as data.
At Sheffield again, and I'm sure it's happening at other institutions too, there are more and more meetings across disciplines discussing how we relate to artificial intelligence: social scientists, humanities scholars, engineers, scientists, many different disciplines coming together and forming a community. I think the library can play a big part in those kinds of discussions. Researchers come and go; the library is going to be there for a long time, and it can contribute a different type of expertise in things we've been doing for a long time, like in the text and data mining area. I know there are colleagues at many institutions who've done work on copyright and on supporting people to choose open source tools for data analysis. So there are lots of ways we can get involved in those communities. And linking back to the agenda about libraries being part of research, actively involved in research projects as researchers themselves: that might be a way into these emerging communities around sharing skills in data analysis. Thank you so much, Andrew, that's really helpful context. We have another question on whether you've got any ideas about how libraries in future might merge artificial forms of knowledge generation in society with the more traditional forms of research and publication. Do you envisage a hybrid model in the future, where some knowledge creation is artificially generated and some is still traditional?
Wow, that's a very interesting question. We've already got this book written by AI, and all it did, I suppose, was synthesise existing literature; I've never read it, I'm not an expert on batteries. But if you can do that, what are the next steps for AI having an active role in doing research? I mentioned the robot scientist idea; apparently such a machine can do more experiments in a year than something like 6,000 PhD students, so the scale of these things is massive. Is knowledge generated by these means really different from what we're already doing? How do we differentiate it? It's a very fascinating question, and I don't really have an answer. Thank you. That reminds me of a really interesting IBM project in 2019: a live debate competition, on whether we should subsidise public schools, between the world champion of debating and an artificial intelligence system, delivered through an earpiece to the other debater. What was really enlightening is that the AI had more facts but less emotion, and eventually people were swayed more by the emotional side. There was also an interesting dynamic in that sometimes less is more: you don't convince people with facts, but with a few well-crafted arguments. However, there are counter-arguments: AIs are also getting quite strong on emotional calibration. So it's such an interesting landscape to keep an eye on as things evolve over time, and I'll share a link if anyone is interested in watching that debate. Let me end by asking one last question, a slightly cheeky one from my own side, if that's okay, on the element of how bold libraries should be. How bold should we be institutionally in making this our agenda, when many people think of it as an IT agenda or an academic agenda? That is a good question. I think it's fair enough for us to assert ourselves if we can articulate clearly what our role is. I
went through a few different types of things we could do, and I think those are all very positive contributions. Increasingly, librarianship is about collaboration, isn't it? In everything we do, we're collaborating with academics and with other service units like careers services and computing. So I don't think we have to be bold in the sense of pushing anyone else aside, but I think people will respect the fact that we've got a position and quite a big contribution to make. That is absolutely true: collaboration over competition, but also being open and clear about what our role and responsibility in that discussion is.