All right, I think it's time to begin. Welcome, everybody. My name is Cliff Lynch, and I'll be opening up this presentation briefly. You have found your way to the session providing a report on the progress of the joint ARL and CNI task force on AI futures scenarios. We're going to try to walk you through where we are, show you a little of how the scenarios are shaping up, and take your questions. First off, I want to share the membership of the task force, which has been working diligently, including a two-day in-person workshop in Washington, DC in February. I also want to recognize my colleagues from ARL, Judy and Cynthia, and Susan Strickling, who has been working with us as a consultant on the scenario planning and development. So that is the task force. Here is the timeline. Briefly, where we are: we are talking through the scenarios at CNI in several venues. We held a focus group this morning; we have this session; and we have some breakfast table discussions that you're welcome to join tomorrow morning. After CNI, we will do a session similar to this one, via Zoom on the 9th of April, for people who couldn't be with us in person. Following that, we will take all of the input we've collected from these sessions, make some revisions, and talk through those revisions with the task force. We expect, in very late April or very early May, prior to the ARL spring meeting in Boston the week of the 7th of May, to release the actual text of the scenarios for public review and comment. There will be a session on the scenarios at the ARL Boston meeting, and following that meeting we will regroup as necessary and make any final adjustments. So that's roughly the timeline for the next two months as we see it right now. With that, let me turn to the scenarios themselves. Some of you will be very familiar with scenarios and scenario planning.
If you aren't, I don't want to spend a lot of time today on scenario planning as a practice, or on scenario construction; I want to get on to the scenarios themselves. But I do want to stress a few things. These are some of the questions that we were trying to get at in the development of these scenarios. There are a lot of uncertainties. Scenario planning, broadly, is a way of engaging with an uncertain future and devising strategies to address it. And if there ever was an uncertain future, it's the question of what society broadly, the research enterprise, higher education, and cultural memory are going to look like 10 or 15 years out, as some of the developments in AI and machine learning take their course in society. So we collected an impressive array of uncertainties; these are some of the highlights. Now, the magic of all of this is to take these issues, these uncertainties, which I would note are all interdependent and interrelated in complicated and messy ways, try to isolate them, and try to come up with a sort of aperture on which we can hang some scenarios, exploring as many as possible of the alternatives for how these will play out in a series of four scenarios. If you're interested in more about scenario planning, by the way, there's a wonderful book, The Art of the Long View, which I recommend if you want to dig into this. Here is the way we isolated that; these are the attributes of good scenarios. And here is the way we built out that aperture. The one thing that may not be totally self-explanatory is this axis of intentionality. That's a tricky word, because what we are really talking about on that axis, to some extent, is the level of social intentionality. At one level, you can talk about a society being intentional about what it's going to do with AI.
At another level, you can talk about a society not being very intentional, devolving intentionality to a variety of commercial or other interests that pursue their own parochial intentional interests. That's part of what that axis is intended to get at. My colleagues here, four members of the task force, will explore these four quadrants, the four scenarios, in a moment. I just want to leave you with a couple of really important caveats to keep in mind. The four scenarios all are quite different, and all are plausible in a certain sense. None of them are paradises; none of them are utter dystopias and hellholes. They all mix some good things and some bad things, and you can make your own judgments about the mix in the various scenarios. But the point here, at least as I think about it, is to try to gain insight into the future and into what strategies are likely to be successful. The scenarios are not predictions. In fact, whatever future arises will probably draw elements from all four of the scenarios, and very likely a couple of other surprises that we didn't see coming. So please don't look at these as predictions of the future. Don't get invested in one of them at the expense of the others and say, that's my preferred future, how do we get there? Use these as a mechanism to gain insight. And I think that's about everything I want to say here. So I will pass the mighty clicker to my colleague, Keith.

Thank you, Cliff. Good afternoon, everyone. Cliff already mentioned the horizontal axis around intentionality. Just to complete the picture, the vertical axis is about the societal adaptation of AI: will it be extensive or limited? The first scenario is one with an anticipated process and design and extensive adaptation of AI. We can think of this as a scenario in which human and technological capabilities are integrated. And remember that these scenarios are set in 2035, only 10 years from now.
And they are built upon drivers and signals of change already present in our environment. Two broad themes have aligned to shape this scenario. In this world, there's an extraordinary convergence of advances in human-computer interfaces, based on enhanced augmented reality, and a range of AI technologies, which together have created an unprecedented integration of human and computational capabilities. There has also been a thoughtful and intentional design process, and financial investments have been made that allow AI to integrate with humans seamlessly, responsibly and safely. These new interfaces transform research, knowledge development, collaboration and communication, leading to AI-enhanced humans and teams with super abilities and enhanced human agency. This was only possible through a healthy collaboration between the public and private sectors, including government, industry, civil society, the scientific community and higher education. I represent this on the slide by the triple helix, which some of you will be familiar with as a way of encapsulating that integration. Due to these relationships, society created a set of responsible guidelines and standards around the design and deployment of AI that ensured the safety and inclusivity of the tools. Key amongst the actors was a digitally literate public, involved in the development of the guidelines and their deployment. And thanks to positive experiences with LLMs and AI assistants, public trust in AI is growing. In our sector, many researchers train, learn and develop their expertise outside of traditional higher education, with disruptive new models of precision learning through personalized, integrated AI. Research collaboration happens easily as topics and curiosities attract a multitude of researchers to explore and experiment in new and exciting ways. Indeed, everyone can question and contribute to research.
A new era of research, driven by enhanced levels of curiosity and/or the complexity of the problems being researched, overtakes disciplines and other human-contrived bounds and organizing factors. AI systems revolutionize scientific workflows by accelerating processes and providing predictions with near-experimental accuracy. With open AI and private- and public-sector collaboration globally, measurable progress is being made on grand challenges facing humanity, at a pace never before conceived. Research libraries focus on the researcher and learner experience, on creating the conduit to data, software and knowledge, and on stewarding the ever-expanding body of information as the technologies to navigate and utilize this corpus continually evolve. The most advanced libraries operate almost exclusively on an AI platform. A transformation of work is well underway, with the introduction of AI-enhanced humans into the workforce and robots being deployed to fulfill rote, repetitive activities. Public policy and societal debate are setting the stage for future generations of human-machine interfaces, which will likely include various kinds of direct brain-computer interfaces and neural implants. If I were to pose some strategic questions for you to reflect upon, from a library perspective, aimed at the ARL and CNI communities, I would encourage you to think about: How can the library best leverage its interdisciplinarity in the research and learning process? How can cultural memory be preserved in this fast-moving scenario? And how can the research and knowledge ecosystem be optimally positioned for learning in this scenario? With that, I'm going to pass on to the second scenario.

So for scenario two, this is a world in which there is limited AI adaptation for meaningful and positive societal transformation, due to the controlling interests of a few commercial tech giants.
AI has its greatest impact on the behavior of people and society in areas like urban planning, entertainment, social media and public education. The tech giants have focused their artificial intelligence research and development on readily profitable, relatively uncontroversial and lower-barrier applications. The public embraces these new consumer AI apps and gadgets: they do useful things, they work well enough, and they are highly entertaining and engaging. Access to deeper, more sophisticated and impactful applications of AI is generally limited to the elite, those who can afford it and those who are part of the technocracy. Consumer data continues to be compiled, mined and leveraged to expand markets and foster consumer dependence, solidifying an oligarchy of a few influential and powerful technology companies. On the whole, AI is implemented for corporate and individual gain. However, urban governments and city planners are enthusiastic about the economic benefits of AI in intelligent cities and entertainment districts. They focus on making their cities attractive to big-tech applications, to drive sustainability, hospitality and tourism. Thus, the economic and digital divides between rural areas and urban centers grow. Tech giants and entertainment organizations also drive innovation in how individuals interact with each other and with real, virtual and hybrid worlds, through advanced technology that incorporates artificial intelligence to create enhanced environments and fantastic experiences. To protect their market share, to respond to the general societal distrust of corporate interests, and to improve the quality of their products, the tech companies ensure that their AI programming includes mechanisms to accurately discern, identify and tag, with persistent identifiers, biased content and deepfakes.
Some amazing and novel applications result, such as Lazarus, which allows historical figures and ancestors to be reanimated in highly realistic past worlds, in an interactive individual or group setting. These immersive personal experiences transform family and community life, education and entertainment. Dependency on a variety of visible and invisible AI technologies, and the proliferation of closed, seemingly innocuous AI applications, have led to a very low rate of digital literacy among the public. Consumer AI development moves quickly into the space of learning experiences, leapfrogging existing traditional educational models: most consumers are able to access affordable, online, specialized learning experiences in place of traditional degree, diploma and continuing education programs. A few highly prestigious institutions serve the elite learners, who also seek a campus legacy experience. The tech organizations themselves identify elite learners early and selectively train and educate them to be adaptive workers in advanced industries. Government and policymakers are not deeply engaged in oversight and follow the recommendations of tech experts, leading to a period of low regulation, strong consumer markets and a robust tech industry. This is a world in which AI's impact on the broader research and knowledge ecosystem is relatively low. The applications of AI in research that do emerge are primarily controlled by tech companies in the private sector. An alliance between an elite research enterprise and the technology companies emerges that reconfigures the research and higher education landscape. The potential of discovery and research is thus greatly restricted. Some of the more active areas include pharmaceutical discovery and materials science, where the tech firms can readily monetize their AI investments. AI research happens in costly, centralized AI computing centers, many of which are owned either by large private research institutions or by tech companies.
The result for the higher education sector is the consolidation of research activity among highly resourced programs at elite private institutions, while many smaller and poorly funded community colleges, universities and state institutions struggle for resources and survival, focusing primarily on human-based education enhanced with online and virtual options. And they are among the few centers of higher education that champion education and digital literacy skills. There are occasional startling breakthroughs from the university sector, where new technologies and algorithms are developed that are much less resource-intensive than the dominant industry practices. And as the shift in climate and conditions on the planet continues to worsen, the tech companies finally begin to proactively collaborate with policymakers, the private sector and various research institutions on novel approaches to mitigate the dangerous state of the climate and global systems.

In my scenario, the third one, intentionality in AI process and design is inadequate and societal adaptation is limited. In a world in which AI is adopted too quickly and implemented poorly, and society doesn't take the time to understand it, the result is overreaction and overregulation, or bad regulation, and increasing separation between the haves and the have-nots. The hallmarks of this scenario are massive data privacy breaches; poorly thought out and unregulated systems and services; low digital literacy; poor data integrity; and a world in which history can easily be rewritten. A couple of highlights of this world: the complete regulatory failure surrounding machine learning and the underlying data means that bad players can flood the market with poorly implemented systems. And in a world in which AI is now training itself on built-in biases and false information, it is harming some parts of the population more than others.
And that leads to an ever-downward spiral of reactions, in which some try actively to improve the data and the models, but others take advantage with little care for the public good, and the government reacts by trying to regulate in an environment of fear and distrust. Many people and institutions are left behind as the world validates and strengthens its existing flawed systems, which exclude many and strengthen and enrich a few. One of the most challenging aspects of this scenario is the missed opportunity to do real societal good with AI. There is no mechanism for harnessing large-scale data, for instance patient data or climate data, for the public good. And the scenario, as I said, tends to lead to overreaction, both in regulation generally and in often focusing on a very limited number of things, like overregulating in the name of national security without thinking about the broader environment of what regulations might be needed. Some of the areas of importance for our community include those institutions with the means to do so creating models to successfully apply AI in research and learning. Government funding is likely very scarce in this environment because of the distrust of AI. So rather than being aligned systematically with large-scale societal objectives, funding focuses on the niche problems and technologies that happen to be of the moment, the shiny bright objects that governments get interested in. Research libraries then face increasing expenses, less independence and forced reallocation of effort, even among the well-funded programs. Many libraries end up shifting to a curricular focus, ensuring the quality, integrity and provenance of content used in educational programs. But as our speaker this afternoon said, we don't want to be roadkill.
My particular concern in this scenario is that I tend to think of it as crying or howling into the wind, trying to deal with this self-destructive spiral, in a world in which the students themselves may be skeptical of our efforts and may not even come to us as a source of trust. And so by trying to stem the tide by focusing just on education around data quality, provenance and integrity, we certainly are trying to make a difference in this world, but it's one in which we may not have the position and power to actually affect the scenario much. Some of the strategic questions that we think apply to this scenario, for us all to think about: What can be done to address issues of bias and lack of data integrity in a world in which there is very little trust and there are very few groups to help lead this process, where everything is really driven simply by who has made the money, and those who have may not care at all about the data integrity or the bias implicit in the data, and in fact might be profiting from it? What would be the optimal data management model for libraries in this society? And how can research and knowledge ecosystems be optimally positioned for learning in this scenario? And now I will turn it over to you.

Let me click the slide. Okay, in scenario four, we have extensive societal adaptation of AI and inadequate intentionality in AI. This is a world in which AI becomes an increasingly independent partner and collaborator in research and learning, leveraging the expanding open resources and data made available to advance understanding well beyond the research advances possible by humans without AI. Some highlights of the scenario: we have achieved effective guidelines and standards to ensure data integrity, provenance and persistence, which enables the advancement of AI.
We see growth in digital literacy and early education to build a strong foundation for interaction with AI, and the skills to discern between false, inaccurate content and real, accurate content. Open access to knowledge and data is growing, along with the quality and integrity of data and knowledge sources. Human and AI faculty work together, with AI expanding its role in educating humans and even replacing teachers and educators with a fully personalized and customized model. But we lack intentionality in thoughtful design and deployment. Society has knowingly and unknowingly given up agency to AI. We see AI developing new knowledge, products and services valued by AI counterparts and the human population. AI has surpassed human capacity in a few areas without guardrails, and begins to develop reasoning and creativity. In a race to catch up, governments, governing bodies and tech players scramble to regulate autonomous AI. As you can imagine, policies are reactive, and a growing stream of unique issues continues to mount. Societal control is eroding, but with little notice by the public. AI crosses the boundary from serving humans to leading humans. We see AI actively involved as autonomous collaborators and leaders, having moved from co-pilot to collaborator to leadership roles. Lawmakers discuss the rights of AI in comparison to those of humans. And many believe that a Nobel Prize may soon be awarded to an AI acting as lead researcher. Areas of importance for our community include changes to traditional library functions and systems. We see traditional library functions and information management embedded in many AI research platforms. Traditional systems of scholarly research, publishing and communication are replaced by adaptive, real-time systems that are vetted and kept up to date through AI agents.
We see the research and library enterprises both undergo significant restructuring, destructuring and pruning of the human workforce, with the inclusion of AI workers and robotics. Contributing factors include lack of investment in public institutions and increasing labor costs. We also see transformed, newly fluid models of research and discovery; new financial models emerge and vary by discipline, and AI scientists lead to rapid advancements in many fields of study. In this scenario, cultural heritage and memory become less important to autonomous AI. Some strategic questions for the ARL and CNI communities to consider: How can the library maintain relevance in AI-led research and learning models? What is the library's role in expanding and maintaining open science? How can the research and knowledge ecosystem be optimally positioned for learning in this scenario?

Thank you so much for laying that out. This is the part where we'd like to engage the audience in conversation, and hopefully there are questions about this set of possible futures. While you're thinking about that, or reflecting on what you've heard, I do want to pose a question to the panel. This is all very interesting: we've got this axis of intentionality and design, social adaptation on the other axis, and these potential, plausible futures. How are you thinking about these? Why is this a helpful tool in planning as we all face this very uncertain future? How is this helpful as an exercise, and how might you, as leaders, think about using this in your own institutional context? Maybe take the first go at that one.

I think Cliff made the very important point that we are not trying here to predict the future. That is not what scenario planning is about. What we have done is extrapolated from what we see today to a future point, roughly 10 years from now, but it could be eight years or 12 years; you can make your choice there.
And we have four scenarios which are very different, but each of them is plausible; you could work back from each scenario to the evidence we see today. So for me as a library leader, what I need to do is figure out: okay, these are all plausible. What should I be doing today to prepare for those futures? If I think across each of the four scenarios, I can begin to ask questions. If I know that I'm going to be in the third scenario, what are the capacities I have today that I could leverage to thrive? What am I lacking that might be worrisome? What are the strategic investments I might make to be better prepared? I can go through that question set for each of the four scenarios and begin to build up robust actions, things that will serve my university well, maybe not in every scenario, but hopefully in at least three of them. And then, what are the contingent actions I might take? If I knew for certain that I was heading primarily in a particular direction, are there things that would be pointless to invest in today? If I knew that ultimately scenario three was going to be the dominant feature in 2035, what are the contingent actions I might take to help us thrive? And then I can put in place some sort of tracking or early-warning system, so that every few months I can update my thinking and figure out where we are going: are we seeing a particular pull or momentum in one direction? So it is truly thinking about the actions and investments today that will best serve us. Given that all of this is underway and a lot of the building blocks are already in place, it's hopefully about nourishing and nurturing what we're doing well and figuring out where the gaps exist.

I think another thing I've been thinking about, maybe especially because I had to focus on the doom-and-gloom scenario three, is how much that scenario reflects the reality within the library, and how much it does not.
Are we somehow counter to and different from the scenario, or are we part of it? I would love to think that we are different, that in a world in which there's no support for caring about data integrity, the library will care about data integrity. But I think it's important to look honestly at ourselves across the library and realize that people are always affected by the world around them, and that it may not be as idealistic as I'd like to think. That is worth thinking about in itself, and worth taking back to a library: how do you live in a world that is shaped very differently from the way you're thinking, or want to be thinking?

I think my colleagues have raised some really great and practical ideas of how these scenarios can be used. One of the other ways I see them being used is to work with the scenarios in my organization and have individuals think about the complexity, the number of unknowns, the number of drivers in these scenarios, the competing interests across our society and, although the scenarios don't speak very directly to it right now, the competing interests across the globe. Because I think it's very easy for us to see a very valuable and important role in protecting data integrity and supporting research, but we already work within a very complex system and structure, and these technologies themselves open up new possibilities that we haven't experienced before. I forgot to mention the implications of scenario two for research libraries; that's the one with the technology giants driving the research agenda. In that scenario there are fewer research libraries than there once were. They serve the well-resourced, private or elite programs that can offer AI-enhanced research and learning tools, and they also serve the technology companies and the technocrats.
And in that scenario, the community colleges and state institutions that are struggling and continuing to work are championing digital literacy; I imagine them to be the advocates and, I guess, the resistance fighters. But the one question we haven't asked, and I've asked it of the task force just recently, is: is there a role for the research library in all of these scenarios? I think that's a really important question for us to ask, because we've only gone out 10 years. If we went out further in imagining these worlds, I think it is a very important and valid question that we need to address sooner rather than later.

Great, great comments. What I'll add, which I'm particularly excited about, is having a framework and a tool that I can take back to my institution to engage others in conversations, understanding that the future is dynamic and is not going to be any one of these scenarios, but that we can start to think about how we might chart paths in the various quadrants, if you will. This type of exercise brings me back to the point that it is really about the dialogue we're having in our institutions and our communities, and about the planning process, not really the plan, because we don't know what the future holds. Having the conversations, and understanding what might happen and what we might do as we navigate that future, I think is going to be really important.

Thank you, those are great answers. So I'd invite the audience to, oh, sorry, I've got some water here; I haven't touched this. You've gotten a little bit of a preview of what the scenarios are going to look like. As Cliff said, they're going to roll out between now and early May, with some other opportunities for comment, and they're going to roll out with a persona attached that will animate them, help explain them, and bring them to life.
But I think the panel would welcome questions about the task force, the process, the scenarios, and how we think this is a contribution to the community in this set of uncertain times. We welcome your questions, or thoughts about things you find plausible, engaging or implausible in the scenarios.

In terms of the process, I'm really kind of curious as to what the debate, discussion and thinking were in coming up with the two axes, and in deciding that that was how to arrange the scenarios. I think they're really interesting, and I think they lead to some interesting conversation, but that seems like a key part of the decision-making process.

I'm happy to get us started. We were led by a fearless leader who helped us understand what we were looking for, and we had the entire wall papered with different axes that we could be talking about. What we learned about the actual process of creating axes that will really be useful is that it is important that every one of the quadrants ends up with a really different scenario. And we discovered that that wasn't always true: you'd be looking at one axis and then at another, and suddenly two scenarios would end up sounding so similar that the pairing really wasn't a good one. Another thing we discovered was that as we went down certain roads, we started grouping the axes. Almost the first thing we did, after we looked at everything we had and made lots of notes, was to start saying, well, okay, you could see that one as just a subset of this one. So it really was about brainstorming a large number of candidates and then honing down to ones whose two ends on each axis could really be opposites of each other, not just one a variant of the other. Anything else? That's what I remember from that wonderful in-person meeting where we came up with this.
There was an earlier slide showing the critical uncertainties that we had identified from a combination of interviews with experts in the field, literature surveys and so on. Thank you, let me find it. That one. And that formed the basis of beginning to work towards alighting upon the two axes. Now, we could have gone in different directions and come up with good scenarios, but I think we really did try to embrace as many of these uncertainties as possible in the two axes we worked with. Maybe I'll leave that slide up, just to show the full set.

Irene Harold, Virginia Commonwealth University. The one thing that I thought was missing, or could have been teased out a little more, was privacy, licensing, and the role of the library in terms of our institutional repositories and our open access materials, and how we're feeding into AI or not. I think part of it was maybe teased out in the tech-company scenario, but it's not just about tech companies. And the last thought I had as I was listening to you was a conversation I had yesterday with an alum in San Diego, who said to me: what's the library doing with AI? My child is in a graduate program, an MD program, at a different institution, and they're using it to generate all their research already. And I was like, okay, we've just started to talk about a product like Kineas that we're going to train and use. Maybe we will, maybe we won't. But it's very, very interesting, and it made me think about those issues in regard to this.

They're all looking at me. So, good points. I think what's critical here is to think about that key question about the potential of AI in research and knowledge. Undoubtedly, libraries are part of that knowledge ecosystem.
My personal view, not that of the task force, is that we need to think carefully about the role of the library in the campus environment, to understand the AI directions of research and education, and then to figure out, and this is almost your question about the survival of the library, where we can add value better than anyone else can, and invest in that space. By taking the scenarios at this higher level of the research and knowledge ecosystem, you can begin to reflect upon the campus environment, or the research environment, in which libraries may or may not exist in 10, 15 or 20 years, and begin to answer that question. I worry at times that if we come in very quickly at a granular level, you know, should we invest in Kineas? Come and talk to us about that; we've been working with it for a couple of years. Then you miss out on some of the bigger questions. You might end up in a scenario where research is so very different that the notion of the library as we understand it today becomes almost a secondary or tertiary concern.

I would also say there was an interesting presentation earlier today about how our repositories are not really well designed right now to support AI or machine learning. And I think you're absolutely right that we have a lot of data; we have a place, and a potential place, in thinking about data curation. There are all sorts of ways in which the library has a lot to offer. But if we're going to really do that well, we have to also look very critically at ourselves and understand that what we've been doing has been done in a world that expected all of it to be used in a different way than it gets used in AI and machine learning.
And we're going to be left behind if we don't think differently about that; we'll sit there with a lot of wonderful information that won't be what's being used, because it doesn't work well with the way researchers, or even tech companies, want to do that work. So I think it's absolutely worth thinking about: if we want to have that place in the future, what is going to be demanded of us, and how do we need to change to meet that?

Okay, I think we're at time. We can do one more question, a quick one. I didn't see you there; the lights really are bright.

It could be very quick.

Of course.

Did you use generative AI in the process? I am serious.

That's a good question. We did not, but let me look at the drafters over here. I don't know. Very purposely, no.

Okay, now I think we're at time. But thank you very much for your attention, and please thank the panel.