Alison has been CEO of PLOS since 2017; before that she led the University of California Press. Alison is chair of the Board of Directors for the Center for Open Science and also serves on the Board of NISO and the American Chemical Society's Governing Board for Publishing. Alison: Thanks so much, Jane, for the introduction and hello to everybody over in the UK. I actually spent the first part of this week in Cambridge, Massachusetts at the MIT Libraries Board meeting, so I feel as if I've spent my whole week immersed in the world of libraries and building open, equitable scholarship. So being with you today is a great way for me to bookend my week, and I'm really grateful for the opportunity to share my thoughts on these issues with you. What I want to do today is to try to weave together an assessment of where scientific publishing finds itself, what prevents it changing more radically, and how we can carve out that more radical pathway to create a system that better serves us all. But before I dive into that, I wanted to make a quick note on terminology. I'm going to be using the term open science a lot today, but I want to be clear that I use it in its broadest sense. PLOS itself is focused primarily on the sciences, but I know all too well from my years leading the University of California Press that the arts and humanities have unique and different needs, and I certainly don't mean to undervalue those in the terminology I'm using. So with that, I want to start with an assessment of where open access finds itself at this point in time.
And if we look back to the origins of the open access movement, it was primarily focused on achieving open access to research articles. Looking at PLOS's history, back in 2003 our founders aimed to catalyse a revolution in scientific publishing. This is a direct quote from the founding documents: the goal was to eliminate monopolies over essential published results, diminish profit margins and create a more efficient market for scientific publishing. And I don't think they were alone in having pretty ambitious, and in some ways pretty high-minded, aims for open access. So 20 years on, it's worth reflecting on how well we're doing. There's been quite a bit of discussion about this recently. We can certainly see progress (for example, Delta Think's annual market sizing back in 2022 indicates that over 50% of publications that year were open access). But in spite of that progress, rather than achieving greater equity, a focus on making research truly available to all, the elimination of monopolies and a more efficient market, what we've arguably ended up with is the establishment, and in many ways the entrenchment, of APCs as the dominant business model, which in turn is pushing up costs; the continued dominance of big deals through so-called transformative or transitional agreements; and a consolidation of the market. Just two weeks ago, we saw another independent publisher exit when PeerJ was acquired by Taylor & Francis. And the economic barriers to participation that prevented people from reading have now become barriers to sharing itself. Overall, my primary concern at this point is that this high-minded transition to open access has shifted from being a movement for systemic change to becoming more of a business model for supporting the status quo.
But during the pandemic, we were exposed to a very different model of what's possible. We saw global scientific collaboration on an unprecedented scale: thousands of scientists across the globe focused urgently on a single problem. And that also forced some critical breakthroughs for scientific publishing. Information that had previously been locked behind paywalls was opened globally. Results were shared immediately. Preprints and other forms of online sharing became the norm. And even in academia, the usual secrecy and hoarding of data for future publications and grants was eroded by the urgency of the moment. All of that happened because, in many ways, it was a matter of survival for all of us. But the fundamental flaws in the way research is traditionally shared were also laid bare during the pandemic, and I think they clearly illustrated the need for the more radical change that I mentioned earlier, particularly in the way we share and communicate the results of research. The accessible publication not only of the research itself, but of the data and ideas arising from it, is a pretty fundamental part of how science functions and ultimately advances. But unfortunately, that's very different from the way the system works today. And without this more radical change, the potential negative impacts spread well beyond the research enterprise itself. One that is very much on my mind these days is the continuing erosion of trust in science and expertise. At a societal level, trust in expertise and confidence in science are critical if we're to tackle the huge problems our world faces today. And as we saw during the pandemic, a lack of public understanding of science was a big contributor. The solution isn't simply to put science back in its black box.
But achieving this kind of systems change is hard and complex. There are plenty of people, mainly those who've benefited from the current system, who think it's just fine as it is. And unfortunately, many of those who really want to make change, such as early career researchers, lack the power and influence to do so. So despite wide recognition of the problems, the overall system for how we publish science just hasn't changed much, even in the digital age. Now, there are definitely commercial drivers at work: for many publishers, radical change is a threat to what's been a pretty profitable business model for many years, and they obviously want to protect that. But publishers also operate in a conservative system, and change is slow, blocked in large part by a pretty broken system of rewards and incentives for researchers themselves. Most established researchers have been working in these closed ways for years, even decades, and changing those habits requires some up-front time and effort. New technologies are helping with that in some ways, but it's ultimately behavioural change, and that's really hard. Scientists are just like the rest of us: they tend to repeat the behaviours that are reinforced and rewarded, and given the profusion of demands on their time, reviews of papers, reviews for grants, reviews for promotions, it's all too easy to fall back on biased proxy measures like journal impact factor. And so the challenges associated with changing these systems can at times feel pretty overwhelming. This kind of change is going to require many of us working together over an extended period of time, but that absolutely doesn't mean it's not possible. And it's the reason why I joined PLOS seven years ago, an organisation that exists to advance equitable open science for the benefit of everyone, everywhere.
We've never been driven by tradition, but by a willingness to question the status quo and an eagerness to explore the ways in which we can change things for the better. Over recent years, we've moved from a narrower focus on open access itself to a focus on building the full open science ecosystem. Because open has to be about more than just reading an article. It's about providing the right context to be able to understand it, and the information and resources to replicate it, in particular access to the underpinning components of the research itself. But just as importantly, our focus is also very squarely on equitable participation in knowledge creation and sharing. We need to intentionally address the power imbalances and the legacy of devaluing knowledge from different groups of researchers and different communities. At its core, open science is really about culture change. It's about changing the way researchers work, but not simply for its own sake. Audrey Azoulay, the UNESCO Director-General, has described open science as 'the science we need for the future we want'. So how does it work? Open science produces work that's more reliable, because we have access to more than the article itself. We can see the underlying data, the code, the methods, which makes it possible not only to check those but also easier to replicate the work. And just as it did during COVID, it makes science faster and more efficient, because the outputs themselves are reusable, and because we don't waste time on dead ends when we share the studies that didn't work as well as the ones that did. And broader inclusion across different research communities ensures that the research meets the needs of more communities and is more trusted through its transparency. But open science alone can't deliver all of this.
I've touched on this before, but there are some really key system changes that have to happen in parallel for us to realise these benefits. The first is that we need a different and better process for the evaluation of research, one that understands that science is an evolving and self-correcting process, not just a set of facts. We also need different and better incentives and rewards for researchers, ones that reward and value collaboration and teamwork alongside the traditional ways in which we've tended to evaluate research. And finally, we need models that are equitable, inclusive and sustainable. One of the overarching lessons from the work that PLOS has done over the years is that all too often we're dealing with challenges that aren't really technological in nature, so the solution isn't simply about building more technology or more new systems. There's one really good concrete example of this. A lot of our internal research shows that scientists are pretty happy with the systems and options available to share their data, things like Figshare and so on. But most of them don't share their data. So the solution here clearly isn't about building more systems. I think there are a number of things going on: the process of sharing is messy, it takes time, and some scientists aren't confident in how to do it. But as we've already learned, if there's no reward, busy scientists simply aren't going to take the time to engage in these behaviours. Much of our thinking at PLOS about this behavioural change has been informed by the classic theory of diffusion of innovations developed by Everett Rogers back in the early sixties. Essentially, the theory states that the adoption of any new idea, behaviour or product doesn't happen simultaneously across a population: some of us are more rapid adopters and others take longer to adopt.
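Rogers' adopter segments, and the S-shaped adoption curve they sit on, can be sketched in a few lines of Python. The segment shares below are the canonical percentages from the diffusion-of-innovations literature; the logistic curve parameters are purely illustrative and not drawn from any PLOS data:

```python
import math

# Rogers' adopter categories with their canonical population shares
# (2.5% / 13.5% / 34% / 34% / 16%), from the standard model.
CATEGORIES = [
    ("innovators", 0.025),
    ("early adopters", 0.135),
    ("early majority", 0.34),
    ("late majority", 0.34),
    ("laggards", 0.16),
]

def cumulative_adoption(t, midpoint=5.0, rate=1.0):
    """Logistic S-curve: fraction of the population that has adopted by time t.
    midpoint/rate are illustrative parameters, not fitted values."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

def category_at(fraction):
    """Which adopter segment the marginal adopter belongs to at a given
    cumulative adoption fraction."""
    cumulative = 0.0
    for name, share in CATEGORIES:
        cumulative += share
        if fraction <= cumulative:
            return name
    return "laggards"

if __name__ == "__main__":
    for t in range(0, 11, 2):
        f = cumulative_adoption(t)
        print(f"t={t:2d}  adopted={f:5.1%}  marginal segment: {category_at(f)}")
```

The practical point of the model is visible in `category_at`: the first 16% of adopters (innovators plus early adopters) behave very differently from the long tail, which is why a portfolio approach targets them separately.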
And so we've been using that to inform a portfolio-based approach, where we focus on innovators and early adopters for some things. But if we really want change to happen, we have to think about how we bring along that bigger long tail of people who are more sceptical about change. And we've found that offering new features on our journals, a format they're much more comfortable with, is likely to be more successful. One brief recent example of that is an experiment we did with code sharing in our journal PLOS Computational Biology. We spoke to a lot of researchers in the field, and we heard that code sharing signals confidence and integrity on behalf of the authors, but also that authors were willing to open doors for their fellow researchers by enabling them to reuse their code and scripts in ways that would help them advance their own studies and ultimately the field as a whole. So we saw an opportunity to shift behaviour here by piloting a new policy which required authors to make public all of the code associated with the results of their article on publication. We had a pretty good rate of code sharing before this policy was introduced, just over 50%, but after we introduced the policy we saw that number jump to 90%. And that's been enough for us to make this a permanent policy. Another area we focus on is how we can use data to understand progress towards the open science practices that we're targeting, and to understand how adoption differs across different groups of researchers. One example is the one you see on the slide here. In partnership with DataSeer, we've developed a novel AI-supported information source to meet those needs, which we've called our Open Science Indicators. Those are forming really useful business intelligence for us internally at PLOS.
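To make the idea of indicator-based monitoring concrete, here's a minimal sketch of how per-quarter rates of open science practices could be computed from article-level records. The record schema, field names and values are invented for illustration; this is not the actual DataSeer/Open Science Indicators data format:

```python
from collections import defaultdict

# Hypothetical article-level records: one dict per article, flagging
# which open science practices it exhibits. Invented sample data.
articles = [
    {"quarter": "2023Q1", "data_shared": True,  "code_shared": False, "preprint": True},
    {"quarter": "2023Q1", "data_shared": False, "code_shared": False, "preprint": False},
    {"quarter": "2023Q2", "data_shared": True,  "code_shared": True,  "preprint": True},
    {"quarter": "2023Q2", "data_shared": True,  "code_shared": False, "preprint": True},
]

def indicator_rates(records, indicators=("data_shared", "code_shared", "preprint")):
    """Per-quarter share of articles exhibiting each open science practice."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for rec in records:
        q = rec["quarter"]
        totals[q] += 1
        for ind in indicators:
            if rec[ind]:
                counts[q][ind] += 1
    return {q: {ind: counts[q][ind] / totals[q] for ind in indicators}
            for q in totals}

rates = indicator_rates(articles)
```

Aggregations like this, run quarterly over a large article corpus, are what turn article-level flags into the trend lines an institution or publisher can act on.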
But we've also made the indicators available to support organisations beyond PLOS who may have similar needs to understand how open science practices, things like data, code and preprint sharing, are evolving in their institutions. The dataset currently includes an analysis of over 100,000 articles; those are PLOS articles, but also a growing sample of other publishers' content. We're updating it every quarter and, as I said, using it to monitor trends and assess impact. Fairly recently we began a pilot partnership with a number of UK institutions through the UK Reproducibility Network to co-develop and test these indicators of open research, to see if they help us understand the effects of institutional open science policies and can support improvements in practice. But I want to turn now and spend some time on issues that for PLOS are really critical to the way we're thinking about the evolution and development of open science. Far too much of the discussion about open access, and now open science, has framed problems in the context of the global north. But these are global problems, and that's why we've placed a pretty heavy emphasis on supporting models that will expand global recognition and inclusion. There was a study released a couple of years ago by the International Network for the Availability of Scientific Publications (INASP), which set out to understand the challenges and opportunities for open access in lower- and middle-income countries. What it found was that while stakeholders clearly believe they benefit from open access in many ways, particularly and obviously through access to the content itself, the picture that emerges is a lot more complicated.
There's a clear conflict between the desire to strengthen their own local platforms, which often better serve local needs, and the pull to play a game in which all of the norms have been set in the global north. One way we've been tackling these issues has been a new policy we introduced to improve transparency in the reporting of research conducted in other countries. This is a practice that's become known as parachute science, and what it describes is how researchers, often from wealthy Western institutions, drop into foreign communities to carry out their fieldwork. They spend a lot of time gathering data there and then head home without much engagement with local researchers, and particularly without acknowledging the contributions of those local researchers. It's a disparity that's been reported for as much as a couple of decades at this point. One study I read showed that only six and a half percent of research articles in general medical journals had a co-author from the country in which the study population lived. So what we've done to try and address this is to introduce a policy that requires information about ethical, cultural and scientific considerations and about local engagement and authorship. We share all of that information with our editors and reviewers and ask them to make sure the research meets our stringent standards for research integrity when they're reviewing the paper. It was a policy that we developed in close collaboration with those global research communities, and we're hoping it helps not only to improve awareness of these issues but also to inform further supporting policies in different areas. And I'm delighted to say that a couple of other publishers have approached us to see how they can adopt this policy and roll it out for their journals.
But perhaps the most important area of focus for us over the last few years has been trying to create business models that move us away from APCs. Many of you will know that PLOS was one of the first open access publishers to introduce APCs 20 years ago. Back when PLOS was launched and we were heavily focused on the biomedical sciences, the idea of charging authors a fee to publish seemed pretty fair and reasonable. A lot of those authors held huge grants in some cases, and if this nominal fee meant anybody could read and reuse the research, it felt like a price worth paying. But with the benefit of hindsight, I think we failed to anticipate how successful APCs would become, how commercial publishers would exploit them, and how inequitable they would ultimately become. So we feel that continuing down this APC path will further disenfranchise a lot of researchers, especially in the global south, and risks deepening the inequality that we're already seeing. And so we've chosen a different pathway, and over the past couple of years we've been piloting a couple of new models. We launched Community Action Publishing in 2021, which was really focused on demonstrating that publishing highly selective journals with pretty high rejection rates is possible without some of the eye-watering APCs you see from certain publishers. In this model, the cost to publish is assessed on the needs of the corresponding author as well as all of the contributing authors, which means that the cost of publishing is distributed more equitably across all of those institutions. The institutions commit to an annual flat fee, and that ensures their researchers receive unlimited publishing in our highly selective titles.
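The cost-distribution idea behind Community Action Publishing can be illustrated with a small sketch: an article's cost is split across all author institutions rather than billed solely to the corresponding author's. The weighting scheme here (the corresponding author's institution carrying twice the weight of a contributing author's) and the figures are invented for illustration, not PLOS's actual pricing formula:

```python
from collections import Counter

def allocate_cost(total_cost, corresponding, contributors, corresponding_weight=2.0):
    """Split an article's cost across author institutions, giving the
    corresponding author's institution a larger (assumed) share."""
    weights = Counter()
    weights[corresponding] += corresponding_weight
    for inst in contributors:
        weights[inst] += 1.0  # one unit of weight per contributing author
    total_weight = sum(weights.values())
    return {inst: total_cost * w / total_weight for inst, w in weights.items()}

# Univ A is the corresponding institution (weight 2); Univ B has two
# contributing authors (weight 2); Univ C has one (weight 1).
shares = allocate_cost(3000, "Univ A", ["Univ B", "Univ B", "Univ C"])
```

The design point is simply that the per-article burden scales with participation, so a richly collaborative paper no longer lands its whole cost on one corresponding institution.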
The other thing we do here is cap our margins, so the more institutions that join this effort, the lower the costs become for everyone. A couple of years ago we added our Global Equity model, which was trying to solve a slightly different problem. This model also provides a pathway for institutions to cover the cost of unlimited publications for their authors and, again, to eliminate APCs. What makes it different is that the fees for each institution are adjusted by World Bank lending group to reflect the regional economy in which the institution is based. So the Global Equity model systematically acknowledges the economic differences that exist and offers an appropriate solution to authors in those regions, so that they don't have to ask for assistance; they're just included automatically. We do still offer a robust publication fee assistance programme, but ultimately we hope that as the institutional partnerships expand, it won't be the only recourse available to so many researchers; it'll just be a backup option for those who aren't covered in other ways. But unfortunately, we can't change this landscape on our own. It really needs engagement from the other key stakeholders who control the funding flows in the scholarly communication ecosystem. And so that's why we've partnered with cOAlition S and Jisc to launch our Beyond Article-Based Charges initiative. We've formed a multi-stakeholder working group with five main goals, which include considering how APCs can be replaced by more equitable payment models, exploring how funds made available by research funders can be used to support open access in non-APC models, and making sure we understand the possible unintended consequences of those changes. I also wanted to briefly address another key unintended consequence of APCs.
And that's the growing challenge to research integrity that unfortunately we read about all too often in the press these days. Open access itself and APCs are certainly not uniquely responsible for the problems we see here; the publish-or-perish culture has an awful lot to answer for. But I think it's also true that the current model, built on article publication, is really incompatible with an ecosystem built on open science principles, because it relies on and further embeds the article as the unit of value and the unit of reward. What that creates, and what we've seen, is an article growth economy where the push is simply to publish more and more. This article growth economy is unhelpful for openness, and I think it's ultimately unhelpful for science. And as a result, it's also become something of an accepted truth for many publishers that because APC-funded open access has a lower per-article profit margin, particularly compared with the subscription model, it has to be a volume business. That's created a clear incentive for publishers to increase profit by pushing for article growth, which has exacerbated the existing pressure on researchers to publish more, and it's also added significant pressure to a peer review system that was already pretty strained. What we've seen over the last year or so, with the fallout at Hindawi and the recent layoffs at Frontiers, just highlights the fragility of an open access business model in which all of the remuneration, if you like, is based on the volume of output. But I would also note that I don't think publishers are the only ones who bear responsibility here. There are others, including policymakers, funders and, yes, even librarians, whose heavy focus on cost reduction makes it really challenging to invest in increasing rigour and editorial oversight.
Now, I want to be really clear that I'm not letting certain publishers off the hook for inflated APCs. From PLOS's perspective, we're investing more and more in research integrity because of all these challenges, and in expanding the open science features of our journals, and there's a cost to both of those things. But beyond the cost issue, there are two real concerns I have about these challenges. The first comes back to something I was talking about earlier: the constant drip, drip, drip of research misconduct, retractions and so on further undermines trust in science, and makes it much easier for those with ill intent to weaponise it, if you like. Secondly, I think it starts to undermine the shift to open. Over the past few weeks, I've heard a couple of people say to me that they felt Wiley had been unfairly punished when its stock price dropped after the scale of the problems at Hindawi came out, because in retracting the articles all they were doing was the right thing for the scientific record. Personally, I think that's something of a distortion of what actually happened; doing the right thing would have involved much more rigorous editorial oversight so they never got to that point. But I've also heard other people suggest that, given the ways in which open access has created this profiteering incentive, we should just go back to subscriptions. So while these issues are complex and, as I said, certainly not simply about open access, I think we have to acknowledge and understand the connections between these things and the wider goals we're trying to achieve. I want to spend my remaining time looking at how we move forward. So let's start with publishers. I don't think the evolutionary change we've seen is going to get us where we need to be fast enough.
At the STM conference I was at in October 2023, there was a panel of CEOs from some of the largest commercial publishers, and even they had to agree that scholarly publishing really hadn't changed that much as we've moved into the digital world. There are of course ways in which it has changed. Publishers have innovated in some ways as we've moved on from print; there are lots more features available on our digital platforms that weren't available in print. But the problem is that all of this exists within exactly the same paradigm. We're still bound to a traditional model that was developed for print hundreds of years ago. And even at PLOS, while I'm pretty proud of what we've achieved in our first 23 years, our role has always really been that of a catalyst, to demonstrate what's possible for everyone. And so over the last year or so we've been taking a deep dive to inform our next leap forward. We've spent a lot of time talking to four key groups of stakeholders: researchers, university librarians, senior university administrators and research funders. In our interviews we found some unifying values that people still appreciate from publishers. The key one is vetting and quality control; in other words, it's really important for them to know that, ultimately, 'someone I trust is telling me that this is worth my time'. Peer review sits at the heart of that, and while the ways in which we carry out peer review are, I think, changing, peer review remains an important part. A related part is the curation and discovery layers added by publishers, which again help to filter out all of the noise in our online world and surface content that's really worth the reader's time. But we also heard pretty strong convergence across those stakeholder groups about the ways in which the current publishing system is frustrating, or at times even failing, all of those core stakeholders.
I'll acknowledge that our sample was probably a little biased, because we focused on conversations with people who were actively engaged in this transition to open. But there are a couple of findings I'll highlight here. First, from the researchers we heard frustrations on multiple levels, but a lot of them centred on this core issue of being required to publish in ways that didn't really align with their core values, and in many ways didn't always serve their research well. In other words, they felt forced to publish in these high-impact vanity journals, if you like, because that was what was required to secure jobs and funding. And then there was a frustration we heard across many stakeholders, something I touched on earlier: publishing processes haven't kept pace with modern research or with modern technologies. We still have a very traditional, fixed idea of publication that doesn't account for the pace at which modern research moves and the need to update it in the ways that we often do. And so at PLOS, we've come away from this with a clear vision for a much more radical reframing of how research is shared, built on the principles of open science: on rigour, on openness and on equity. It's something we're now starting to make concrete as we move into the next phase of developing a model that incorporates the following principles. We're looking to shift away from the article being the sole centre of attention. I don't think articles will disappear entirely, but we want to make sure that we're surfacing other research objects, and that those are appropriately shared and assessed at different points in the research lifecycle. We're also interested in demonstrating that the article itself doesn't have to be immutable: conclusions may shift over time, and we need to find ways to take account of that.
And we want to experiment with the ways in which peer review can evolve, away from being a binary decision to accept or reject a publication towards a much more nuanced assessment that's more useful to everyone. All of that needs to be underpinned by equitable and sustainable business models. And finally, I want to tie all of this back to some direct implications and questions for research libraries as I see them. My first question brings us back to business models. European library budgeting and negotiating is still pretty heavily linked to the legacy of APCs and the cost-per-article model. And I know there's been some discussion of the recent Jisc review throughout the conference, and clearly that's found a number of problems with TAs in spite of the growth of open access output. As I read the report, there were a couple of findings that hit me. One was a recent resurgence in, or at least retention of, closed articles, and the fact that the UK's proportion of hybrid articles is more than double the proportion in the rest of the world. But the key one for me was that, based on the journal flipping rates observed between 2018 and 2022, it would take at least 70 years for the big five publishers to flip their TA titles to fully open. And so the report's recommendations include a renewed focus on equity and non-APC models. There are a number of models emerging there, not only the ones that PLOS has been developing but Subscribe to Open and, in some cases, diamond open access. I say in some cases about diamond because I think there's still a significant question mark for me over the sustainability of diamond models over the long term, unless we're able to deliver a much more radical shift in funding flows. So, for libraries, a lot of the focus has been on working with APCs and switching administrative infrastructures to support APCs and TAs.
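The 70-year figure from the Jisc review is a straightforward linear extrapolation, and the shape of the arithmetic can be sketched as follows. The example numbers are invented purely to show the calculation; they are not the review's actual title counts or flip rates:

```python
# Back-of-the-envelope version of a flip-rate extrapolation: if a
# publisher converts `flips_per_year` hybrid titles to fully open
# access each year, how long until the remaining portfolio is flipped?
def years_to_flip(remaining_titles, flips_per_year):
    """Linear extrapolation of the time needed to flip all remaining titles."""
    if flips_per_year <= 0:
        return float("inf")  # no observed flips: the portfolio never converts
    return remaining_titles / flips_per_year

# e.g. 2,000 hybrid titles still under TAs, flipping at 28 titles/year
estimate = years_to_flip(2000, 28)  # roughly 71 years
```

Even generous assumptions about the observed flip rate produce multi-decade horizons, which is what makes the extrapolation such a striking argument against treating TAs as genuinely transitional.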
But as more publishers come into the marketplace with non-APC-based models, I think there's some struggle to turn systems and processes in the direction of those more inclusive models. And that takes me to my second question, which is: how are we really going to deliver on global equity? As gold open access has become the dominant model, I think the inbuilt inequities in the system have grown exponentially and shut out large communities, especially in the global south. We've seen waivers emerge as a solution, but they're not only unsustainable for small and mid-sized publishers like PLOS, they fail to meet the equity standard. And what do I mean by that? Simply put, I don't think they address the systemic structures that lead all of these authors to need waivers in the first place. Waivers are structured to ask those most in need of systemic change to jump through a whole series of hoops that those in privileged communities never see. Back in the subscription model, the lack of transparency around pricing meant it was easy for publishers to roll all the costs associated with publishing into a single opaque price. That included the cost of reading for those who didn't pay a subscription, and the cost of publishing for those who couldn't afford to make a direct contribution. But as we've locked into this transactional, per-unit pricing model, that's been blown apart. Understandably, often due to institutional and budget pressures, libraries have been really focused on value for their own institution. But the old model, in which richer institutions and countries ultimately subsidised reading and publishing for those less well off, has gone, and we haven't yet found anything to replace it. My final question is about the role of research libraries in planning for and shaping this open science future. I think there are some more transactional answers to this question.
The work that we see libraries doing around supporting open data policies and data management, for example, is a great example of that. But what I'm really thinking of is how libraries need to be a key player in making sure that in 20 years' time we're not having the same conversation about open science that we're now having about open access. And this was a real concern that we heard across all of those stakeholder groups I was talking about just now in our research. There was a real fear that as we move towards open science, it's going to lead to further land grabs, if you like, by large publishers who are seeking to control yet more of the research enterprise. And so I think we really have to have our eyes on the long game here. Libraries are key partners in advocating for a shift from commercial control of scholarly communication to models that are better aligned with the core values of research and science. I think you're all in a position to be advocates at your institutions and beyond in a number of ways: expanding your role upstream in the research life cycle, advocating for responsible research assessment initiatives that reward open practices, and supporting conversations about open research practices too. As I mentioned at the beginning, I've been with the MIT Libraries for the first part of this week, and I'm sure some of you are aware they're moving forward with a pretty bold move in this area. They've now been out of contract with Elsevier for four years, and they're preparing to reinvest those savings in line with the goals of the MIT Libraries framework, which seeks to, and I'm going to quote this directly to get it exactly right, move the entire scholarly communications landscape closer to the scholar-led, open and equitable environment that promises to enhance opportunities for collaboration and speed in the accumulation of knowledge and insight.
And so I fully understand that not all libraries are in the same position or have the same privilege that MIT does. At the same time, I think this is a pretty courageous initiative, one example of what might be possible with a more radical rethinking, and something that will hopefully pave the way for others. As I've noted throughout this talk, the scholarly publishing industry itself, in my view, has to embrace more radical change. And that ultimately means letting go of the distorting incentives of profits and prestige. I think we're now at the perfect time to build on the progress that we've seen in open access and open science through the COVID years, and to think about how we deepen interconnections and align our infrastructures and our policies. The pandemic showed us what can be possible when we move together with real clarity and singular purpose. And as I've been clear today, publishers have an opportunity and a responsibility to act at this moment, but it's a responsibility we share with other stakeholders in the system. In my mind, libraries have a broad scope of skills and expertise, and, from the librarians I talk to, a real passion to partner in the ambitions I've outlined today. So I think that if we're all able to act boldly together in this way, we can ensure that the legacy of the pandemic isn't just about what science can do, but is really a reminder of how science should be done. And with that, I will close and stop sharing my screen, and we can open up for questions. Thank you, Alison. Gosh, I've written so much down. Thank you for allowing us to end this conference with such a breath of fresh air, particularly since the last session I was listening to talked about things being a bit gloomy, and it was very much about transitional agreements.
And, you know, it's just so nice to be reminded and told about the PLOS guiding principles and the interventions and the policies that you've put in place. I could have done without the three questions at the end, because those brought back my anxiety. Sorry. No, that's fine, because they're things that we've been asking ourselves throughout the conference. On transitional agreements, yes, we need to walk away. What the "we" is, is interesting, as is how we walk away and how you define that: whether it's consortia, individual institutions, a national level, or an international level. But we also need to think about what comes next, because on a purely practical level we don't all keep the budgets that we might save. It's not that easy to reinvest. And these are all the conversations that we roll around all the time. But I'm already going down into the transition agreement issue in the UK, and actually I think it's important that we try to think globally, as you mentioned in your presentation. So that's great, thank you. There are a couple of questions that have come in, so I must ask those first. One of them is around the comment you made earlier in your session around trust in science. Let's get the AI question out of the way. With the rise of generative AI, and some fairly high-profile examples of it clearly being used in journals, which I think you referred to, what risks do you think AI poses to this trust in the robustness and reliability of science moving forward? Yeah, I think that's a really good question, and it's one that we are spending a lot of time talking and thinking about at PLOS. I think there are two sides to the point here, and that's the challenge.
So we have adopted the policy that was put together by the Committee on Publication Ethics, which most publishers belong to. We allow the use of AI by authors, but it has to be declared, and the AI cannot be an author, because there's no accountability there. And one of the reasons why I think it's really important to allow its use is for authors whose first language isn't English. It really helps to level the playing field when it comes to review for those authors. So I think it's important not to lose sight of the fact that there are some positive aspects to what AI can do for us in the publication process. That said, most people have probably seen the picture of the rat and all of the other things that have happened recently with AI use in papers. And I think what's happening there is ultimately something of an arms race. We're trialling a number of different AI tools in our publication ethics teams that screen papers when they come in, so that we can get a better idea of whether AI has been used. There are some new AI tools that look at image manipulation, which is a big issue in some of the biomedical sciences. So there are ways in which we can start to screen some of that out upfront. But I do think there's a real risk there, and it's one of the things that is really going to require publishers to invest more money in research integrity and publication ethics. We added four people to our team this year, which, for a publisher our size, is not insignificant. And there are quite a few cross-industry initiatives on this as well. But yes, I think it's a risk for us in scientific publishing, as it is much more broadly, in understanding the trustworthiness of the content that we're looking at. OK, so I wonder if I could just ask you a quick question.
I realise you used to work at the University of California Press, so this is a bit arts and humanities. One of the things that's just come out here in the UK is a UKRI consultation about the REF. And one of the areas of disquiet comes from our arts and humanities colleagues, about how their material might be used in an open world. It's coming out quite strongly at the moment in terms of a distrust of open. And I just wondered if you had any comment on that. I know you come from a science background, but is there anything you have come across in terms of that argument against open science? Yeah, I have definitely heard that argument, and it's come up for us at PLOS as well recently. The New York Times is suing OpenAI over ChatGPT, and one of the things we discovered in the filing of that lawsuit is that PLOS content is the seventh largest body of content in the training data for ChatGPT. And I had kind of mixed feelings about that. In some ways, thank goodness it's high-quality research content rather than most of what's on the internet. But the question we've then run into is one of attribution, because we use a CC BY license, and that requires attribution. What does attribution mean in a large language model? There was some interesting conversation about this at the MIT Libraries meeting that I was at earlier this week, where they were talking about their collections and what their responsibility was in terms of allowing their collections to be used or not in large language models. What do citations look like? Is it a responsible use of content? And how do you define what a responsible use is? So I think there are still a lot of open questions about that, and I don't think there's a single one-size-fits-all answer. It was interesting.
The other thing during that meeting was that we had a panel of faculty to talk about how the library was doing and so on. There were a couple of people there, one in particular, who, yes, is a big advocate of open science; she's on the Center for Open Science board with me. But her view as a scientist was that she realised that by making all of her content open, there would be some misuse of it in an open world, but she'd far rather that than keep everything hidden. Not everybody is going to feel that way, and I think we're still so early with these systems that there are a lot of unanswered questions. Yeah, I didn't expect you to be able to answer that one, but that's an interesting insight. OK, right. Let me encourage anyone to put questions into the Q&A, please, or indeed any comments, or to come up and ask or make a comment with Alison here online. OK, so you did bring this up, so I think I know what the answer is, but I'll let you answer it far more eloquently. Is there a danger that the preprint platforms offering new services, such as journal-agnostic peer review, will be bought up by the big publishers or investment entities? Yes. Right. I mean, we've seen too much of that, right? One of the things that sometimes frustrates me about our little scholarly communication world is that there are so many relatively small organisations, all of whom operate on a shoestring budget. Take something like ORCID. ORCID is critical infrastructure for many of us, and they operate on an absolute shoestring. Ultimately, too many of these organisations, because they're not properly funded, end up being acquired. And I do think that there has to be a better model for us collectively to think about funding these organisations that are really critical.
And that whole issue of infrastructure is one where we're making some progress, but I think it's been too slow. The big publishers, of course, are anything but small. That links to diamond as well, because the sustainability of those structures is one of the barriers to investing further in diamond. Okay, we have some questions from Michael, who thanks you for such an eloquent articulation of the problems and potential solutions. There are many different threads that UK academic libraries can pick up, individually and collaboratively. What do you think should be our number one priority? I think probably the number one priority is figuring out where you go from TAs. Because to me there are just so many problems with the APC-based model, whether it's cost, whether it's equity, whether it's driving a business that leads to research integrity problems. It's fundamentally flawed. And so figuring out what the answer there is is key. The other thing I would say is that I don't think there should be one answer. One of the problems we've had is that we often try to answer all of these problems with a single answer. The right model probably isn't the same for humanities monographs and science journals and other things. At the same time, multiple different models is impossible for anyone to manage, so there's got to be some alignment. And that's why I think the working group that we have with cOAlition S and Jisc is a good example of the community trying to come together to figure that out, so that we do it in a way that's manageable for everyone. As I said earlier on, I think that "we" is a difficult one. What is "we"? What do we do?
What's the way of doing something boldly together? Is it a national institution? Do we do this via SCONUL, or do we do this in our consortia? Or is it higher than that, at a UK level? It's a difficult one to work out, because it does take a certain amount of courage and bravery, and being able to be aligned as well. And I must say you did mention values, and I think values are incredibly important to help us stop being so obsessed with the financial side of it, because I think it is about going back to what we are as libraries. That's just my personal view. We've got loads of questions coming in, so let's go. So, Martin asks: are PLOS and others engaging with the likes of QS and the THE rankings machinery to push for less reliance on citations and commercial databases? They both use Scopus at the moment. Yeah, I'm not aware that we're specifically involved in that one. There's a lot of work that we do; at PLOS we are a publisher, but we're sort of not just a publisher. We think about our work in three buckets: publishing, policy, and trying to build open science practice. In the policy bucket, we definitely engage in a number of areas with the issue of research assessment. Here in the US, the National Academies has a roundtable that we're quite involved with that's trying to shift the way in which universities assess research. So there are a number of areas we are involved in; I don't think we're involved in this particular one. But yes, I would agree that the rankings in general are a kind of problem, just like the impact factor, right?
And as Suzanne says, our universities are so pressurised globally by the rankings, aren't they? Do you consider them to be impeding open science? I think potentially they do. It was interesting: at the first MIT Libraries meeting I went to, a couple of years ago, Chris Bourg was saying that one of the things that amazed and frustrated her when she first moved from Stanford to MIT was the amount of time that MIT spent in meetings trying to figure out how to get above Stanford in the rankings. Yes. They were ranked something like three and four, and they were spending an enormous amount of time trying to figure this out. So, yeah, I think it is an issue. We all have benchmark institutions, and I spend a lot of time looking at graphs of how our institution is doing against our benchmark institutions, and lots of people spend a lot of time getting that data. And it drives so much of what we do, including our research and where our money goes. Yeah. Okay. I'm going to ask Jeremy's question: do you think it's possible to create a set of values-driven criteria that would allow libraries to be clear about which publishers and partners we would work with, and which would not be viewed as acceptable? I think it probably is. I mean, it's complicated, and I don't think it's always as simple as for-profit versus non-profit; the world is a little bit more complicated than that. But there are ways. For example, when we were working to choose the partners who would work with us in that Jisc and cOAlition S stakeholder group.
There was a clear set of criteria developed for people to be part of that, in which they had to have made a clear commitment to some of these things. So I think it's possible to do that. I've actually been talking to a couple of librarians over here who have been so concerned about research integrity that they've been asking whether we could develop a measure of publisher integrity. Yes. I don't know whether publishers really are doing all of the things they should be doing. I mean, no journal is ever going to have zero retractions; there will always be the occasional retraction, right? But the number shouldn't be that big. We were just doing an evaluation of PLOS ONE recently, and it looks as if we have more retractions there, but you have to look at the size of the journal: as a percentage of articles retracted, it was 0.008%. There's data like that out there that I think could help with those kinds of measures. So I think it's an interesting question, and I can see why Jeremy's asking it. It's something that we potentially need to think about in terms of how we choose; it's used within procurement anyway, generally, isn't it, like you're saying? So I think it probably is possible, because we hold values so strongly in libraries, and it's one thing that we could use alongside the financial side of things, definitely. I'm going to ask you a big question, and I wonder if this might be our last question. Yeah, it can be. Let's ask the big question: is a truly open science future realistic? I think there are always going to be exceptions, so I don't think we're ever going to get 100% open science across the globe. One of the reasons for that right now, if you look at what's happening geopolitically, is that we're moving towards a potential science nationalism. Just looking at the US-China relationship is a good example of that.
The return of a Trump administration would definitely lock down a number of things there, and there are things happening in other parts of the world as well. So I think there are definitely issues that are well beyond our control and remit that act against that. But I do think there are significant moves. We've just finished the Year of Open Science here in the US; it's been a big priority for this administration. Within the European Union, there's still a pretty heavy focus on the value of open science. And I think they're understanding that it's not simply about all of the good things that we've been talking about, the values of science. It's about good science: good science that we can trust, open science that others can build on, which allows for innovation and economic development. So it's not simply a values conversation; there's a pretty pragmatic economic conversation about the value of open science as well. And I think that's what places like the European Union have really picked up on in their reason for pushing it harder. So, yes, it would be hard to get everything to be totally open science, but I think we can make a lot of progress from where we are right now.