We are moving on at this point to the role of standards, policies and mandates in driving change. As we've heard today, and as Brian pointed out this morning, there are some risks associated with this if we move too quickly, and we've heard a lot about the importance of the other levels of the pyramid in driving the cultural change so that mandates will be accepted more readily. That said, I think that making it required is where we need to head, and different approaches work for different personas at different points in the life cycle.
So any of the things that we're doing that might be a little more out there, a little more cutting edge, a little more innovative, we're really focused on those early adopters. Whereas for those who may not have engaged in open science behaviours before, or are more skeptical of change, changes associated with journals, a format they're more comfortable with, are likely to be more successful. And I think somebody mentioned, when we were talking about preprints this morning, the opportunity when you submit an article to PLOS to deposit your preprint at the same time. It just makes it easier for people who haven't been involved before. But we've got a great panel this afternoon to explore these issues in more detail and really think about what is important in making it required from different perspectives. I'm not going to read through everybody's bios because you all have those in the programme. But David Mellor from the Center for Open Science is going to start us off. Simine Vazire from the University of Melbourne will follow. And then we'll hear from Phil Bourne at UVA. And as with the past sessions, we'll have time for questions at the end. So, over to you, David.

Thank you very much, Alison. So policy reform is the capstone of our theory of change. Policy reform is necessary for starting and scaling, and especially for sustaining, culture change, but it's far from sufficient. Without active investment in everything we've talked about today, including infrastructure, without integrating these practices with existing workflows to make them easy, and without building norms and communities to establish these standards across disciplines, behaviour change is likely to be unsustainable. This is because the burden of monitoring compliance and enforcing new behaviours can be extremely high, especially for behaviours that are perceived as purely administrative burdens. So consider, for example, the case study of requiring clinical trial registration.
So this bold policy action addressed a critical need for connecting patients with ongoing clinical research. It's been an effective tool for increasing transparency and rigor, and it's associated with better outcomes and better standards in research. In many ways it's been an enormous success. At the same time, it's easy to recognise that there are gaps between the ideal policy implementation and an implementation that falls a little short of its expectations and aspirations. For example, metascientific investigations of clinical trial registration over the past 20 years have found a substantial portion of retrospective registration. Furthermore, we see that only about a fifth of trials that are due for results reporting have actually done so. Even though such underreporting can result in fines of up to $10,000 per day, and notices for non-compliance have been issued since about 2021, non-reporting continues. Currently the unfortunate situation is that the threat of shame is the only force being leveraged to address that, and let me be clear: we don't want that situation to be the norm. Finally, we see that even in the best of circumstances, when outcomes are published and reported, there are a substantial number of switched outcomes. Even when outcomes are reported in the registry, there are differences between what's reported in the registry and what is reported in the published literature. All of these lapses undermine the effectiveness of registration in improving credibility and trust. More generally, unfortunately, the process of clinical trial registration is seen as an administrative burden. There are teams of administrative staff who are responsible for filling out and checking all the boxes.
And that's unfortunate, because it's a missed opportunity for that point in the research lifecycle to be about designing good interventions and taking the time to create the analysis plan as an embedded part of the registration process. So what's the path forward? Well, it's a massive coordination problem. There are a lot of journals, and that's an understatement. There are a lot of funders, there are a lot of societies, and there are many universities. Each of them sets its own standards in its own way for what it expects its authors, grantees, researchers, members and staff to do. Each plays a very important role in this process, but getting them all to change in a coordinated way is a massive problem. If every policymaker solves open scholarship in their own way, it would bedevil progress, because researchers would be forced to navigate a chaotic, sometimes contradictory ecosystem. Moreover, each policymaker may only be able to influence a small portion of the research lifecycle. A funder could be very interested in the outputs of an ongoing project, the papers and results, but has very limited capacity to monitor outputs over time, especially for years-long studies. A journal could care very deeply about the design and the size of a study, but can only give an up-or-down assessment once the project is completed. And a university could benefit from more collaborative research, but it has no say in what gets funded or published, of course. The solution is a common framework that's applicable across many domains and many stakeholders, with the added benefit of consistency and clarity for researchers who are navigating this complex ecosystem. The TOP guidelines can provide that framework: an aligned framework that is flexible enough to adapt to these different domains and stakeholders.
It has to be specific enough to set very unambiguous standards and expectations for behavior and research practice. And it has to be progressive enough to create a durable, long-lasting roadmap for community growth and improvement over time. The TOP guidelines, the Transparency and Openness Promotion guidelines, provide that. The TOP guidelines consist of a framework of eight research practices for transparent and rigorous conduct of research, and policymakers can implement each of them at one of three levels of increasing rigor: roughly, disclosure, requirement and verification. So for example, authors can state whether or not the data underlying a paper's reported results are available; that's a data availability statement. They can be required to make data available as long as it's ethically and technically feasible to do so. Or there can be steps to computationally reproduce the reported findings prior to acceptance and publication. TOP provides similar standards for materials sharing, for analytical code, for the use of reporting guidelines, and for pre-registration. And finally, TOP provides specific guidance for replication studies and the use of registered reports. TOP solves several problems in the policy landscape, which I'll get to right now. In some fields, like political science and economics, we see computational reproducibility becoming more and more normalized, more and more mainstream. In some corners of the life sciences, disciplines use cell line authentication to verify that shared materials are actually the materials researchers think they're sharing. In several corners of psychology, pre-registration and registered reports are becoming more and more mainstream as ways to test how replicable key findings are. So TOP solves several problems in the policy landscape by providing a common structure across domains.
Here on the left, we see a histogram of journal policies evaluated against the TOP guidelines framework. The horizontal axis is an evaluation of journal policies: the higher the number, the more of these policies are being implemented by a given journal. The low numbers on the left side of the graph show that the vast majority of journals are implementing very few open science policies, zero or one. However, over the past three years, we've started to see change in journal policy evaluations over time. This figure shows a sample of 317 journals that were evaluated in early 2020 and again in early 2023. And 73 of them, represented by dots in the figure, showed changes in policy over that three-year period, 65 of which were in the correct direction, I would say; a few of them switched publishers and we saw a little bit of a drop-off there. So what I want to show you now is how policymakers themselves exist in the ecosystem and how we're thinking about policy change in a systematic way, just as we're thinking about culture change among researchers in a systematic way. First, the very existence of the TOP guidelines framework, which I'll just pass over, is what makes policy change possible. Then, for the rest of the theory of change, I'll indicate how we're making policy reform easy, normal and rewarding, and how eventually it could become a required framework for disseminating research. The TOP guidelines make policy change easy because they provide very specific language for author instructions, licensed in the public domain for maximum reuse. Additional language tailored for grantees and funders exists, and upcoming work for academic institutions will provide further adaptations, making policy adoption easier and more widespread. Growing community support also serves to make policy change easier.
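As a toy illustration of the kind of before-and-after comparison described here, one might tally the direction of policy changes like this (the journal names and scores below are hypothetical placeholders, not the actual 2020/2023 evaluation data):

```python
# Hypothetical TOP-style policy scores for a handful of journals,
# evaluated at two time points (illustrative values only).
scores_2020 = {"Journal A": 1, "Journal B": 0, "Journal C": 5, "Journal D": 2}
scores_2023 = {"Journal A": 4, "Journal B": 0, "Journal C": 3, "Journal D": 6}

# Journals whose evaluation changed between the two snapshots,
# mapped to the size and direction of the change.
changes = {name: scores_2023[name] - scores_2020[name]
           for name in scores_2020
           if scores_2023[name] != scores_2020[name]}

improved = sum(1 for delta in changes.values() if delta > 0)
declined = sum(1 for delta in changes.values() if delta < 0)
print(changes)             # {'Journal A': 3, 'Journal C': -2, 'Journal D': 4}
print(improved, declined)  # 2 1
```

In the real analysis, a decline like Journal C's could correspond to the publisher-switch cases mentioned above, where a journal's policies dropped off after moving to a new publisher.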
Years of effort by grassroots campaigns, which we heard about earlier, and by champions for reform have prepared the community for these changes. The Open Scholarship Survey that we've talked a little bit about before shows how the community is ready for many of these types of activities. Across most open science activities, we see strongly positive attitudes towards these behaviors even though recent activity on them is relatively low, so we hope that policy change will be a way to close that action gap. With TOP, we help to make policy change more normalised through a years-long signatory campaign, which has been supported by representatives of more than 5,000 journals and funding organizations. These signatories support the principles that underlie the TOP framework, and they engage with us and their peers in the process of adopting the policies. TOP signatories increase awareness and engagement as they go through the adoption process, and as lessons are learned from individual players, those lessons are shared to further ease adoption. Finally, TOP adoption is rewarded by a system that evaluates journal policies: TOP Factor. TOP Factor is an alternative to the journal impact factor, which we know is a highly dysfunctional metric. The journal impact factor counts mean citations to the articles a journal has published over the past two years. It was created with the best intentions: librarians built it as a tool to help them make subscription decisions, but it has morphed into a sole proxy for journal quality. Its misuse is widely seen as problematic, and it incentivizes reporting only the most publishable-sounding types of articles in a quest to get into the journals with the highest impact factor.
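For reference, the two-year impact factor is just a mean: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable items published in those two years. A minimal sketch (the function name and figures here are illustrative; the official calculation also depends on how the index provider defines "citable items"):

```python
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """Mean citations per article: citations received this year to
    articles the journal published in the previous two years, divided
    by the number of citable items published in those two years."""
    if citable_items <= 0:
        raise ValueError("need at least one citable item in the window")
    return citations / citable_items

# e.g. 1,200 citations in 2023 to the 400 articles published in 2021-22:
print(two_year_impact_factor(1200, 400))  # → 3.0
```

Note that nothing in this number reflects the rigor or transparency of any individual article, which is exactly the gap TOP Factor is meant to address.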
Despite these well-known problems, it's difficult to abandon the impact factor, in part because there's no alternative, no other metric for deciding what is quote-unquote good without deep knowledge of the discipline you're looking at. TOP Factor is such an alternative. It's a simple evaluation of journal policies, rating the degree to which they comply with the framework outlined in the TOP guidelines. Does a journal require data availability statements? That's one point. Does it require data sharing as far as ethically possible? That's two. And does it computationally reproduce reported analyses? That would be three, or actually maybe six, because that also requires code. So TOP Factor evaluates journal policies based on factors that actually affect scientific rigor. TOP Factor is transparent, with an open data set behind it, and it is a valuable alternative for understanding what journals are doing, because it assesses how their policies align with the ideals we've mentioned again and again. Finally, governmental and intergovernmental agencies have come out with stronger and stronger support for open science over the past several years, such as data sharing and open access publishing, and the TOP framework could eventually become a required element of scientific conduct and dissemination. Through the efforts of a lot of people in this room, and down Pennsylvania and Constitution Avenues, the US federal government has initiated a dramatic transformation in the landscape over the past several years. In 2022, the Nelson Memo from the Office of Science and Technology Policy directed all research-supporting federal agencies to grant immediate access to research outputs, such as papers and data sets. NIH has begun implementing that, and they've requested feedback on their proposed plan, for which we've expressed support.
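The point-counting just described can be sketched as a sum over standards, each scored by its implementation level. This is an illustrative simplification of the actual TOP Factor rubric, which covers more standards and edge cases, and the dictionary keys below are assumed names, not the official ones:

```python
# Score assigned to each implementation level of a TOP standard:
# no policy = 0, disclosure = 1, requirement = 2, verification = 3.
LEVELS = {"none": 0, "disclosure": 1, "requirement": 2, "verification": 3}

def top_factor_score(policies: dict) -> int:
    """Sum a journal's level scores across its TOP standards."""
    return sum(LEVELS[level] for level in policies.values())

# A journal that computationally reproduces analyses verifies both the
# data and code standards, so each scores 3, contributing 6 in total:
journal = {
    "data_transparency": "verification",  # analyses reproduced pre-publication
    "code_transparency": "verification",  # code required for reproduction
    "preregistration": "disclosure",      # authors state whether they preregistered
}
print(top_factor_score(journal))  # → 7
```

Because each standard is scored independently, the same scheme applies whether the policymaker is a journal, a funder, or an institution, which is what makes the framework portable across stakeholders.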
Also in 2022, the Government Accountability Office specifically called out open science practices, such as replication, pre-registration and registered reports, as part of the good practices that government agencies should encourage or incentivize. These next three years are critical for ensuring that these policy aspirations are converted into high-quality policy implementations that meet the promise we hope they'll meet. And I'd be happy now to pass it on to Simine Vazire, who will talk about her experience implementing policy in the journal ecosystem.

Thanks so much. So I'm going to argue today that optional transparency is not good, is not equitable and is not sustainable, and that we need to move faster towards making transparency required. Imagine that you're a journal editor and you're evaluating two papers. Paper one is transparently reported, so for each claim in the paper you have the information you need to poke and prod at it and see if it's a warranted claim. Paper two is not transparently reported but makes strong claims and asks you to take their word for it: that if you had the data, if you had the information about what decisions were made when and what else was tried, you would agree that their conclusions are warranted. And in paper one, you can tell that the conclusions actually aren't warranted. By looking at the information they provided to make their work transparent, you can tell that the results aren't robust: maybe they don't hold up to different specifications, there were undisclosed deviations from the pre-registration, things like that. So what do you do? One option is to hold paper one accountable for their overstatements and ask them to tone down their claims, or reject their paper because it's not well calibrated.
And then with paper two, give them the benefit of the doubt and say, okay, if you say so, if you say that's what you planned and you followed your plan, I guess I'll take your word for it. That would obviously be suboptimal, because you're punishing paper one for being transparent and therefore incentivizing people not to be transparent, right? But what's your other option? Your other option is to tell the authors of paper two, no, I'm sorry, I just don't believe you. That's not going to make you a very popular editor, and I just don't think it's possible. And moreover, if you're not going to believe people when they're not transparent, then you should mandate transparency, right? It's misleading to tell people, yes, pre-registration is optional, data sharing is optional, but if you don't do it, I'm just not going to believe you. So it just doesn't work to make transparency optional. And here by transparency, I just mean giving people the information they need to verify the claims. That can mean very different things in different fields. I don't mean more than that; I don't mean everything has to be made transparent. But I once heard a colleague who was arguing against transparency say that we can't require transparency, because transparency means asking people to give their critics ammunition. And I said, yeah, exactly. I think it's entirely reasonable to ask scientists to give their critics ammunition. And I think we really need to move towards mandating transparency as fast as possible, because otherwise we're creating a system that puts journal editors in a bind, but also people on search committees or award committees and so on. It's just impossible to fairly and equitably evaluate and compare research outputs, or researchers, if transparency is optional. As Christie put it earlier, and I think she put it really well, it's not a fair contest, right?
So the Templeton Foundation said, we're going to impose open science requirements on the different researchers who are trying to find evidence for their theories, in order to make it a fair contest. And the same applies to science at large: it's not a fair contest. In addition, making transparency optional makes life really hard for those who choose transparency, and it impedes progress on realigning incentives. We've heard a lot today about misaligned incentives and how researchers with open science values can get selected out of science. Keeping transparency optional exacerbates that. Just a few days ago, I read a paper by Tom Hostler called The Invisible Workload of Open Research. In it he argues, and I quote, that open research practices present a novel type of academic labor with high potential to be mismeasured or made invisible by workload models, raising expectations to even more unrealistic levels. I think this is very true, especially if transparency is optional. So again, imagine that you're on a search committee or a promotion committee and you have to compare one researcher who is transparent and has produced fewer outputs with another researcher who is not transparent and has produced more outputs, or maybe outputs in more prestigious journals. You're in a bind again, unless you punish the second researcher for things you can't show, right? You can't show that they weren't careful, that they didn't follow their plans and so on. But by giving them the benefit of the doubt, you're punishing the more transparent researcher. So if transparency is optional, researchers will be compared to their hyper-productive peers, and those who opt out of transparency will be able to crank out more papers. And as in the journal editor's dilemma, it'll be very hard to determine whether their outputs are robust or shoddy, so very likely they'll get the benefit of the doubt.
But if transparency is required, this problem of the invisible labour of open research practices is partly solved over time by comparing like with like. Now everybody has to provide the minimal information you need to verify their claims, and so it slows everybody down. Actually, I would argue that it doesn't slow us down; it just makes us feel like we're slowing down because we're publishing fewer papers. If you actually care about getting things right, I think it speeds things up. But the point is that this arms race of having to produce more and more papers, and being at a disadvantage if you're slower and more careful, will go away if transparency is required. Now, of course, I agree with Brian's answer to a question earlier: if we rush too fast to making things required, that change is brittle, and we can't get there without hearts and minds. And I've experienced that as a journal editor too. To be honest, when I choose which journal to submit my own work to, I don't put a lot of stock in the journal's official policies, because those are just words on paper, and if the editors don't have their hearts in those policies, they're not going to be implemented in a way that is true to their spirit. So I care a lot more about who the editors are, what their track record is on valuing open science, and so on. But I do think that the longer we wait to make transparency required, the worse we make things for researchers who opt in to transparency. I have one more point that I want to make, and it's also one I'll go on about at length in my talk on Wednesday at the Metascience Conference. It's something that hasn't come up much today: we've talked a lot about transparency in research practices, for individual researchers or teams, but not for journals. It's come up a little, like the idea of open peer review and things like that.
And I love the TOP guidelines and TOP Factor, but even in a ranking of journals' openness policies, we don't have any measure of how transparent journals are about their own practices. And that's crazy. We need to expect a lot more from journals in terms of their transparency. It's the same kind of problem: we just give journals the benefit of the doubt. If a journal doesn't disclose very much about its practices and policies, if it doesn't make its peer review history transparent alongside the published papers, if it doesn't audit its own work to see how it's doing on computational reproducibility, but it claims, trust us, we're doing a really good job of picking the best papers, we still hand that reputation to it; we give it that prestige for free. And that disadvantages the journals that do show all their work, that do show the evidence for the claim that they're selecting the best work. So that includes publishing the peer review history with every published paper. It includes not only making your policies transparent and explicit, but making sure that internal practices actually match those policies, and disclosing any informal practices that only people in the know would know about. And it also means investing in audits of your own journal to make sure that you're actually doing the things you claim to be doing. If you claim that you're selecting the best work in terms of impact, you could do audits to see how you're doing on impact. That's easy enough; I think we already have many of those metrics. But many journals also claim they're choosing the best science, the most rigorous, the most reproducible, the most generalizable. Well, they should invest in audits of that work. Or at the very least, they should not impede people who volunteer to do those audits for them.
And very often, not only will journals not pay for that work and incentivize it themselves, they will even be annoyed when other people do that work for them, right? The amount of transparency we expect from journals is basically zero, and there's very, very little that we demand journals do to earn their right to bestow prestige on research. So when we're talking about the importance of transparency and mandating transparency, I think we need to be thinking both about the research itself and how it's produced, and about how it's evaluated: the peer review process and journals' reputations and claims. All right, I'll stop there. Thank you.

Thank you. Hi, good afternoon, everyone. I'm really pleased to be here. I have no slides; I'm just going to tell you a little story. Before I do that, I just want to say what an honour it is to be here, having been involved with COS almost from the beginning. And I'm not saying that just because they happen to be up the road from me at the University of Virginia, but that is part of the story. Just to put the story in a bit of context, I was thinking about the four levers. I'm a big fan of levers. When I was a kid, I used to go to Clapham Junction and watch the trains come in, and there was always someone pulling a lever to change the points so that a train went into the right station. I thought, wow, that's so powerful. I went to the NIH to try and do that. The train moved slightly to the right or left, depending on your viewpoint, but it's not been straightforward. Then I've been involved with journals, with PLOS in particular, and that's been quite a journey. And also with societies, particularly in structural biology, where I think people have had a very significant impact on how we think about open scholarship. Now I've had the absolute privilege to do the same thing in an academic institution, by being able to start a school from scratch within a university.
A 200-plus-year-old university that's quite conservative but is open to new ideas, particularly as they relate to a new school, and particularly when you bring in the largest gift in the university's history. There are two messages here. One is that the school is in data science, and so one of the messages is that what's going on in data science represents a very distinct opportunity for all of us. In the 40-plus years I've been in academia, I don't think there's been anything quite like it in terms of the sheer interest in it. As an example, we're running an undergraduate minor right now that has 600 students in it. It covers all 70 majors in the university, so everyone from religious studies to economics to biomedicine is interested in doing this, and they're all trying to get quantitative skills. We'll actually have a major next year, and when we do surveys of the undergraduate population writ large, half of those students say they would potentially be interested in doing a major in data science. So they see what's happening in society and they're responding to it. They also come to it with a real altruistic viewpoint. Basically, they look at people like me and they say, you screwed it all up, and we want to try and fix it, and we want to develop the quantitative skills to do that. So that's where we're coming from, and the idea of pulling all this together has been really exciting. Starting from scratch gave us an opportunity, because we had a situation where we had no culture; there wasn't a culture when I started. I had to be awarded tenure in the new school by the provost, because there was no P&T committee to award me tenure. And so at that point I could do whatever I liked. So at a very early stage, when we literally had fewer than this many faculty, we developed a promotion and tenure policy that includes open scholarship as a very distinct part of it.
And all I can report is that three or four years later, we're trying to hire about 14 faculty a year; we usually get about 10, and so we're now up to about 40. Every one of them, before they even come and interview, is asked: have you read our promotion and tenure policy? Because if you haven't, you really need to, because we expect you to buy into it and sign on to it. It's essentially an opt-out policy: you're opting in to putting all of your research products in the public domain. You can opt out for reasons of IP, and publish in closed journals if you absolutely insist, but you still have to have preprints and things like that. And even in that short period of time, I would say that a real culture has developed. I can't say how happy it makes me to hear one of our young faculty members, who's responsible for our online masters in data science program, which we're now revamping, saying, let's put all the modules from that program, that whole degree, in the public domain. Because what we do, I mean, we're a business, there's no getting away from it, but our value is in our faculty's interaction with the students and what we add in student support. So the fact that this is coming from them, not from me, I think is a big deal. But there are some challenges. The interesting thing was that that policy went before the faculty senate. Right, we've been dealing with this for years. But what was interesting is that it didn't come from our School of Data Science, because we have 12 schools, and if the other 11 schools got wind that one of the deans was putting forward a policy for the whole university, that wouldn't go down so well. So it came forward sort of anonymously. And by some miracle, the faculty approved it. So they've essentially signed on, unbeknownst to them in most cases, I think, to what is at this point not a policy but a set of guidelines.
But it is now, over a period of years, moving its way towards university policy. So this is one institution under almost perfect circumstances, and I think it's an exception rather than the rule, but there are lessons to be taken away from it. Subsequently, in alignment with that, we formed an open scholarship working group, which Brian and others are on, including the Dean of Libraries, John Unsworth. The role of the library in everything we're doing, in establishing the infrastructure to support this, is really important. Even though the faculty didn't know what they actually signed on to, getting that kind of adoption is still going to be a slow process; it's going to take time. And there are many steps, many of which I've learned about from what I've heard today, that I think are going to be very valuable in the process we're putting forward. But there is a general sense in the leadership of the university that this is the direction we have to move, and it's not just being driven by mandates from funders; it's a sense that this is what we should be doing as an institution. And we're trying to help move that along. Thank you.