All right, let's get started. Welcome, everyone, and thank you for joining us for Lightning Talks E: MetaScience. We have five ten-minute lightning talks ready for you. We're going to start with Teresa, who is from the University of Nevada, and her talk is Science Journalism's Usage and Understanding of Open Access Research. So with that, go ahead and take it away.

Thank you so much, everyone. I hope you can all see my screen. So again, my name is Teresa Schultz. I am a librarian and an associate professor at the University of Nevada, Reno, and I'm going to be talking today about a research project I conducted recently called Science Journalists' Knowledge and Views on Open Access Research. As for why I wanted to do this project: I'm sure all of us are aware that one of the main benefits open access advocates always talk about is enabling access to scholarly research for everyone, not just those in academia. However, a large portion of the studies that have looked at the impact of open access have focused on usage by others in academia. I wanted to break outside of that, and I was particularly interested in the news media, because studies in the past have shown that the general public often never views articles themselves; they rely on the news media to learn of them and to learn about the findings of scholarly research. So the news media, I think, plays a really important role here. I had four research questions in my project. What factors affect a science journalist's knowledge of OA research, including preprints and postprints? What factors affect a science journalist's willingness to use OA research in a news article, including for specific types of OA articles? How do science journalists feel views on OA research have changed because of COVID-19? And what do science journalists think about the idea of predatory publishers? This was a rather large survey, so I don't have time to go into all of my findings today. I'm just going to give some highlights; I'm really not even going to touch on that third research question, but we'll cover the main things.

As far as methodology, I conducted a survey using Qualtrics of science journalists who were working in the United States at the time of the survey. I initially contacted three different professional associations for science journalists, and I included environmental and healthcare journalists in this as well. I only heard back from one of them, the Association of Health Care Journalists, and they gave me permission to email my survey to their listserv in late fall of 2021. Unfortunately, I only received about 15 responses from that, not really enough for analysis. So from that point, I also started manually curating a list myself. Essentially, I looked for lists of news organizations in the United States, then searched their sites for anyone who had written anything science-related in the past year. This included newspapers, TV news, radio, and magazines, as well as organizations particularly focused on science. From that, I was able to curate a list of about 500 science journalists, and I emailed them the survey individually in early 2022. Between those two attempts, I ended up receiving about 82 usable survey responses, and that's what we'll be looking at today.
There was a lot of demographic and background information I asked about, but some questions were of particular interest. This was a highly educated group: the vast majority had at least a bachelor's, and 61% had either a master's or a PhD, so a majority had an advanced degree. Time spent working as a science journalist was pretty broad, ranging anywhere from one to 42 years with a mean of 11.6 years, so this was a fairly experienced group of science journalists. I also had several questions to get a feel for how comfortable they were working with scholarly articles, whether they were familiar with the definition of peer review, things like that, and they were pretty familiar. One of the questions I asked was how often they actually use a scholarly article as a source in their news stories: 80% said they used a scholarly article as a source in at least half of their stories, and more than 60% said they did so in more than 75% of their news stories. So they are definitely going to scholarly articles as sources pretty frequently.

I was then interested in getting an idea of, okay, you use them as sources; how important is it that you actually have access to the full text? Because other studies have shown that journalists will sometimes rely on, say, a press release or an abstract. But a large majority said no, I have to have access to the full text in order to use it as a source in a news story. And of course my next question was, well, how important is it to get access to the full text for free? As you can see, a majority, 50 out of 82 respondents, said it was very important that the full text be free, and another 18 said it was pretty important. Only seven said somewhat important, and again, seven said not at all. So a clear majority said getting free full-text access is definitely important to them.

The survey then moved into trying to ascertain how willing they are to actually use OA versions as sources. I'm going to use the common language of the open access world, green, gold, and hybrid OA, but the survey defined these concepts and avoided that jargon. For green OA, I asked how willing they were to cite an article that appeared in an open repository such as PubMed or bioRxiv. As we can see, there's a bit of a mix here. The largest group, 29 people, though still a minority, said yes, we would use this as a source, no concerns. But pretty close behind, 26 people said yes, but it has to be peer reviewed, so essentially they want the postprint instead of the preprint. And the third largest group, showing even more hesitancy, said yes, they would use green OA as a source, but only once it has been peer reviewed and published. Interestingly, another question in the survey asked whether, when they use these as sources, they note in the news story if the article has been peer reviewed and/or published, and only about a third said they did that all the time. So they're concerned about it, but they're not always translating that for their audience. And as we can see, only one person said no, they would never use green OA as a source.
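As a quick check on the free full-text numbers above, here is a minimal tally sketch. The counts (50, 18, 7, and 7 out of 82 respondents) come from the talk; the code itself is only an illustration, not part of the study.

```python
# A minimal sketch, using the counts reported in the talk: how important is
# free full-text access? (50 very, 18 pretty, 7 somewhat, 7 not at all; n=82)
counts = {"very": 50, "pretty": 18, "somewhat": 7, "not at all": 7}
n = sum(counts.values())  # 82 respondents

for label, count in counts.items():
    print(f"{label:>10}: {count:2d} ({count / n:.0%})")

# "very" and "pretty" together: 68 of 82 respondents, about 83%
print(f"important overall: {(counts['very'] + counts['pretty']) / n:.0%}")
```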
I then asked about gold OA, and we can see there's definitely more openness toward it. I defined gold OA as a fully open journal, such as PLOS ONE, something they would probably be familiar with. A majority of respondents, 48, said yes, they would use these as a source, no concerns. Fourteen said yes, unless there were red flags about the journal. Eight said they would be hesitant, but okay if they already knew the journal. Nobody said they would not use it as a source. And then we see even more openness for hybrid OA: 55 people said yes, no concerns; seven said yes, unless there were red flags; and four said hesitant, but okay if they knew the journal. So out of all three of these, only one person ever said they would not use a version, and that was for green OA.

Finally, I was interested in getting an idea of how aware they are of the idea of predatory publishers. I know there's certainly disagreement about what predatory publishers are and how concerning they are in the open access world, but I wanted to get a feel for the views among science journalists. I asked if they had heard of the term, and 77% said they had. I then asked those who had heard of the term to define it. This was an open-text, qualitative answer, so I coded it. Most responses included two elements, which essentially got at the ideas of charging for publication and of no peer review. 73% of this group also indicated that they were concerned about predatory publishing. And finally I asked those who were concerned how they try to evaluate a journal or publisher to see if it's predatory. Reviewing a journal's website was the most common response, followed by asking researchers, and the third most common response was using a watch list such as Cabells'. For those who chose checking a journal's website, I asked what they were looking for, and grammar was essentially the most common answer given, which is a little concerning, because that has been shown to not always be a good indicator of a predatory publisher.

So what are the implications? I found their need for free full text really interesting; it was very strong, which I think shows there is certainly a role open access can play in helping science journalists do their jobs and communicate research to the public. There was broad familiarity with and willingness to use OA as sources for news stories, which I also found positive, but with some hesitancy around green OA, which I thought was particularly interesting considering, again, they're not really making those indications for their readers as to whether an article has been peer reviewed or published. And of course, there certainly is concern about predatory publishers. I didn't see anything too wild in the definitions they gave, but I do wonder if some more education could be helpful for science journalists. And that is all I have. Thank you so much, everyone. Now to find my mute button.

All right, thank you very much, really appreciate it. So next we're going to move on to our next presenter, whose talk is titled Reimagining Sustainable Publication Funding Models for Worldwide Uptake of Open Access. I'm sorry, but I'm not going to try to pronounce the presenter's name, because I'm afraid I'm going to butcher it.

Hello, everyone.
I'm Anagha Nayar, a scientific content expert from Enago Academy. Today I'll be sharing insights on the topic Reimagining APC Models for Worldwide Uptake of Open Access. This is derived from our global study, intended to drive greater global adoption of open access publishing. The key areas of focus in our study included perceived attitudes toward APCs in open access publishing, the perceived impact of APCs on knowledge dissemination, the receipt of publication funds for OA publishing, and proposed solutions to address the APC dilemma among researchers and different industry stakeholders.

OA publishing has grown by over 75% since 2000, and since 2020 over 50% of articles have been published in some form of OA. While open access has been gaining traction, significant barriers remain that hinder its overall growth. One such significant barrier is the financial burden of article processing charges, also called APCs. APCs have often been cited as a financial concern by many researchers. When we conducted this study at Enago Academy, it was revealed that over 48% of researchers supported APCs when provided adequate funds; however, 25% opposed them entirely. These divergent opinions reflect the ongoing challenges and discussions surrounding the evolution of scholarly publishing and the role of APCs within it.

Now, speaking of the perceived impact of APCs on knowledge dissemination, it prompts the adoption of sustainable OA models. Not surprisingly, 60% of researchers considered mandatory APCs a threat to open science, reflecting growing concern over the financial burden placed on both researchers and institutions. Furthermore, 51% of researchers partially agreed, and 25% strongly agreed, that APCs have an impact on OA publishing. More astonishing is that 57% of researchers refrained from publishing in OA journals due to unaffordable APCs, which poses a serious threat to fair and equitable knowledge dissemination. We also found that 22% of researchers were against APC models, wherein 13% opposed APCs and 9% preferred APC-free OA journals. 52% believed that open science can be maintained without APCs, and emphasized the need for rigorous editorial and peer review standards to ensure trust and avoid conflicts of interest. This highlights how funding influences researchers' ability to publish in OA journals. Furthermore, reliance on personal funds or collaborative contributions can be challenging, especially for those with limited resources. 4% of researchers refrained from publishing in OA journals with APCs due to a lack of funds. Furthermore, 80% of researchers perceived APCs to be expensive, and 57% expressed interest in publishing in APC journals given funding support. Flexibility in pricing models and alternative dissemination avenues were suggested to address this perception.

On investigating trends in funding for OA publishing, it was revealed that the majority of funding came from public and government funding sources, which underscores their responsibility in sponsoring APCs. But shockingly, 67% of respondents were either completely unaware of publishing fund support or did not have sufficient information to request it in their grant applications. It was further reported that 67% had never received funding support for publishing. Of the remaining respondents, 36% had received a fixed, capped amount of APC funds, and 24% had never received any capped APC funds.
However, 17% of researchers reported receiving consistent capped APC support from their funders. This brings to our attention that setting limits on APC funds can create an inequitable environment for researchers with limited financial resources, and may increase existing disparities and biases in academic publishing.

Acknowledging these findings, we at Enago are committed to driving sustainable open access through innovative platforms and educational initiatives promoting equitable access to knowledge worldwide. Our initiatives focus on these key areas. We introduced Open Platform to encourage researchers and industry stakeholders to share their research summaries, opinion pieces, articles, and so on. We intend to facilitate open access knowledge sharing for researchers globally, especially those from low- and middle-income countries, and to boost their visibility. Here is the QR code for Open Platform; you can scan it and register on Open Platform to engage with our readers. To help researchers find the most relevant journal for publishing, we integrated the Open Access Journal Finder into Enago Reports, empowering authors to identify reputable open access journals aligned with their research while simplifying their manuscript publishing journey. Furthermore, we actively educate researchers and different industry stakeholders on the significance of open access practices through our webinars, workshops, and other resources. By facilitating knowledge exchange, providing insightful tools, and increasing awareness, we aim to accelerate the transition toward an equitable and ethical open access system globally. Through these efforts, Enago strives to be a catalyst for an ethical and sustainable open access ecosystem, democratizing knowledge and respecting the needs of all the stakeholders involved.

Here are the main takeaways as we conclude our presentation. There is a growing lack of awareness of article processing charges and their funding sources, and this lack of knowledge can introduce barriers, especially for researchers with limited funds. We believe that educating researchers about available funding and providing a centralized platform with this information could help. Many researchers view APCs as a threat to open access, considering them a financial burden on researchers and institutions alike. To address this, transparent communication is needed on APCs' role in sustaining open access publishing, and alternative economic models should be explored to distribute costs more equitably. Transparent funding policies, alternative funding models, and open discussion of available resources should be promoted to educate researchers about pathways to OA publishing. Furthermore, a collaborative multinational funding approach, redistributing costs equitably based on economic indicators, could be a potential solution. Should you have more thoughts on democratizing knowledge via scholarly communication, please reach out to us by email at academy@enago.com. Let's come together to democratize knowledge through multi-stakeholder dialogue, building consensus on viable, ethical, sustainable, and equitable open access models worldwide. Thank you so much, everyone.

Thank you. All right, we're going to transition to our next presenter, Erica, whose presentation is Results from a Springer Nature and Code Ocean Pilot to Support Code Sharing. Erica, over to you.

Thank you so much. Let me make sure I find my presentation. Here we are. Let me put it in presentation mode.
Is this visible and okay for everyone? Okay. So thank you so much for letting us talk about our efforts to promote open code. I work for Springer Nature. I've been working in editorial for many years, and many of us are scientists: we initially worked in research and then transitioned to editorial roles at certain journals. I just want to explain that we bring that passion for open science and for sharing research into the job. In many of our journals, editors think very deeply about how they can move the needle to improve the science that we publish and the way readers and other researchers can benefit from it. One of the initiatives we've had for several years is thinking about how code that is associated with publications can be surfaced better to the research community as part of the publication. The story I'm going to tell you today combines three things that we've found are a sort of formula for us, the combination of policies, editorial expertise, and technological solutions, and how those three things have worked for us to improve the code that is shared as part of our publications.

I'm just going to summarize here what we consider community best practice when it comes to sharing code. Code is a research object in its own right. It is often associated with publications, particularly in our case, meaning the publication is either primarily using that code to produce the results it describes, or the publication itself is very much about that code. So it really needs to live alongside the publication, on the platforms that best suit that particular object. In the case of code, we think it needs proper documentation, so a reader can sufficiently know how that code is to be accessed, shared, and reused; that might include the right systems, technical requirements, licenses, dependencies, et cetera. There's also value in recognizing that code can come in many flavors, so you can share it in many different ways. But when we're in the business of thinking about the code associated with a publication, we often think about code being shared as the version of record: the code that was used for that publication needs to follow a level of quality control and reporting that is on par with what we expect of publications. In that sense, we believe strongly that peer-reviewing the code, having it checked by somebody else before it's shared with the publication, is useful, as is ensuring that the code living alongside the publication can be permanently accessed in an open repository, using, for example, a permanent identifier, so that that version of the code can always be accessed in perpetuity. These, in a nutshell, are best practices that resemble those for data: the FAIR practices we're all familiar with.
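As one concrete illustration of the documentation point, here is a minimal sketch (mine, not part of the pilot) of how a paper's code might record the exact environment it ran in, so that the archived version of record carries its own dependency information. The package names and file name are placeholders.

```python
# Minimal sketch (not part of the Springer Nature pilot): record the exact
# environment a paper's code ran in, so the archived version of record
# documents its own dependencies. Package names below are placeholders.
import json
import platform
import importlib.metadata

def environment_report(packages):
    """Pin the Python version and the versions of the listed dependencies."""
    report = {"python": platform.python_version()}
    for name in packages:
        try:
            report[name] = importlib.metadata.version(name)
        except importlib.metadata.PackageNotFoundError:
            report[name] = "not installed"
    return report

# Written next to the analysis code so it is archived with the deposit.
with open("environment.json", "w") as fh:
    json.dump(environment_report(["numpy", "pandas"]), fh, indent=2)
```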
So I'm going to quickly go through the successes we've found through our initiatives in combining these three things: policy, editorial expertise, and technology. The first thing I'll say is that we do believe a simple policy can go a long way. For journals, our approach has been to try to unify the policies across all our journals and books into a recently announced policy focused on transparency: we want every article that has used code to include a statement, or a section within the article, that tells the reader where that code can be found. We think this is the minimum we can aspire to, that research is transparent about how sharing is happening. In addition, many of our journals had in fact been deploying that policy for many years and have been thinking about how to integrate that code earlier. If an article requires code to reproduce the results, to produce the findings, or the code is the main subject of the article, we want it shared with the editorial team through the submission process as soon as possible, so that we can make sure reviewers are exposed to it. At the time of publication, we can then ensure the dual value of the article having been checked by peer review, and also the code that is central to it. Obviously, having the code shared earlier allows us to find mistakes with it, if there are any, so the researchers can fix them, and we can also ensure through editorial expertise that those standards for sharing are upheld.

Doing this is very cumbersome, though, and code doesn't quite have the same needs as a Word file. Our submission system was clearly very limited in allowing us to embed code sharing into the peer review process, and eventually into publication of the final article. So back in 2017, when platforms were starting to emerge that enable code sharing in a more sophisticated way, for example through Docker files, container files that host the data and the code and allow any user, any reader or researcher, to virtually run the code, play with it, and eventually also reproduce the results that are described, we thought we would benefit from partnering with these platforms, so that in our submission system our authors were supported in sharing that code. We recently integrated these platforms into our submission system with that idea of supporting authors, but that is not the only way they can share code with us: when an author is asked to share code for peer review, we also allow them to tell us about alternative places where they're choosing to host it. But importantly, there is a service aspect, a facilitation of access coming from us, which I think is an important role we can play. It allows people who may not have knowledge of, access to, or experience with these things to benefit from their being part of the submission experience they get with the paper. In a nutshell, this gives us an opportunity to integrate the sharing of code into the life of the peer review of the paper, and to make it very easy for reviewers to have a platform ideally suited for checking the code, which is obviously important because peer reviewers are already doing a lot of work for articles. Eventually this also leads, we hope, to a better experience for readers: because that code lives on an open platform and is directly executable, we believe there is a benefit to the reader. So this integration has advantages for researchers, reviewers, and ultimately readers. And if authors choose a different form of sharing, that also benefits from a lot of these workflows and can be surfaced in the paper in some way as well.
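To picture what "directly executable" means for a reader, here is a minimal local analogue, assuming Docker is installed; the image name and script are hypothetical placeholders, not the pilot's actual platform, which runs code in the browser.

```python
# A minimal local analogue (not the pilot's actual platform) of running a
# paper's containerized code. Assumes Docker is installed; the image name
# and script below are hypothetical placeholders.
import subprocess

def reproduce(image, script="reproduce_results.py"):
    """Run the paper's pinned environment and return whatever it prints."""
    result = subprocess.run(
        ["docker", "run", "--rm", image, "python", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(reproduce("example.org/paper-capsule:v1.0"))
```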
What we saw was this. I'm only showing results from six of our journals, the initial adopters; we're looking to expand this. Obviously, as you can see, the amount of code associated with publications depends on the type of community we're talking about, with computational communities developing code at a much higher frequency than other journals. But what was encouraging is that authors do appreciate the service and take up these kinds of sharing platforms to quite an important extent: between 15% and over 40% of authors choose to share in this way. We're also showing here that reviewers engage with these platforms and find them advantageous, so we have high engagement from reviewers during the process. We've heard good feedback from the community, which is always good to see when starting something like this, so there are benefits both to reviewers engaging in this way and to authors sharing their results in this way. The last thing I want to mention very quickly is that this combination of policy and technology won't really solve all the problems by itself, because there is a champion behind it: our editors are our champions. They are the people who really care to make this work for researchers, for reviewers, and ultimately readers, and they make sure these things reach the level of compliance that actually matters. So this is an example of what a code availability statement can look like in the best-case scenario, where all these practices are followed and we have an openly archived, citable version of record of the code, as well as more dynamic versions available to readers if they want them. I think that's it from me. Thank you.

Thank you, Erica. All right, we're going to welcome our next presenter, Kate, whose presentation is In the Era of Open Science, Communication of Research Results to Participants Remains an Afterthought.

Thank you. Are you seeing the full version of my slides? Yep. Okay, great. So hi, everyone. Thanks so much, so glad to be here. I am an assistant professor of bioethics at Seattle Children's Research Institute and the University of Washington School of Medicine. As mentioned, today I'm presenting a paper that my colleagues and I have written, arguing for a need to address a gap we see in current open science efforts, which is the communication of research results directly back to research participants. This image certainly does not do justice to the incredible breadth of open science efforts we've heard about over the last two days, but think of it as a very rough summary of the broad intent of such efforts, which is to allow the general public, taxpayers, to have access to the products of federally funded research. This basic information flow applies to all types of research, but for research that collects data directly from human participants, we think there's an additional group of people who need to be included here, and that's research participants. These are members of the general public, a subset of the general public, who went above and beyond the typical taxpayer contribution to contribute their own time, energy, access to their data, and sometimes even their bodies for the benefit of the scientific enterprise. These generous individuals obviously contribute essential data to scientific efforts, and they could in theory benefit from current open science efforts; they could access free OA publications.
However, in practice, most research participants, like most members of the general public, are not aware that OA publications exist, let alone know how to find them or how to read them through all the technical jargon. Because of these barriers, we argue that making research products publicly available is not sufficient to honor the direct contributions of research participants; rather, there needs to be an additional step in this dissemination pipeline to allow for direct communication of results back to participants. It's important to note that what I'm talking about is communication of the aggregate or overall results of research. I'm not talking about giving an individual information about their own specific data that might be clinically relevant to them and their health; there is a whole bioethics literature about that return of clinically meaningful individual results, which I would suggest diving into if you're interested. What I'm talking about here is a much more basic communication: just the overall aggregate findings from the project.

So why should we do this? I offer two reasons, though there are more. The first is that it's a very fundamental way of demonstrating respect for participants. This is grounded in the Belmont Report, written by the National Commission for the Protection of Human Subjects in 1979, which is really the bedrock of human subjects research regulations in the United States. It's also a component of a landmark paper by Emanuel and colleagues, where they laid out seven benchmarks for ethical clinical research and specifically mentioned return of research results to participants as a way of demonstrating respect and fulfilling that seventh benchmark. Another reason is to satisfy the altruistic urge that motivated the research participation in the first place. We know that, among other reasons, one of the primary reasons people participate in research is that they want to help, they want to advance science, yet they rarely learn how they've helped. This is in direct contrast to what we see with even very small-scale charitable donations. Say you give $5 a month to your local public radio station: this small donation will be met with enthusiastic thank-you notes, gifts, maybe a monthly newsletter giving you updates on progress toward a shared goal. We see no such equivalent in biomedical research. And this is really notable, because decades of empirical studies have found that participants want to learn the results of research they've participated in, and there's some basic psychology literature showing that expecting to learn how you've helped motivates helping behavior, makes people more likely to help in the first place. So that raises the question: is it possible that sharing research results could actually encourage people to participate in research, encourage them to stay in a study over time, and maybe even motivate participation in a future study as a next step? We don't know the answers to that, but finding out is really important, because it's a persistent problem that too few people participate in research. There are ubiquitous challenges recruiting enough participants, with really meaningful consequences.
Up to 86% of clinical trials don't meet recruitment targets on time, and one study found that up to almost 20% of clinical trials close or terminate early due to poor enrollment. Really importantly, this chronic under-enrollment does not impact all groups equally; in particular, there's under-enrollment of participants from minoritized and marginalized social groups, and that fact threatens to undermine scientific progress, as the data we are collecting is not from a generalizable sample of the population, which really limits our ability to draw conclusions. So it's a problem that we need to figure out how to broadly motivate research participation, and this idea of sharing results in a more systematic way is a potential contribution to solving that problem.

So now that we've talked about the problem, how do we fix it? I turn to this graphic from the Center for Open Science; I think it's a great strategy for culture change, which I really like. Moving through it starting from the bottom, the first question to ask is: is it possible to do this kind of results communication? I'd say broadly, yes. Most researchers who work directly with participants have an email address or a phone number, so it would definitely be possible to send an email with research results at the end of a study. And while that sounds easy, it's actually not as easy as you would think. A lot of researchers that I've spoken with, and some interview studies, have found that uncertainty about how to do this communication ends up being a barrier: what's the right level to write at, and how exactly should we phrase things? There's also uncertainty about whether IRBs need to be involved in this kind of communication. It also tends to fall toward the bottom of the priority list when people have so many other things on their plate at the conclusion of a project. And it's certainly not normative: there's no standard practice, it's not the case that this kind of communication at the end of a study is commonly done, it is not rewarded or incentivized in any way, and it is not required.

If we think about the bulk of work needed to create this kind of culture change, I think we could find ways of really baking it into the practice of research from the beginning, even starting at the grant proposal stage. You could, for example, write a plan for sharing results with your participants into the data management and sharing plans for NIH grants. It could be included as part of annual or final progress reports for grants. And to give a bit more of an example: NSF and NIH both already have a component of progress reports called project outcomes, which is intended to be a brief summary, in lay language, of the results of the study at the end. For NSF, this is a report separate from the final progress report, while for NIH it's a section of the final progress report. And just to show you some examples of what these tend to look like (these are intentionally small because the details don't matter): the NSF submission portal actually allows the inclusion of images and hyperlinks, and as a consequence the resulting outcome reports tend to be much longer and richer, with a bunch more information in them.
The NIH instructions, in contrast, say the summary should be less than half a page, and so the outcome reports on NIH grants tend to be much shorter, less rich descriptions of the findings. I think that's unfortunate, because oftentimes NIH studies do involve human participants. A potentially easy step would be to take these already-written lay summaries and simply encourage researchers, in their annual progress reports, to send them out to their research participants, text they've already written, with a little checkbox saying they've done so. Small behavioral nudges like that could go a long way toward changing the culture so this is done more often. But importantly, I think the onus for this should ideally not fall on investigators alone. Institutions already have a bunch of other structures that could support such efforts, such as institutional media offices. We heard about the role of science journalists earlier in this session; science communication scholars, librarians, bioethicists like me, and other folks who think a lot about how to do this type of public engagement could be involved to support investigators in communicating their results. So, in closing, we really believe that open science should include directly sharing results with the portion of the public who contribute most directly to research: study participants. A lot of future work is needed to determine best practices for doing this, as well as to evaluate potential funding and infrastructure solutions for supporting investigators who want to do this better. So I really encourage you, if this is something you have thought about, or if you have ideas for solutions, please contact me. My email is there, and you can find our preprint at that link.

Our final presentation will be from Teresa again, together with Matt, and is titled Providing Context and Transparency to Scholarly Journal Evaluations.

All right, hello everybody, happy Friday. I'm Matt Ruen, a scholarly communications librarian at Grand Valley State University in Michigan. You've already heard from my collaborator, Teresa Schultz at the University of Nevada, Reno, so I offered to do the talking for this round. Teresa and I are the lead editors and co-founders of a new journal that seeks to provide an alternative, and we think improved, way to evaluate and discuss scholarly journals, focusing on context and transparency. We've listed our other co-founders and editorial board members here. There's a good chance you've encountered the term predatory publishing before, but in case you're unfamiliar, the basic problem is this: the technologies and practices that allow scholars to create, evaluate, and share more information more widely than ever before also make it possible for unscrupulous actors to make money by pretending to do so. As author fees became a more common part of the scholarly publishing landscape, so too did the equivalent of the classic vanity press, just for academic journals: self-styled journals little more than a website, with fake or sloppy peer review, offering rapid publication of submitted articles for a small fee. Sometimes these scam journals may deceive authors into submitting work; sometimes they're a very rational choice, especially when research incentives prioritize quantity over quality.
In any case, predatory journals can spread misinformation, defraud or financially exploit scholars, or bury solid research in a flurry of poor-quality papers. But none of these potential harms is unique to any particular publishing model. Traditional subscription-based commercial publishing is equally vulnerable to sloppy peer review, lax editorial oversight, or outright fraud. And if we're talking about financial exploitation, what should we call a 30% profit margin on content that's freely given by researchers whose institutions then have to buy back access?

That framing aside, the most widespread responses to concerns about predatory publishing are watch lists of bad, scam journals or publishers to avoid, and safe lists of journals deemed good or safe to publish in. The appeal is understandable: a quick, objective, universal answer to whether a given journal is good enough for you. But this promise obscures a messy, complicated reality. The right journal for one researcher can easily be the wrong choice for another. A new PhD chasing tenure at a cutthroat Ivy League institution has different priorities than a long-tenured professor wrapping up a long-term project, or a medical researcher who wants to put their discoveries into the hands of practitioners as fast as possible. There is no universal, objective, binary determination of good or bad. Watch lists and safe lists can also reflect their creators' biases and assumptions. While many of the newer initiatives work to avoid Jeffrey Beall's notorious biases against non-Western publishing, they still tend to focus only on open access journals, and often only look at newer publishers, taking for granted that a journal is fine if it comes from a big-name publisher like Elsevier or Wiley. Current lists may make their evaluation criteria clear, but the actual evaluation process, or sometimes the findings, may remain opaque or hidden behind a paywall, as with Cabells' Predatory Reports. And inherently, not even the best lists can keep up with the proliferation of journals: new legitimate publications, new blatant scams, and new everything in between.

As academic librarians, Teresa and I and our co-editors advocate for a different approach, building on nuanced, contextual models that seek to assist researchers in conducting their own evaluations, like the excellent advice of the website Think. Check. Submit. or the principles of best practice developed by the Committee on Publication Ethics (COPE). In our view, the issues raised by the phenomenon of predatory publishing are best addressed with the same critical thinking skills scholars are trained to use in our daily work: considering the context of a journal and our own publishing priorities; accepting nuance and ambiguity, because many of the same characteristics can appear both in a scam journal trying to deceive and in a sincere journal trying to do well in the face of language, technology, or resource barriers; and then demonstrating the same transparency in journal evaluation that we advocate for with open science practices. Finally, we argue that we need space for a conversation. One of the many, many problems with the infamous Beall's List was its closed nature: journals and publishers listed as predatory or possibly predatory had little recourse to appeal or to demonstrate improvements. The best watch lists and safe lists today allow appeals but still present a closed result: a journal is good or bad, safe or risky, predatory or not.
There isn't room to explore the messy middle ground, the gray area between definitely good and definitely a scam. With these values in mind, we created Reviews: The Journal of Journal Reviews, or RJJR. RJJR is an open access journal, practicing open peer review, which publishes reviews of scholarly journals emphasizing context, transparency, and publicly available evidence about the journal in question. Our scope explicitly invites reviews of any and all journals: subscription or open access, free or costly, new standalone journals or titles from long-established mega-publishers. Additionally, we invite responses to those reviews as a venue for nuanced conversation between reviewers, scholars, and publishers.

For the reviews themselves, we developed a rubric drawn from our own experiences as librarians evaluating journals, from the COPE principles of best practice, and from similar resources. Reviewers are asked to consider a journal's transparency in its policies and practices; its observable behaviors; the people involved; any equitable practices by the journal; a non-expert but informed assessment of the journal's published work and how that relates to the journal's scope; and the background and history of the journal, including any affiliation with institutions or organizations. This doesn't result in a quantitative ranking or a binary categorization as good or bad. Ultimately, a scholar is the best person to assess the scholarship of a given journal and decide whether it meets their own needs. Instead, our reviews summarize this contextual overview, highlighting positive signs or potential concerns and leaving decisions up to the reader. Reviews are peer-reviewed, and anonymized peer review reports are published alongside each review, for further context and as part of our own transparency in practice. Meanwhile, the responses format provides a channel for conversation. A response can be an alternative to angry demands or legal threats if a publisher or journal disagrees with a review, or an opportunity for the subject of a review to highlight improvements made in, well, in response to that review. Scholarship is fundamentally a conversation; nothing is static and permanent and locked in one form forever. Responses, we hope, reflect this reality and can provide a more productive pathway for disagreements and debates about the quality of a given journal or publisher.

Our big goals for RJJR are fourfold. Our first, surface-level aim is simply providing a resource for scholars: assessments more detailed and more nuanced than a watch list or safe list can usually provide, which can inform a researcher's publication decisions. Second, and more importantly, we see our reviews as a way to model contextual, nuanced evaluation of journals, a framework for a reader to apply to unfamiliar journals they later encounter. Third, by focusing on transparency and publicly verifiable information about a journal's practices, we hope to encourage greater transparency from other journals. Fourth and finally, we are creating a platform for librarians and other scholars and experts to share the contextual evaluations we are already doing, work and disciplinary expertise that often goes unrecognized, or at least unshared.

So here's the part where we ask for your help. You can find us online, thanks to support from the Texas Digital Library, at rjjr-ojs-txstate.tdl.org/rjjr. Take a look at our first three published reviews, read through our rubric, and let us know what you think.
If there's a journal you think we should review, tell us, or better yet, submit your own review, especially any academic librarians watching: you may already do this sort of evaluation to help researchers at your institution, and you definitely have all the skills to do so. We're also actively building our pool of peer reviewers, so if you're interested in the project but submitting a review isn't in the cards right now, consider volunteering to review reviews. With that, I'll hand things back to our moderator. Thank you all for your time.

Thank you. All the presenters here have had impeccable timing, because we are right up to the minute. So, to wrap up, a round of applause for all of our presenters. Thank you so much, and thank you to those who have also been participating in the chat. I will see you all in the next session. Thank you, everyone.