Welcome to the next session. Please do subscribe to the conference Slack channel; we will pop the link in the chat. This is a lightning session, so each presenter is going to have eight minutes to speak. I am going to be timing people, and I will crash into your talk if you go over eight minutes, so please be aware of that. Unfortunately, because of the tight schedule, we don't have any room for questions directly in this session, but please do go to Remo afterwards to join the discussion and chat with the panelists; again, I will pop the link in the chat now. I think we have live captioning enabled, and I'm hoping that everything is working. So without further ado, I would like to invite our first speaker to start, and that is Sabina Bischoff, who is speaking about openness and transparent communication in animal-based research. Sabina.

Thank you for the kind introduction. I hope you can actually see my slides in the presentation. My topic today will be openness and transparent communication in animal-based research. I would like to present our own project, called CIRS-LAS, that is, a critical incident reporting system for lab animal science; I will repeat that in the further slides so you can remember the short name. I am a veterinarian, and since 2003 I have been working with lab animals. Through my work at the university hospital, I know how important it is to talk about failure in order to improve the well-being of patients, and therefore we implemented a critical incident reporting system for lab animal science already in 2015, and that is the topic I want to introduce to you today.

During the last few months, we have seen remarkable progress in transparency in animal science. For example, on the first of July this year we had a great global social media campaign supported by EARA, and on the same day the German transparency agreement for lab animal science was launched by the DFG, the German Research Foundation, together with the information platform Understanding Animal Research. On this day, a lot of countries all over the world took part in this campaign and started talking about transparency in lab animal science in general. All these initiatives had the aim of transparently informing about animal research and getting actively involved in an open dialogue about it. But when we think about talking about failures, how far does our transparency go? How do we deal with unexpected events? Is it easy for us to talk about failure? And should we publish data from failed studies? Let us have a look at human medicine and learn from its constructive culture of error. The so-called failure databases in human medicine were launched in the 1980s already, and since 2013 it has been obligatory for every hospital in Germany to provide a failure database, in order to improve patient health and welfare by learning from mistakes, near misses, critical incidents, and undesired events. When we think about openness and transparent communication in research, we see a strong need for a constructive culture of error in animal-based research as well. We support the central idea of a culture of care instead of a blame-and-shame culture: no one has to be blamed or ashamed for reporting transparently about mistakes, near misses, critical incidents, or undesired events. All these actions are important to improve our animal safety.
How can you achieve this as a researcher in an experiment? Well, it's not as difficult as you might think: let's just start talking about unsuccessful experiments, and that is the topic I want to introduce to you today in this short talk. On this page, you can see the web page of the web-based CIRS-LAS portal, the critical incident reporting system for lab animal science. Even on the start page you can get a lot of information about the project itself, and you are able to report a critical incident which happened in your experiment. On the next slide, you can see the first part of the form where you are able to enter, or upload, the critical incident. As you will see when you visit our web page, it will take you only a few minutes to give all the information needed to describe the critical incident, so we invite you to convince yourself: it's very easy to work transparently. On this slide you can see some of the latest reports on the web-based CIRS-LAS. Actually, we now have more than 200 registered users on our web page, and more than 50 reports have already been uploaded; all the registered users can learn from these uploaded reports. But even if it seems that easy, progress on transparency in lab-animal-based research depends on a working failure management system, which must be supported by the facility administration; that is very important. And in my opinion, scientific progress also benefits from reports on unsuccessful experiments, without wasting lab animals and resources. And now it's up to you: inform yourself on our web page, join us, and become part of CIRS-LAS and transparent communication. With this slide, I would like to thank you for your openness and transparent communication and for your attention. I'm open for your questions and happy to answer everything, and I would like to thank my team, our funder, the Federal Ministry of Education and Research in Germany, and our cooperation partners in Gießen, Norecopa from Norway, and the Animal Study Registry from Berlin. Thank you.

Thank you, Sabina, and you were ahead of time, that's absolutely fantastic. If anyone would like to ask Sabina questions after her talk, please do join the speakers in the Remo session after this session. So thank you very much, Sabina. The next person we have is Thomas Hostler, who's going to be talking about open research and academic capitalism. Thomas, come to the stage, and your time starts now.

Can you see the screen and hear me? Cool. So yeah, I'm Tom; I'm a senior lecturer in psychology at Manchester Metropolitan University, but I'm also doing a masters in higher education studies, and that's led me to this topic of open research and academic capitalism. In higher education studies, the object of interest is the university as an entity, and academic capitalism describes how universities are increasingly competing with each other in a market context. They're competing over students, but they're also competing over research, which feeds into the worldwide university rankings. So academic capitalism influences how research is organized within universities: which areas of research they fund, and who is employed to do the research, and how. You can think of this as asking: who manages the endeavor of research as collaborative knowledge production?
We see a shift away from research being managed by a self-regulated community of academics, who decide what is researched, who are generally intrinsically motivated to answer questions because they're interesting, who share those answers as a public good, and whose output and quality of research is assessed by the community through peer review. And we see a shift toward universities no longer just being places where these academics happen to work and which pay their salaries, but being actors themselves, strategically managing the academics in order to be competitive and increase their capital. So universities start to decide what is researched. This is a picture from my faculty's areas of excellence, and the university compels us to only research things that fall into these categories, wanting research that either moves us up the rankings or, if possible, commodifies and makes money out of the knowledge that's generated. And the quality and the outputs are generally assessed by reductive metrics such as citation counts and journal impact factors.

So open research is, as most of us know, initially a grassroots movement designed to make every part of the research process open, available, and free. This includes practical initiatives to promote sharing plans, data, materials, analyses, and outputs as publicly and freely as possible, and these initiatives then help to enable broader developments such as big team science and diversity and inclusivity initiatives. The grassroots open research, open science movement is a good example of something that's been developed by the community of academics, and it's broadly aligned with this kind of utopian way of doing science and with Mertonian norms. So what I'm interested in is: how do these two things interact with each other? On the face of it, you'd think they're in opposition, right? Open research is all about promoting and sharing knowledge for free as a public good, as openly and freely as possible. It's about sharing resources with other academics: instead of hoarding data for yourself to get as many publications as possible, you're sharing it for free, reducing the competitiveness between researchers and helping colleagues who might not have as many resources. Working in a really transparent way also disincentivizes people from using questionable research practices like salami slicing and p-hacking, which are done in order to game the metric-based assessment of research used by universities. And when we have a completely transparent and open project, it's easier for the academic community to make a qualitative judgment of how good the research is. A lot of open research is also about collaboration and working communally, especially these big team science projects which we've had a few talks about, but then also sharing the social capital generated from that: putting everyone who worked on the project on the paper as an author, and then using something like a contributorship statement.

But I'm also interested in the areas where these things might actually align. There is an argument for open research made in capitalist terms, about increasing the efficiency of research and extracting the maximum value for money from a piece of research for the funders.
And this is something that obviously universities are going to be very interested in for their own benefit. So there are increasing calls for open research to be introduced from a top-down perspective, by funders, by policymakers, by universities. But it then becomes another area of research activity for the institution to measure and manage, for their own ends, basically. Open research practices, when they're promoted bottom-up as a grassroots thing, are quite flexible in the way that people can use them. When they start coming from the top down, they're inevitably going to become less flexible: there will be stricter definitions of what openness is and what it isn't, again reductive ways of measuring it, and inevitably more bureaucracy and administration. If a university decides we want all our academics to do preregistration, they're unlikely to say, oh, we know that takes time, so we're going to give you fewer essays to mark. And my final point is that I also think open research practices can inadvertently facilitate the increasing casualization of research. If you're a researcher working on a long-term project, and you're working in a very closed way where you're the only one who can understand the analysis code and the way you've set up the database, then you're a very crucial part of that research project. If you start working in a way that's very open, providing detailed instructions so that anyone can come along and reproduce what you've done (the ideal aim of open research), then suddenly you're not quite as irreplaceable to that project or to the university. That last point is maybe a little bit speculative, but I do think it's something people need to consider long term when promoting open research movements. So that's the end of my talk. I'd be really interested to chat to anyone who's interested in the same things, and if anyone knows of research that's been written on this already that you can share with me, I'd be very interested to read it. I'll be in the Remo room afterwards. Thank you.

Thank you very much, Tom. Interesting to know if we're all just going to be replaced by open data anytime soon. Thank you. Our next talk is from Patrícia Martinková, who is talking on the topic: does a zero inter-rater reliability mean grant peer review is arbitrary? Patricia, please take it away.

Thank you. Let me try to share my slides. Okay. So hello, and welcome to this talk on inter-rater reliability in grant peer review. My name is Patrícia Martinková; I come from the Institute of Computer Science of the Czech Academy of Sciences, and I am a statistician. I will be presenting joint work with Elena Erosheva and Carole Lee from the University of Washington. The motivation for our research is the fact that grant peer review allocates billions of dollars of research funding, and when selecting which proposals to fund, most grant agencies rely on peer review for assessing the quality and potential impact of proposed research. Assessing the quality of the peer review itself is thus very important. One measure of the quality of peer review is inter-rater reliability. In simple words, this measure assesses the consistency among raters. And there are some accepted values, such as that inter-rater reliability above 0.6 signals good inter-rater reliability, while inter-rater reliability below 0.3 is low.
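For a concrete reading of those thresholds: inter-rater reliability here is an intraclass correlation, the share of rating variance attributable to true differences between proposals, and the multi-rater value follows from the single-rater one via the Spearman-Brown formula. This is the standard one-way formulation, added here for orientation rather than taken from the slides; note that a single-rater ICC of about one third, averaged over k = 3 raters, is exactly what reaches the 0.6 "good" threshold:

\[
\mathrm{ICC}_1 = \frac{\sigma^2_{\text{proposal}}}{\sigma^2_{\text{proposal}} + \sigma^2_{\text{error}}},
\qquad
\mathrm{ICC}_k = \frac{k\,\mathrm{ICC}_1}{1 + (k-1)\,\mathrm{ICC}_1},
\qquad
\text{e.g. } \mathrm{ICC}_3 = \frac{3 \cdot \tfrac{1}{3}}{1 + 2 \cdot \tfrac{1}{3}} = 0.6 .
\]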
One recent and highly cited study by Pier and colleagues found exactly zero inter-rater reliability in a mock peer review of funded NIH proposals. Since then, this study has been cited as evidence of complete arbitrariness in peer review. So the main aim of our study was to answer a simple question: does a zero inter-rater reliability really mean that grant peer review is arbitrary? To answer this question, we took real data from a complete range of submissions to the National Institutes of Health and to the American Institute of Biological Sciences, and we calculated inter-rater reliability for the complete range of submissions and for restricted samples of different ratios of proposals. The figures on this slide are of three types. On the left, we have the complete ranges of ratings, each bar representing one proposal, ordered from the best on the very left; we see that the patterns of ratings are similar for the two grant agencies. In the middle figures, we have estimates of inter-rater reliability, together with their confidence intervals, for the complete-range sample on the very right and for restricted samples of a given ratio. We see that for both grant agencies, the single-rater inter-rater reliability for the complete range of submissions is about 0.3, which corresponds to a multi-rater inter-rater reliability of over 0.6, considering that the average of three raters is used; and this signals good inter-rater reliability. However, local inter-rater reliability estimates from restricted samples will likely be zero under many scenarios, including the case of a proportion of about 20% funded projects, as was the case in the Pier et al. study. Finally, on the right, we explored statistical reasons behind zero inter-rater reliability estimates even in cases when the true value is not zero; this can be due to well-known estimation issues for a low number of raters. To learn more, read our paper published in the Journal of the Royal Statistical Society: Series A earlier this year. Our methods and interpretations are implemented and available for you to try in the interactive software ShinyItemAnalysis. A new function, ICCrestricted, provides estimates of inter-rater reliability in restricted samples, and in the interactive app you can select the ratio of top or bottom proposals; here it's 83%. Here you see the part of the proposals that is used for the calculation of inter-rater reliability, and here on the right you will find the estimate of inter-rater reliability together with its confidence interval. The software then provides an interpretation and also sample R code to replicate the analysis in R. To conclude, we demonstrated that estimating local inter-rater reliability from subsets of restricted-quality proposals will likely result in zero estimates, even when the inter-rater reliability in the full sample is not zero. The question then is: is it valid to interpret range-restricted inter-rater reliability estimates as indicators of peer review quality, when the reviewers were in fact asked to score grant proposals across the whole range of submissions? Our answer is no, at least not from the measurement standpoint. When reviewers are asked to differentiate among grant proposals across the whole range of submissions, we recommend against using restricted-range local inter-rater reliability.
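To make the range-restriction effect tangible, here is a minimal simulation sketch in Python. It is not the authors' analysis (their ICCrestricted function in ShinyItemAnalysis handles the estimation and confidence intervals properly); it just illustrates how a healthy full-range ICC can collapse toward zero once only the top roughly 20% of proposals is kept:

import numpy as np

def icc1(ratings):
    """Single-rater ICC(1) from a one-way ANOVA on an n-proposals x k-raters matrix."""
    n, k = ratings.shape
    row_means = ratings.mean(axis=1)
    msb = k * np.sum((row_means - ratings.mean()) ** 2) / (n - 1)       # between-proposal mean square
    msw = np.sum((ratings - row_means[:, None]) ** 2) / (n * (k - 1))   # within-proposal mean square
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
n, k = 400, 3
quality = rng.normal(0.0, np.sqrt(0.3), size=n)                          # true proposal quality (variance 0.3)
ratings = quality[:, None] + rng.normal(0.0, np.sqrt(0.7), size=(n, k))  # rater noise (variance 0.7)

full_icc = icc1(ratings)                                     # close to the true value, 0.3
top20 = ratings[np.argsort(ratings.mean(axis=1))[-n // 5:]]  # keep only the "funded" top 20%
restricted_icc = icc1(top20)                                 # collapses toward (or below) zero

spearman_brown = k * full_icc / (1 + (k - 1) * full_icc)     # reliability of a 3-rater average
print(f"full range:   single-rater ICC = {full_icc:.2f}, 3-rater average = {spearman_brown:.2f}")
print(f"top 20% only: single-rater ICC = {restricted_icc:.2f}")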
If review scores are intended to be used for differentiating among top proposals, we recommend that peer review administrators and researchers align review procedures with their intended measurement. Finally, we demonstrated how the interactive software ShinyItemAnalysis may be used to support dissemination, replicability, and open science. Thank you for your attention; I'm looking forward to any further feedback or comments. And here are some references and my acknowledgments. Thank you.

Super. Thank you very much, Patricia, that was absolutely fascinating. And as Patricia says, if you have questions, or if you'd like to discuss with her or with any of the other presenters in this session, please do go to the Remo platform after the end of this session. Next up, we have Cassio Amorim, who is talking about SciGen.Report, a platform for sharing reproducibility information. So Cassio, please can you come to the stage, if you can unmute and share your video. I cannot share my video, apparently. Share your video; see if we can do that. Okay. Here we go. Okay. I'll try to share my screen, just a second. I hope you can see it; give it a sec. I think we're not quite seeing your slides yet. Do you want to try and move them forward? Move a slide forward. Perhaps, Cassio, if we come back to you, I'll have a quick chat with you and you can share your slides with me, and we'll figure that out. So we will pop on to the next person, who is Daniel Drevon, talking about a database to support aggregation of evidence from single-case experimental designs. So Daniel, if you are ready, hop on to the stage and please do start.

Can you see me? Okay. Hi, everyone. I'm Daniel Drevon, associate professor in the department of psychology at Central Michigan University. I'm here to tell you today about a tool my team and I are building: a repository, or database, to support aggregation of evidence from single-case experimental designs. I do want to acknowledge my two doctoral students, Allison Kurt and Elizabeth Kovl, who have been a huge help with this project. I'm a school psychologist by training, a discipline that sits at the nexus of education and psychology, so I'm interested in evaluating the effectiveness of academic and behavioral interventions implemented in school-based settings. Logically, I investigate that using single-case experimental designs. Over the last few years, I've become really interested in quantitative analysis of data yielded through single-case experimental designs, and especially the aggregation of evidence from single-case designs in meta-analysis. Along with randomized controlled trials and quasi-experimental designs, single-case designs contribute to the conversation about evidence-based interventions in education, psychology, and some other fields as well. When they're designed and executed well, single-case designs are able to answer questions about whether an independent variable caused a change in a dependent variable or variables. This type of design isn't super commonly known, so I thought I'd talk about some of its core characteristics. One is that cases, and that usually means individuals, are the unit of analysis, so individuals serve as their own controls.
These designs are also characterized by researcher-manipulated independent variables, and by ongoing and repeated measurement of dependent variables before, during, and after the introduction of an independent variable. Something that's kind of unique is that the data are displayed graphically and analyzed visually, so you don't typically see integration of traditional inferential statistics in this type of design. This design is used often in educational and psychological research, often to study low-incidence populations. There's a strong tradition of visual analysis among researchers who use single-case experimental designs, and that has some disadvantages, namely problems with inter-rater reliability: whether two people come to the same conclusion about the data that are displayed visually. Partially as a response to this, methodologists and statisticians have developed several different quantitative approaches to analyzing and aggregating data from single-case designs.

Take a look at this figure for an example of how data might be graphed in a single-case design. This is a common way to look at the data, where the y-axis reflects some dimension of behavior, in this case the percentage of observation intervals in which, say, a second-grade student exhibits on-task behavior, which we could operationalize further as reading, writing, or orienting toward the teacher. The x-axis reflects time; sometimes that's days or sessions, it could be weeks. And the different phases of the experiment are separated by the vertical line between the baseline and intervention conditions. Visually analyzing these data involves looking at characteristics like trend, variability, and the immediacy of behavior change when a phase change occurs, and this visual analysis can help us determine whether an intervention caused a change in behavior.

One problem we face, for folks interested in quantitative analysis or aggregation of these data, relates to the fact that we need the numerical data behind the graphical displays included in studies. Researchers typically don't report the x-y coordinates of the data points in the graphical displays, such as the one in the figure I showed, yet those coordinates are needed to carry out most quantitative approaches to single-case data analysis. To obtain these values, there are lots of different plot-digitizing tools out there that can help us extract the data, and then spreadsheet software can help us manage the numerical values we extract. This is a screen grab from a plot-digitizing tool called WebPlotDigitizer. Basically, what these tools require us to do is upload graphs (that's the same figure I showed you earlier), configure the axes to superimpose a coordinate system on the graph, manually click each data point, and then export those values to a spreadsheet for management. As you can imagine, there's a significant amount of time and effort involved in extracting numerical values from graphical displays in single-case designs, and that places a considerable burden on researchers involved in the development and use of quantitative approaches to single-case data analysis. So, as a potential solution, our team is creating a repository of numerical values extracted from graphical displays included in single-case designs.
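To give a feel for what the extracted numbers enable, here is a minimal Python sketch. The column layout and the values are hypothetical, not the repository's actual format, and the effect size shown, non-overlap of all pairs (NAP), is just one of the common quantitative approaches for single-case data:

import pandas as pd

# Hypothetical layout for one digitized single-case graph:
# one row per extracted data point, with case, phase, session, and value.
data = pd.DataFrame({
    "case":        ["student_1"] * 10,
    "phase":       ["baseline"] * 5 + ["intervention"] * 5,
    "session":     list(range(1, 11)),
    "on_task_pct": [35, 40, 30, 38, 33, 55, 62, 70, 68, 75],
})

def nap(baseline, intervention):
    """Non-overlap of All Pairs: share of (A, B) pairs where B > A, ties counting 0.5."""
    pairs = [(a, b) for a in baseline for b in intervention]
    return sum((b > a) + 0.5 * (b == a) for a, b in pairs) / len(pairs)

base = data.loc[data.phase == "baseline", "on_task_pct"]
treat = data.loc[data.phase == "intervention", "on_task_pct"]
print(f"NAP = {nap(base, treat):.2f}")  # 1.00 here: every intervention point exceeds baseline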
The idea is that folks interested in quantitatively analyzing or aggregating these data could locate the spreadsheets for the studies they're interested in, download those data, and then go on and analyze them in any way they see fit. The data are formatted in a way that is easily read into RStudio or SPSS, and they're also compatible with a variety of packages commonly used for analyzing these sorts of data. The idea is that the repository would facilitate more timely and less effortful evidence synthesis; it would also reduce duplication of time and effort across research teams interested in similar research questions, and it would standardize some elements of data management. This seems increasingly important at a time when evidence synthesis of single-case designs has been accelerating very rapidly over the last decade or so. Ultimately, reducing the time and effort associated with evidence synthesis would allow findings to get into the hands of practitioners and policymakers quicker than they otherwise would. The work is underway: this summer we identified about 500 single-case experiments published in school psychology journals from their inception through 2020, and we've been able to extract data from 265 of those studies to date. These are formatted, managed, and housed on OSF. In terms of moving forward, the goal is to complete our data extraction and management for the single-case designs published in the school psychology journals, but then also to expand this to include data from special education journals, and perhaps even journals in behavior analysis or other disciplines down the line. So that's a quick overview of the project we've got going. If you're interested in single-case experimental designs or this type of tool, I'd be happy to talk more; my Twitter handle and email are there. Thanks for listening.

Thank you very much, Daniel, and yes, if you'd like to talk to Daniel you can hop into Remo after this session. And now, back by popular demand, let's have another go with Cassio. So Cassio, if you can come to the stage, and I will share my screen. Sure. Hopefully, can you see those okay? Yes, I can see that, thank you. Excellent. Just give me a shout when you want to change slides. Okay, I will tell you.

So, once again, thank you for having me. I'm Cassio, with CJS Inc. in Kyoto, and today I want to talk about SciGen.Report, a platform for sharing reproducibility information that I've developed. Before we go straight into the platform, please, next slide. Let's get on the same page about the issues. We're focusing here on reproducibility, and when we talk about reproducibility in recent years, several issues perhaps come to mind, but certainly the foremost example, the first thing that comes to mind, is the so-called reproducibility crisis that happens in some fields, where only a small fraction, or perhaps not so small, but a fraction of papers seem to be actually reproducible. And there are other issues too. For example, scientists in general are not very willing to do replication work; it's not that appealing. And then communication regarding reproducibility tends to be difficult, and not only in the academic medium. I actually work in the private sector, and the communication between industry and academia also has a lot of issues. It depends a lot, of course, but it tends to be quite difficult.
Perhaps not so much if you are a very big company, but in general it tends to be actually quite difficult, and this makes the dialogue regarding the stages of reproducibility quite hard for everyone. There are also cases of fraud which, again, tend to take a long time to detect and to be corrected. So, please, next slide. At the center of all that we have the problem: when you have a paper in your hands, is this paper reproducible? There are ways to tackle this question. You may ask around, but asking around has its own issues: we have limited information if we only ask our colleagues. We may all try to look on the web for answers, even using websites like PubPeer, which are great platforms, but still it can be hard to filter for the objectivity that we want in research. And of course you can try it yourself, which is great in itself, and something we should be doing a lot of, in my opinion; but there are also some issues related to that. It is hard afterwards to share your findings, and recognition usually does not come along with it. This links to what I just said: the lack of recognition is part of why scientists are not very willing to do this kind of work.

So, the platform, please, next slide. The suggested solution is this platform called SciGen.Report, which is a very simple platform. This is the opening page, where you just look up a DOI, and what the platform does is fetch the metadata for that DOI. You can go to the next slide to have a look at what it does, which is, as I said, very simple: on the top left, it pulls in the title, authors, publisher, and so on. The point is that users can then post their reports. So it is a way to tackle the issues I listed: you can share your knowledge with others, and you have this registered record of your attempt, which may give you some recognition. At the same time, if you are looking for what other people did, you have a place with these comments; in the bottom-right screenshot you can see what a report looks like for now: a short comment, plus whether the paper was reproducible or not, and to what degree, totally reproducible, partially reproducible, or not reproducible at all. And there is a circle, which might be a bit hard to see depending on your screen, that gives a summary together with the metadata: how many people tried, what the overall result was, how many succeeded, how many failed in their attempt. So you have a simple but at least useful filter for objectivity. Not only that: if you go to the next slide, users also have their own public profile, with all their reports and attempts at reproducing or replicating papers, and when they were made, so you could attach this to your CV or something like that. So, as I said, it is a simple but quite useful platform.
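The platform's internals aren't shown in the talk, but the DOI lookup it describes can be sketched in a few lines. Here is a minimal Python example against the public Crossref REST API, one place such metadata can be fetched from (the platform itself may well use a different source):

import requests

def fetch_metadata(doi: str) -> dict:
    """Fetch basic metadata for a DOI from the public Crossref REST API."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": msg.get("title", [""])[0],
        "authors": [f"{a.get('given', '')} {a.get('family', '')}".strip()
                    for a in msg.get("author", [])],
        "publisher": msg.get("publisher", ""),
    }

# A report form could be pre-filled from this result; the DOI below is the
# 2015 "Estimating the reproducibility of psychological science" paper.
print(fetch_metadata("10.1126/science.aac4716"))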
Those are the issues the platform tackles, but of course there are still remaining issues, on the next slide. Reproducibility is still hard, and we should perhaps ask ourselves: should it be hard? I think not, but it's a whole big question that we cannot solve easily. There are also the issues of how to engage people in actually reproducing research, and whether this kind of platform gives enough reward and recognition to prompt and invite people to actually share. I think such platforms could be especially good for, say, grad students, who gain a lot by trying to reproduce research papers; we can get a lot of knowledge from that in grad school. And, especially since, depending on your field, it might be hard to publish early in your career, or if you're not planning a career in academia, this kind of report can be a means to show that you are actively contributing somehow to the knowledge of the field by giving this kind of information to everyone. So, that's it. If you are interested, please come and talk to me; I will be in Remo. And as a last piece of advertising: we will have a panel next week where we will talk precisely about engaging the community in research reproducibility. I'll be the moderator, so we will actually listen to four other great researchers give their knowledge and opinions on that. Thank you.

Thank you very much; I think that was worth the wait. Thanks very much, Cassio. So, on to our next speaker: we have Dragan Okanović, who is speaking about Unfold Research, the web for science. So, let's get up and ready. Hello. Can you see the screen? Yep. Do you need to go to present mode? There we are. Yeah. Okay.

Thank you all for being here, and big thanks to the organizers of the conference; it's a great opportunity, with a lot of interesting talks. My name is Dragan, and today I'll be talking about Unfold Research; I'm the founder of that project. As one speaker noted yesterday, it seems that we talk more about the problems that exist in science and metascience, but we rarely offer solutions, so I'll be focusing on delivering a solution. Basically, Unfold Research is a project that tries to achieve what open science was always about: openness, and being able to access all the additional materials that other people could use to replicate outputs and other work. But people also lack the tools to do just that, and they lack the incentives, because any additional work besides doing the actual research, well, nobody bothers to reward that. The way we are tackling this is by providing tools and services that measure different kinds of metrics, covering a broader scope of activities and various different types of research work, whether it be replication studies, negative results, peer review, or perhaps data verification or data collection. We want to reward all of those, and the way we're going to incentivize that is by measuring all of them, giving some value to all of those research outputs, and basically paying researchers for their contributions to the community. We will let the community assess how valuable and impactful those additions have been, and based on that we're going to pay the researchers. More concretely, what we are currently developing is a browser extension that you install, and it basically sits there for you: as you're browsing the internet, you will be notified if there is some additional content posted by the community members. All the entries are posted for a specific paper or a specific URL.
So you can expect all the content to be of super high relevance to what you're currently looking at. The content can be of various types, whether it's something regarding replication or data, or perhaps a review; we offer various different modules that you can attach to a link or to papers, so it's very flexible. The community is able to cast votes on all of the entries, which makes sorting and filtering through what could be a lot of content much easier, and gives you a very clear indication of what the community considers the highest-quality and perhaps most relevant content. And authors, based on the amount of impact and the points they've collected during a period of time, let's say a week, will earn money for those contributions. The collective research fund that we will be making grows through user subscriptions, a kind of premium model. We expect there to always be more consumers of the content than creators, which creates a balance where consumers of the content pay for the continued work of the content creators, so they can keep creating new content, publishing it, and making it discoverable for everyone. Other than subscriptions, we're also including individual or organizational, private and public, funders and investors, and we're also considering adding an option for a Web3 token, similar perhaps to some other projects that are working with blockchain technology.

As a very simple and short demo, I just want to share that as you're browsing the web and reading specific papers or preprints, you have your browser extension, and it just notifies you if there are some relevant materials, and you can access and browse them. There's a variety of entry types: it could be something more rich-media that you could never fit into papers, which are very static in nature, or something more textual that summarizes some of the findings and links some of the additional materials. One of the benefits we get from this is that, for authors, this is a completely new and extremely powerful distribution channel that they never had before. Authors are able to link their new papers, new work, and data to their older papers, so as people are reading older papers they are now able to discover newer materials, perhaps confirmations of the older work, or rejections, and authors can just make those available. This is extremely powerful, because we've had references before, but those always point from newer work to older work; now we're finally able to point from the older work to the newer one. So discoverability is improved significantly, and especially, authors can now target their audience, a very narrow audience, very directly, simply because the people reading specific papers are exactly the audience they want to show their new work to. Any additional content, whether author-provided annotations and notes, or any kind of data that you could not fit in the paper, not even on a preprint server, you now have the ability to post and just let it be discoverable by anybody. So for authors, the biggest practical benefit is enabling them to do continued research and earn a salary for their contributions.
For readers, the value proposition is perhaps a more direct one: they are now able to find relevant content directly, and to access things that perhaps not even search engines were able to index and show them. Suddenly all of the grey literature and all of the additional materials, whether it be Twitter threads that everybody found useful or YouTube explainer videos, all of that now has a home where you can actually host it and make it directly accessible and findable. That knowledge repository grows over time and is curated by the community, so its value just keeps growing. We want to make this a tool that is your daily companion, something you use multiple times a day. We don't necessarily see it as a replacement for the tools that exist now; we actually think it works best in tandem with other tools, for example using Google Scholar to find the exact paper you're interested in, but using this browser extension to discover and browse the other relevant materials. The project is a work in progress, but as you've seen in the demo, it's very much alive, and we do have a private testing phase, so if you're interested in getting early access, definitely reach out to us by some of the ways posted here. We are more than happy to hear any feedback, requests, or comments. So yeah, feel free to reach out. Thank you. And yeah, back to the moderators.

Thank you very much for that. So for our last talk of this session we have a quadruple act; I think it's like the Avengers of open science, possibly. We have Cosette, we have Nathaniel Porter, Jenny Kersh, and Matthew Cagle, talking about evidence-based training in transparency, replicability, and evidence synthesis: a pedagogical review. So I will leave it to all four of you to do your thing. Let's see, your slides are up.

Thank you so much, Kat. And yes, we have a lot of people and a long title. So, are we good to go? Yep, you're all good. Go. Awesome. Thank you. All right, so my name is Cosette and I'm the evidence synthesis librarian at the University Libraries at Virginia Tech. I'm joined here today by my collaborator and co-worker Nathaniel Porter, who is the social science data consultant and data coordinator. We are also very lucky to have with us today our two students who worked with us over the summer on this evidence-based training in transparency, replicability, and evidence synthesis: Jennifer Kersh and Matthew Cagle. Before I pass it off to them, I'm just going to give a kind of overview of why we're doing this, knowing full well that everybody at this conference probably already understands, and feels the same way we do, about how important this sort of work is. Primary research is ultimately, in some capacity, intended to have real-life application, but it's best for that primary research to be filtered through a synthesis process, a replication process, or both. Synthesis and replication depend on transparent and open reporting, so it's important that that becomes part of the standard process for primary research. All of this is couched in a research culture, which has been a theme throughout this entire conference, right? The development of a graduate course is our way of combating the meta-bubble that Till Bruckner mentioned during the lightning talks yesterday. There is a lot that we can do in this area, but this is one small contribution we think we can make to enhance this culture.
So, our summary of our mission is to leverage the notion of synthesizing and replicating research as a means of demonstrating the importance and value of transparent and open reporting to budding researchers and research users. We want to both preach and teach, and also live, this kind of value. We have a three-part approach, and we are currently wrapping up our foundational research phase and moving into the course development phase; we plan to pilot our course in the fall of 2022. Today we're mostly going to be presenting the foundational research methods and also some findings. So with that, I'll pass it off to Jenny.

Hi, my name is Jennifer Kersh. I'm a doctoral candidate in counselor education at Virginia Tech, and I served as the summer graduate assistant alongside Cosette and Nathaniel. I conducted a structured review of the social sciences literature for examples of higher education pedagogy that used and addressed transparency, replicability, evidence synthesis, and open science concepts. The final product was an annotated bibliography of the literature I found, which indicated the predominant use of replication-based projects and experiential pedagogical approaches, in the field of psychology, to teach undergraduates. Little literature currently exists, that I could find, discussing or demonstrating practical applications of contemporary pedagogical approaches to instructing graduate-level students in the social sciences about open science subjects, and even less exists concerning pedagogical approaches to teaching systematic reviews and systematic review processes within higher education. Currently I'm working with Cosette and Dr. Porter to write a conceptual article about the development of a graduate-level course in transparency, replicability, and evidence synthesis that would emphasize systematic reviews as well as the other open science concepts we've talked about today.

Hi, my name is Matthew Cagle. I'm an undergraduate student working along with the team. I conducted a course materials review, specifically analyzing course catalogs and collecting relevant syllabi. First, I analyzed the course catalogs of 24 graduate-level, research-intensive institutions, focusing my structured search on keywords around transparency, replicability, and evidence synthesis. Once I had collected the course descriptions that were semi-relevant, I used quantitative content analysis on the descriptions to determine the most relevant courses. What I discovered is that there was a lack of graduate-level courses focused on evidence synthesis and analysis; the courses that did mention the topic were mostly focused on teaching general statistical research methods. With the list of relevant courses, I collected syllabi via emails sent out to the professors, along with an OSF repository that we used to collect a larger portion of the syllabi. And then, with the syllabi, we utilized LDA topic modeling and epistemic network analysis to find patterns within them.

Oh, that went backwards. Why don't you just show the entire slide then; it might be cool and avoid spoilers. So, as Matt said, we used both topic modeling and epistemic network analysis, which is a network-based quantitative approach, part of what's called quantitative ethnography.
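The team's actual pipeline isn't shown in the talk, but the LDA step Matthew describes can be sketched quickly. Here is a minimal Python example using scikit-learn on a few invented stand-ins for syllabus text (the snippets, topic count, and wording are all hypothetical):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for collected syllabus text
syllabi = [
    "students preregister a study and share data materials and code openly",
    "introduction to systematic review and meta-analysis search strategies",
    "the replication crisis questionable research practices and p-hacking",
    "general research methods sampling measurement and statistical inference",
]

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(syllabi)  # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]  # five highest-weight terms
    print(f"topic {i}: {', '.join(top_terms)}")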
There should be one more slide, I think. There we go. We don't have time to go into all the methods, but we did find some interesting findings, as far as a range of genres, I'll call them, in graduate courses related to these three topics of transparency, replication, and evidence synthesis. Under the general heading of primarily transparency- and replication-focused courses, which is a good chunk of these, there are sort of two storylines. One is the science-crisis true-crime story, where there's this replication crisis, nobody trusts science, what are we going to do, and so on. But there's also the flip story, which is that open science is great: maybe this isn't so much a crisis as an opportunity. The top figure you see on the right is the epistemic network diagram of the relationships among those topics; transparency goes with replicability. Sorry, it's small, but you can see that transparency is the hub for everything, and the other topics, experiments, inference, meta-analysis, structured review, are all connected primarily through that one hub of transparency and replication. As Matt mentioned, a lot of the courses that touch on these topics are really just methods courses: methods for psychology or sociology or health science, whatever the discipline is. And there were two sub-genres in here. One is courses that focus on the history of science, how science is developing, and how this open science thing is part of what's getting better. And then there's the more classic methods-in-my-discipline genre: okay, now we know that we can do these other things to make it a little bit better, we can do transparency, we can do reproducibility. So there's some discussion there, and you'll notice in these (that's the second diagram here) there are a lot more connections between the loosely tied concepts: things that are important for transparency, replicability, and evidence synthesis, but that aren't at the core of the story you might hear here on the metascience side. And then there's a limited number of evidence-synthesis-focused courses, on evidence synthesis along with reproducible practices, so you don't lose transparency and replication. And there's a doctor; that was the closest I could get to a uniform, because these are extremely rare courses: we found only a few of them in US universities, and they're primarily in the health sciences. So this gives us an opportunity. As we develop the course, which is the real goal of all this research, this background research helps us think not only about how to frame the course so that it can have the most positive influence on the lives of these young scholars and on science in general, but also about where there might be challenges. So one of our focuses will be to integrate systematic review, meta-analysis, and evidence synthesis with the classic transparency and replication concerns. Here is the information on our OSF project and our contact information, and we'd love to talk afterwards in Remo.

Brilliant. Thank you very much, and thanks to all the presenters in this session; you've all kept to time fantastically, and I'm just going to literally hit go. Please do join the presenters and other attendees in Remo afterwards; I've just popped the link in the chat. And thank you very much for your attention in this session. Our next session is going to be on registered reports; that will be in about half an hour, starting at 11am Eastern Daylight Time, 4pm UK time. So go and join everyone in Remo. Thank you very much for your attention, and have a lovely evening if you're on UK time, a lovely daytime if you're over in the States, or, I'm not even sure what day it is if you're in Australia. Thank you very much.