Okay, welcome everyone to this session of four lightning talks, wherever you are in the world: probably quite late at night for some of you in America or Europe, but a nice, bright early morning for us down here in Melbourne, Australia. In these lightning talk sessions, each speaker will talk for maybe five to ten minutes, and we'll wait until all four speakers have finished before opening it up for questions. So if you have any questions, put them in the Q&A or in the chat, or just raise your hand, and we'll get to them at the end. First up, I'll introduce Cooper Smout, who can go ahead and present.

Thanks, Matthew. I'm just going to share my screen. Is that working? Yes? Let me know if there are any problems. Okay, cool. Hi everyone, and thanks for joining, particularly those of you staying up late for this. I'll be giving a quick introduction to Project Free Our Knowledge, which is a collective action platform for researchers.

The project is based on the premise that we're trapped in a giant collective action problem in academia. There's an idealistic future, call it "open science land", that we all want to get to, but under the current system, where people are rewarded primarily for publications rather than for other open practices, it's difficult to make progress towards it. We also know there isn't really a technological barrier anymore. Brian Nosek has a really nice slide, which I'm sure many of you have seen, making the point that we've already achieved the infrastructure and the user interface; what we're stuck at is a cultural barrier at the community level. And we know from research that support for open science practices is high: studies have found over 80% support for open data and open access, but when it comes to actually doing the practices, adoption rates are much lower.

The psychologists in the room might recognize this as being a bit like a prisoner's dilemma, where everybody acts in their own interest and this ultimately hurts the collective as well as the individual. But the key difference is that in the prisoner's dilemma paradigm people aren't allowed to talk to each other, so they can't communicate their intentions to act. We can do that, and we have the internet, which could facilitate it on a global scale. In recent years there's been a precedent for this kind of platform, the conditional pledge platform, and the best-known example is Kickstarter. It works by collecting conditional pledges: pledges by people to act in a certain way if and when a critical mass of support is reached. Kickstarter has funded thousands of projects and raised billions of dollars of capital to get projects off the ground. A less well-known example is CollAction, which applies the same process to behavioral actions, focusing on environmental and social issues. What Project Free Our Knowledge is trying to do is tailor the same solution, one that has proven successful in other spheres, for the research community.
The way it works is that anyone can propose a campaign using our GitHub repository. A campaign is basically a request for people to adopt a particular action if and when some critical mass of support is met. The action could be something simple like posting a preprint, sharing some data, or posting an open review to a platform; basically any open science or cultural practice that you would like to see your community adopt. We then go through a process of developing the campaign on the GitHub repository, and once it's ready we put it on the website and out to the crowd. At that point anyone in the world can pledge to adopt the action if and when the threshold is met, and pledgers can remain anonymous, which protects them from any risk or potential repercussions for their career. Finally, if we reach the threshold, everybody is listed on the website and directed to carry out the action together.

We spent a good chunk of last year developing open processes so that anyone can now propose and develop a campaign through the GitHub repository, and we have around 15 proposals at the moment that are in real need of development. Some of those have recently gained momentum. One proposal is to share your journal-commissioned reviews. The basic idea is that collectively we spend a lot of time reviewing articles, and much of the time those reviews just get wasted and locked behind closed doors. This campaign asks that any time you review an article that is also available as a preprint, you go along and attach that review to the preprint itself. Professor Waltman proposed it, and we're currently developing it on GitHub. Another campaign, which evolved out of the recent OHBM Brainhack, is to share your code in a citable repository: it asks you to make all of the code you use for any upcoming publications publicly available, and to put it in a repository that has a DOI. For this second campaign we're going to let people pledge to take action immediately, or to wait until some larger critical mass of pledges has been made.

The main point is that if you're interested in developing campaigns, if you think there are actions your community could and should be adopting, then jump onto our GitHub repository and check out what campaigns have been proposed, or propose a new idea. We also have a few campaigns live at the moment. We started the platform with some open access campaigns, and more recently we posted a pre-registration pledge, which asks you to pre-register a single study along with 100 of your peers. We're currently at around 75 pledges for the field of psychology, so we could really use a few more to get that over the line. Of course 100 pledges is not a huge number, but the idea is to demonstrate the concept in action and then scale up over time.
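Just to make the mechanics concrete, here is a minimal sketch of the conditional-pledge logic in Python. The class names, fields and the 100-pledge threshold are illustrative only; this is not the platform's actual code, which lives in our GitHub repository.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pledge:
    name: str                 # kept private until the campaign activates
    field_of_research: str

@dataclass
class Campaign:
    title: str
    action: str               # e.g. "pre-register one study"
    threshold: int            # critical mass needed before anyone is asked to act
    pledges: List[Pledge] = field(default_factory=list)
    active: bool = False

    def add_pledge(self, pledge: Pledge) -> None:
        """Record a pledge; pledgers stay anonymous while the campaign is dormant."""
        self.pledges.append(pledge)
        if not self.active and len(self.pledges) >= self.threshold:
            self.activate()

    def activate(self) -> None:
        """Threshold reached: list every pledger and direct them to act together."""
        self.active = True
        print(f"'{self.title}' is now active with {len(self.pledges)} pledgers.")
        for p in self.pledges:
            print(f"  {p.name}: please now {self.action}")

# Toy example mirroring the pre-registration pledge described above.
prereg = Campaign(
    title="Pre-registration pledge (psychology)",
    action="pre-register one study alongside your peers",
    threshold=100,
)
for i in range(100):
    prereg.add_pledge(Pledge(name=f"Researcher {i}", field_of_research="psychology"))
```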
That brings me to a figure I made to capture the grand vision for how this project could evolve over time. The main point is that this is not just a single campaign or a single pledge initiative like those that have come before; the idea is to scale up over time and build on the momentum that each campaign creates. Here's where we're at right now: we have an ambassador network of around 10 people who have agreed to support the open code campaign when we launch it in a month or two. They will reach out to the community and connect with researchers who would be willing to share their code but might not feel comfortable doing so on their own right now. We'll be collecting those conditional pledges, and if and when we reach the activation threshold, everybody starts sharing their code together. That means we're all supporting each other: we can discuss best practices, help each other through the process, and act in solidarity to build a movement in our field.

Now, this first campaign is only asking for a few hundred pledges, so it won't make the practice normative throughout the community. But the few hundred pledges we capture can then be leveraged to increase the size of the next campaign we run. In some future campaign we might collect enough pledges to actually make the practice the norm, and once the majority of people are sharing their code, or doing whatever open science practice the campaign targets, those who are not sharing become the outliers. Instead of seeing a potential risk to their careers, people would see a benefit, because otherwise they would be frowned upon for not sharing their code, or whatever practice we're targeting.

Particularly relevant for this community, what we're trying to do is develop replicable processes that can be used to enhance and improve future campaigns. Previous initiatives, like the Cost of Knowledge boycott and the Peer Reviewers' Openness Initiative, were fantastically successful in motivating change in their domains, but over time the momentum they created got lost. The idea here is that if we can capture these processes and learn from each campaign, analyzing at the end of each one what made the most impact and which strategies were successful, then we can use that information to inform future campaigns so that we don't lose the momentum we've created.

So that's the grand vision. I would love for people to get involved. You can email us, follow us on social media, and of course the main things right now are to pledge and to help develop campaigns if you're interested. With that, I'd like to thank everybody who has been involved in the project and all of our partners who are helping us increase our reach throughout the community. Thank you.

Thanks, Cooper. As I say, we'll take questions at the end, but that's really inspiring work you're doing, and we'll chat about it more then. Next up we have Alex Holcombe, so please share your slides once Cooper's screen share has ended.

Okay, thanks Matt, and thanks to all the organizers. Can you see that and hear me? Yeah, perfect.
So I'm talking about authorship versus contributorship. Everybody knows the term authorship; it goes way back to the 1600s, when a scientist really did, at least ostensibly, work on their own. These were aristocrats, people with time on their hands, who did all the work themselves, and the early publications in the Royal Society journals tend to have just one author. Today, of course, science is quite different, and I think we need to shift our norms for how we attach names to papers in order to accommodate the reality of today's science.

As you can see in this graph, the number of authors per paper has increased dramatically over time. That's natural: when a field advances, you need specialists contributing together in order to get something done. Unfortunately, that's not really the ethos of science I encountered when I went to graduate school and did my PhD in this ivory tower here. At one point I was chatting with a professor. The one skill I had as a first-year graduate student was being able to do some computer programming, and this professor needed someone to program an experiment, so I was excited that I could already contribute in some fashion. But he quickly told me, "Oh, but you know, I don't give authorship to people who do the programming." That experience signaled to me that what's really valued in academia is something called "intellectual contribution", and not skills. So I didn't contribute to the programming of that experiment; instead I went away and focused on learning every skill so that I could do everything myself, a jack of all trades, which seemed to be what I needed in order to be set up as a principal investigator.

This is not a good way to run a system that's supposed to advance, as was recognized all the way back in the 18th century by Immanuel Kant, just as the Industrial Revolution was getting started. He pointed out that if you don't have division of labor, if you don't have specialization, then "where work is not thus differentiated and divided, where everyone is a jack-of-all-trades, the crafts remain at an utterly primitive level." I think this resistance to specialization has been holding back many of the sciences, and we really need to be able to give credit to many different roles.

But if you look at authorship criteria, they're stuck with a writing-based conception of which names should be attached to papers. For example, the International Committee of Medical Journal Editors, who set the authorship guidelines for hundreds if not thousands of journals, say that you have to contribute to drafting the work or revising it critically for important intellectual content. It's writing-based: you can't have your name formally attached to a paper as an author unless you contribute to the writing. They also tend to throw in this intellectual-contribution requirement, which gets used to exclude people who do certain tasks. Maybe if those tasks were menial enough that would be fair, but I've seen it happen that people say, "Well, he was just the only one who knew how to use that machine. Is that really an intellectual contribution?"
So that person shouldn't be an author on the paper, which might be okay by some pure, idealistic view of authorship, but it means we're not giving credit to those technicians, and funders can't see the full range of roles that is needed to get modern science done.

Fortunately, this has been changing, towards what I call contributorship: you indicate who did what, rather than just attaching a list of names to the paper without any differentiation. In plain-text form this goes back quite far; many journals have invited authors for 15 or 20 years to indicate who did what in some kind of little author note. But that falls short of what we need in a modern era where everybody wants to tally up people's papers and their impact factors and so on. Whether we like it or not, we live in a world of bean counters, so we need something machine readable, something that can be aggregated across multiple papers. One example is CRediT, the Contributor Roles Taxonomy, which PLOS and other journals have adopted. You can see it in action here: each author has a number of roles drawn from a standardized list, and when you submit to a PLOS journal you indicate, for each author, what they did. CRediT was developed back in 2014, and it's just one particular taxonomy for signaling the different roles that people have, but I think it helps encourage recognition of a broader set of roles.

In my own case I became a jack of all trades and I turned out fine; I've got a good job. But I hate hearing stories like the one from a technician associated with my department, a neuroimaging specialist who consults for lots of different neuroimaging researchers, setting up the equipment but also advising on various aspects of the design. I asked him during his annual review whether he had seen many cases where he would help researchers at the beginning of a project and then, two years later, see the paper come out without his name on it, and he said yes, that happens pretty often.

Fortunately, it's not just PLOS. A very long list of publishers and journals, and the list I'm showing is already outdated, have rapidly adopted this contributor roles taxonomy so that we can formalize giving credit where it's due. That includes some of the largest publishers, some of whom I'm typically not on the same side as, though in this case I am; it's been rolled out at thousands of journals. So I encourage all of you to go to the journals you're affiliated with and talk about adopting a policy that moves beyond the more antiquated authorship guidelines. That will give more credit where it's due, and as a result lead to better resource allocation: when funders and universities can see, for successive scientific projects, the range of people it took to make them happen, that's the only way we're going to see money come in to better fund the infrastructure and teams that we need.
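To give a sense of what "machine readable" means in practice, here is a rough sketch of contributor-role metadata keyed to ORCID iDs, plus a small function that renders the plain-text contributions statement many journals ask for. The data structure and field names are just for illustration; real submissions use publisher-specific markup, such as JATS XML carrying the CRediT terms, and the ORCID iDs below are placeholders.

```python
# Illustrative record of who did what, keyed to placeholder ORCID iDs.
# The role names are standard CRediT terms; the data structure itself is made up.
contributors = [
    {"name": "A. Researcher", "orcid": "0000-0000-0000-0001",
     "roles": ["Conceptualization", "Software", "Writing - original draft"]},
    {"name": "B. Technician", "orcid": "0000-0000-0000-0002",
     "roles": ["Investigation", "Resources"]},
    {"name": "C. Labhead", "orcid": "0000-0000-0000-0003",
     "roles": ["Funding acquisition", "Supervision"]},
]

def contributions_statement(people: list) -> str:
    """Render the plain-text 'Author contributions' paragraph from the role records."""
    return " ".join(f"{p['name']}: {', '.join(p['roles'])}." for p in people)

print(contributions_statement(contributors))
# A. Researcher: Conceptualization, Software, Writing - original draft. ...
```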
Here's one tool I've been involved with to make this easier for authors; Richard Wynne of Rescognito, who is also here, has another one. Ours is called tenzing, named after one of those people who may not have gotten the recognition they deserved. It's a tool to help research teams plan, within a project, what roles the different researchers involved will take, and then to report that when it comes time to submit to a journal. The idea is that you circulate a Google doc to everybody on your team, and everybody checks off the things they're expecting to do, so there's less likelihood of misunderstandings later on. Then, when it comes time to submit your paper, the tool we programmed produces outputs you can paste into your manuscript, which will hopefully reduce the burden of submitting articles to journals, a burden which seems to be constantly increasing. In summary, traditional authorship has a number of different problems, only some of which I've talked about today, but one of them is that it doesn't reveal who did what. I hope we can all shift towards contributorship to remedy that. Thank you.

Thanks so much, Alex, great presentation and really interesting stuff. I hadn't actually seen this tenzing app you created; it looks like it would be very helpful. Okay, we'll have some time for questions at the end, but for now we'll move on to the next speaker, Alyssa, for whom I think it's quite late in the US, so we really appreciate you staying up late to give your talk. If you can now share... I'm struggling to turn off my screen sharing; the windows are going crazy, so rather than spend time hunting for them it might be better if I just leave and come back.

Yes, give me one second, I'm going to try to show the slides as my background. Okay, can you see me? I wanted to try this out since it's a beta feature. Hi everyone. I'm going to be discussing scientific culture and how it can influence researchers' motivation to share reusable research data. This is one of the studies from work I conducted with Sarah Nusser, who has joint appointments at Iowa State and UVA, and with Gizem Korkmaz, an associate professor at the University of Virginia, and it was supported by the National Science Foundation.

Sharing data publicly is a necessary but not sufficient condition for reuse by new researchers: documenting and processing data for others requires additional time and effort from the original researcher, and the culture in that researcher's specific field or discipline can facilitate or impede their motivation to share data that is both publicly accessible and reusable. I'm going to talk about three cultural factors that we identified through qualitative interviews with 20 researchers from various scientific backgrounds, including biology and astronomy as well as psychology and sociology. We used a grounded theory approach to analyze the data; I'm not going to go into detail about that, but if you have questions, feel free to ask and I'm happy to answer. What I want to talk about are the cultural factors and some of the findings.

The first influential factor is the practices and attitudes of notable researchers in the field. A researcher in linguistics, who was the director of one of the main repositories in that field, noted that they had commitments from some of the biggest names in the field, the most notable researchers, with really big projects.
So when they got that data into their repository, it gave the repository a lot of credibility, and that motivated other people to share their data there too. I want to note that the repository had high standards and expectations for the shared data, but because notable researchers with big projects were sharing there, it motivated people to take that extra time and effort as well, so that their data would actually be reusable. On the other hand, notable researchers can also impede motivation to share reusable data, or any data at all. A sociologist noted that many notable people with incredible careers have really valuable data, but they just sit on it and publish paper after paper for themselves. That's the model of success they've demonstrated, but it's not a model of success for sharing reusable data.

The second cultural factor was the ability to receive credit and recognition for sharing. One researcher who was able to receive that credit and recognition was not a faculty member, unlike most of our participants; they were a data science director, so they had a slightly unique position, and it was designed so that they could get credit for the number of people who used the data they shared and for the data-management work they contributed to, in addition to the typical or standard forms of academic credit. That motivated them to make research data reusable, because it was something they got credit for. (Let me see if I can move myself so I'm not blocking this.) At the other end of the spectrum, we had a researcher in the bioinformatics and genomics field who noted that, similarly, people do what they get credit for, and that hadn't included sharing data: they focused their effort on publications, grants, and other ways to support their lab, and being a good member of the scientific community by sharing data for others didn't give them immediate benefits, which lowered their motivation to take the time and effort to share reusable data.

The final factor was the field's norms or expectations around data sharing, which can be communicated in a variety of ways and by a variety of sources. Another researcher in the bioinformatics field noted that the journals in that field mandated sharing in certain repositories and mandated releasing software under an open source license, and the funding agencies were pretty strict. All of that communicated to them that sharing was expected, that it was a norm, and it motivated them to take the time and effort to share reusable data. But we again had a sociologist who said that the norm is to hoard the data you collect and sit on it, and I thought this part was crucial: the expectation was that you own your data, that it doesn't belong to the broader scientific community.

Now I want to highlight that the same field can have aspects of its culture that facilitate sharing and other aspects that impede it. Of course, fields with more impeding aspects, as we see with sociology, typically have less robust sharing practices than fields with several facilitating aspects. But most of the researchers we talked to felt they were in a field that was still developing robust sharing practices, so they were more likely to report mixed cultural influences: some aspects supported sharing, some hindered it.
So just to wrap up, I want to summarize that we looked at just a few of the cultural factors in scientific fields that can influence researchers' motivation to share reusable data, and that in turn affects the meta-scientific studies that reuse those data. Federal mandates can ensure that researchers reach a minimum level of sharing, but in order to really foster reusable data you have to foster a culture that motivates the extra time and effort it takes. And that's all I've got.

Great, thanks so much. We'll move on to the next speaker now, who will be Bob Reed; you can share your slides, Bob.

Well, thank you very much for having me here; let me get my screen up. I can't do the talking-head thing that Alyssa did, which I really wish I could, because that was pretty cool. Okay, so my name is Bob Reed, I'm at the University of Canterbury, affiliated with a research group called UC Meta. My talk is a little different from most: most presenters talk about research they've done, whereas this talk is really pitching an idea in the hope that other people will take it up. It's entitled "Why aren't replications cited more? Why don't we just ask?"

It's built around a couple of facts. The first fact is that replications don't get published very much, certainly in my field, and I think that's been demonstrated for a number of other fields too. This bar chart tracks published replications in Web of Science economics journals, and don't let that increasing trend deceive you: these numbers are minuscule. In any given year you would be unlikely to see more than 30 or 40 replications published in Web of Science economics journals, against about 40,000 papers published in those journals every year. So this is a tiny, tiny fraction. And of course that raises the question of why more replications aren't published. There are many answers, and it's still an unsettled question, but one answer you hear a lot is that replications aren't cited, and since journals want to protect and upgrade their impact factors, their editors are not too keen to publish papers which are in general not likely to get cited.

I've actually got some data on that from one of my colleagues, Tom Coupé, who is also at Canterbury. He took a set of replications that we maintain, 300 of them, and matched them with the corresponding original papers. He then followed the citations each replication received after it was published and compared them to the citations the original study was receiving over the same period. I won't go through all the numbers, but original studies are cited about nine to ten times more than replications, even after the replication is published.

So that's a problem: why aren't replications being cited? That's really the question I'm throwing out there, hoping somebody picks it up. There are a lot of possible reasons. One is that people who cite the original study were simply unaware that the replication was out there: they want to mention the influential papers in their discipline, they don't know about the replication, so they just don't mention it. That's one possibility.
Another possibility is that people want to cite only papers that appear in top-ranked journals. You are the company you keep: if you're writing a paper and all your references are from low-ranked journals, you're guilty by association. So people tend to cite papers from highly ranked journals, and sadly, while there are exceptions, replications in general get published in lower-ranked journals, at least in economics. That could be a reason authors don't cite the replication. Another possibility is this: you cite a paper because it has something to do with your topic, which means the author of that original study may be a potential reviewer of your paper. And, as frequently happens, perhaps the replication did not confirm the original study. So you're an author citing the original study, you're aware a replication was done, but it was a negative replication. Are you going to put that in your paper knowing the author of the original study might be a reviewer? I can see why you might not want to; you don't want to get offside with a potential reviewer, so you stay away from that sensitive topic and don't mention the replication. And of course there are other reasons as well.

So how would you do a study like this? We have a really nice archive at The Replication Network. It's publicly listed, we update it pretty regularly, and we currently have 509 replication studies there, all identified and easily located. The idea is that you would take these replication studies, match them to the original studies, then find people who cited the original after the replication had been published but did not cite the replication, and survey them. We'd want to survey the people who cited both as well, but we're most interested in the ones who did not cite the replication. It's a very doable project: all the replications are out there; somebody just has to find the citations to the original studies.

But if this is such a great idea, why don't we do it ourselves? There are two explanations. One is money and funding, but probably the more important one is that my little group just doesn't have the expertise to run the kind of survey that would contact academic researchers. Obviously you don't just come out and say, "Hey, why didn't you cite this paper?" You want to be nuanced and sophisticated in how you get this information, and we don't feel we have the expertise to do that. But I think it's a really important question, and we'd be really happy to help. We have the data, these replication studies all matched to their originals, and we'd be glad to give it free of charge to whoever is interested in working on this, and to help in any other way that we can.
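For anyone thinking of picking this up, here is a minimal sketch of that matching step in Python, assuming you have already exported citation links from Scopus or Web of Science into flat tables. The column names and table layout are purely illustrative; they are not the databases' export formats or our actual pipeline.

```python
import pandas as pd

def uncited_replication_pool(pairs: pd.DataFrame, citations: pd.DataFrame) -> pd.DataFrame:
    """For each original/replication pair, find papers that cited the original
    after the replication appeared but never cited the replication itself.

    pairs:     columns ['original_id', 'replication_id', 'replication_year']
    citations: columns ['citing_id', 'cited_id', 'citing_year']
    """
    rows = []
    for _, pair in pairs.iterrows():
        cited_original = citations.loc[
            (citations["cited_id"] == pair["original_id"])
            & (citations["citing_year"] >= pair["replication_year"]),
            "citing_id",
        ]
        cited_replication = set(
            citations.loc[citations["cited_id"] == pair["replication_id"], "citing_id"]
        )
        for citing_id in cited_original:
            if citing_id not in cited_replication:
                # These are the authors you would want to survey.
                rows.append({"original_id": pair["original_id"],
                             "replication_id": pair["replication_id"],
                             "citing_id": citing_id})
    return pd.DataFrame(rows)
```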
And why would a person want to do this? Well, I personally believe that replications are the single most effective way of addressing the reproducibility and scientific integrity problems in the literature. If somebody writes a paper and they know there's a good chance that someone is going to go out there, replicate their study and publish it, that produces a very strong incentive to make sure your work is reproducible. So I think replications are really important, and the fact that they're not being published and cited is a problem. If you agree that replications are important, then we want to fix that problem, but we can't really fix it until we know what the problem is, why people aren't citing them. Then perhaps we can come up with some solutions for how to improve things. And that's it. Thank you very much.

Thanks so much, Bob, great presentation. I guess that also serves as an advertisement for linking up with other people who might embark on this survey with you, so hopefully we find some people who can help out. I want to thank all the speakers in this session so far; it's been really exciting, interesting and varied work. Alyssa did have to run out, so if anyone has questions for her, put them in the chat and I'll make sure I email them to her and put you in touch if that would be of interest. For everyone else with questions, raise your hand and I can let you speak, but I know we have some in the Q&A, so I might pick out some of those to start.

I'll start with this question from Jenny for Cooper: "Hi Cooper, thank you for a great talk. You mentioned previous initiatives that have had real-world issues sustaining momentum and activity over time. Can you comment on why this has happened and what can be learned?"

Thanks for the great question, Jennifer. The short answer is that we don't know, and I guess that's part of the goal of Free Our Knowledge: to try to find out and use that information to inform future campaigns. I can take some guesses, though, as to why momentum doesn't tend to be maintained with these initiatives. Probably the most obvious example is the Cost of Knowledge boycott, which has around 17,000 pledges right now and started in, I think, 2012. One potential reason these campaigns don't maintain momentum is a lack of incentive or reward. There may be ways we could highlight people's pledges and encourage them to keep pledging in the future; one idea we're kicking around at the moment is publishing pledges in a journal, so that people actually get recognition for the pledges they take, which would help incentivize those behaviors. Another problem is simply that everybody is busy. Academics are busy people, and typically these initiatives are started by one or a few big figures in a field who, over time, get distracted or have other interests they prioritize, and they move on from promoting the initiative. A solution to that might be building a community around these ideas rather than relying on one or two people to drive the campaigns. And I think another problem is that all of these initiatives tend to get started as their own separate thing, so each one has to build up a community, set up a website and establish a mailing list, and ultimately that means all of the momentum is siloed within that individual initiative.
And I guarantee that a lot of the people who have signed the Cost of Knowledge would be interested in signing other pledges that already exist, like the Peer Reviewers' Openness Initiative and so on. So I think a solution to that problem is to try to bring all of these pledges under some kind of common banner, some kind of common format, so that we can capitalize on the momentum that each campaign creates and feed it into future campaigns down the track. I don't know if that answered your question, but I've tried to take a stab at it.

Okay, I see a hand raised from Alex.

Just to build on what Cooper was saying about communities: a natural community around some journals is the scholarly society that actually publishes them, and if we reflect on where reforms have been most successful, it's sometimes because there's institutional support from a scholarly society. They form an open science committee, as the APA and the APS both have, and that results in reforms at the associated journals. So I think we should build on the opportunities we already have; although many of us are unhappy with our scholarly societies, they seem to be the only way to get traction with certain journals. And of course we've also got these larger institutions forming, like the UK Reproducibility Network, which we're trying to start in Australia as well, and things like the DORA declaration. I think we should try to latch onto these existing institutions and grow them.

Yeah, 100%. Imagine how powerful we could be if even a single society's entire membership agreed to act in a certain way; that would be incredibly powerful. But all of these communities are currently not coordinating their actions in an effective manner, and I would absolutely support and love for societies to get involved.

I see a couple of questions directed at Alex that you've already answered in the Q&A, which I assume people can see; if not, please let me know. But one thing I was wondering, Alex: what do you think it will take to get essentially uniform adoption of this contributorship model? People in the past have advocated for turning papers into something akin to film credits rolling at the end, and that idea has been raised several times; it raises its head and then disappears. What sort of momentum needs to build, and what do you think can actually make it happen?

After a long interval in which, as you say, those ideas were raised without much progress, this movement has for some reason now gathered a lot of steam. In that list of publishers I presented there are hundreds if not thousands of journals adopting the CRediT taxonomy, and NISO, the National Information Standards Organization in the US, is turning it into an official standard. So it really does seem to be happening. But I would say that, unlike almost all the other science reforms I've been associated with, the danger here is more that it may be adopted too quickly, in the sense that we need to go to our scholarly societies and journal editorial boards and make sure it fits what should be happening in our discipline, and shape its future. NISO is working on policy, and I encourage you to check out their website.
And you mentioned getting it uniformly adopted; I actually don't think it should be uniformly adopted, because the CRediT taxonomy, at least, isn't best suited to certain disciplines, and we need a lot more development work on it. It's a difficult balance: we need a taxonomy that allows all the bean counting and tallying across papers, which is really key to this even if you're not into counting the beans, and at the same time we need flexibility, because scientific roles will keep changing as disciplines are created and boosted; bioinformatics, for example, has totally changed how people work in biology. So it's a constantly evolving beast, and we need to make sure it keeps evolving.

I just want to bring in Jennifer. I'm going to allow you to talk, because you're raising lots of good questions and comments, rather than me reading out your words.

Oh, thank you. I hope you can all hear me. Look, Alex, as a laboratory scientist I've always tried really hard to include everybody who's involved in laboratory research, where there are often a lot of unsung heroes. The issue my question pertains to is that there's one problem of people who were involved not getting acknowledged, but then there's the other tricky issue of people who weren't really involved getting acknowledged. That can be a really hard thing, particularly for early career researchers to navigate: trying to make sure that the people who did the work get credited and the people who weren't involved don't. That's kind of what my comments have been about, so anything you can add would be really great. Thank you.

It sounds like you're talking about what's sometimes called honorary authorship, or even worse, ghost authorship. That's very common in certain contexts and certain disciplines; for example, the practice of the big lab head or institute head who says his name needs to be on every paper. I've been associated, not directly but through labs like that, with cases where a postdoc is hired and told that every single paper that comes out of the lab is going to have the lab director's name on it. And they actually have a reason for saying that: in that lab I think there were something like 14 postdocs, and the only way for a lab that size to be successful was for the head to be constantly writing grants all the time, and the only way the lab was going to get the next years of funding was if his name was on those papers. Because of system pressures like that, I think we can't win against honorary authorship; in fact, if we can't beat them, we have to join them. And I think CRediT actually provides an outlet for that, because it has a supervision category and also a funding acquisition category.
So it actually provides a way for the many researchers around the world who have, in a sense, been lying for a long time, having their names on papers even though they don't fulfill the authorship criteria of the journals they publish in, to attach their names to papers in a way that doesn't misrepresent their contribution. And maybe then funders will realize that they should be valuing this role of bringing together an incredible research environment and writing the grant applications that funders expect; that is a real role in modern science. If you look at authorship criteria like the ICMJE's and others, they keep adding paragraphs saying you shouldn't do honorary authorship, trying to combat this behavior, but I think it's a losing battle, so we have to incorporate it and recognize it within our systems.

Still on this topic, there's another comment from Richard. Richard, would you like to speak to that point?

Oh hi, thank you. Well, it's Friday night in Boston and I've had a beer and a half already, so I hope this makes sense. The key point Alex made was that for CRediT to be useful, it needs to be aggregated across journals, and in fact the way a lot of journals are implementing it today, they're actually destroying the data, in that they just print initials and then the credit roles associated with the initials. From a data science point of view, that's useless. So associating CRediT roles with ORCID iDs is, I think, the minimum needed for a useful implementation. It's just a comment rather than a question. Thank you.

I'm going to bring Bob into this; I notice there's a question here for Bob from Michael, and I'll allow you to talk so you can ask it yourself. Michael, are you still there? Well, I'll read it out for you. Michael asks: since replications always come after, sometimes long after, the original publication, couldn't that be the explanation for the higher citation rate of the original studies compared to the replications? And is there a way of controlling for that factor, time since publication, in the work you plan to do?

Well, no, Michael, but I want to thank you for planting that question, because it's a great lead-in to a talk I'm going to give next weekend, where we actually do that. It's a slightly different topic, but this trick of matching replications to control papers is not a simple exercise. Again working with Tom Coupé, that's what we do: I think the final list is some 400,000 papers that we match with the replications and with the originals, and we follow them. The point Michael is making is that how long a paper has been out is an important factor in how often it gets cited, and what we do is control for that with some very careful matching techniques. I'm not going to go through the whole process here, but the point is well taken. That's why some of these questions are actually hard to answer, such as whether replications are really cited less than original studies: to answer that you have to know the counterfactual. What is the paper the journal would have published that is not a replication, and that would be a fair comparison for the replication study, to see whether it would get more or fewer citations? Those are not such easy questions to answer, but they can be answered. The trick is that we have Scopus and Web of Science, so you can cast a huge net, pull lots of papers, and with some hopefully halfway intelligent, sophisticated matching procedures really try to identify the counterfactuals well. We'll go through some more details next weekend, but it's a great question and a really important thing to consider.
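As a toy illustration of that kind of matching, and this is not our actual procedure, which is considerably more careful, you could pick for each replication a handful of control papers from the same field and publication year, so that citation counts are compared over the same period after publication. The table layout below is purely illustrative.

```python
import pandas as pd

def match_controls(replications: pd.DataFrame, pool: pd.DataFrame,
                   n_controls: int = 5, seed: int = 0) -> pd.DataFrame:
    """Toy matching on observables.

    replications: columns ['paper_id', 'field', 'year']
    pool:         columns ['paper_id', 'field', 'year'] (non-replication papers)
    Returns up to n_controls candidate counterfactual papers per replication.
    """
    matched = []
    for _, rep in replications.iterrows():
        candidates = pool[(pool["field"] == rep["field"]) & (pool["year"] == rep["year"])]
        take = candidates.sample(min(n_controls, len(candidates)), random_state=seed)
        matched.append(take.assign(matched_to=rep["paper_id"]))
    return pd.concat(matched, ignore_index=True)
```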
Thank you; looking forward to the presentation next weekend.

Hi, I have a question that's probably best for Whitney and Alex, and it's really a combination of both of your talks. It seems to me that a lot of the disincentives Whitney highlighted in her talk come down to inadequate recognition for, say, an open data set that you might provide. One of the examples was someone who squirrels away their data and then gets multiple publications out of it, so that's a disincentive for sharing the data set openly. And what Alex's talk is about is trying to improve the way we give credit to previous work. So I guess I'm wondering whether there's a crossover there, because if we were to start rewarding not just people who write code but also people who have contributed data to a study that we run, then it might be a way to overcome that obstacle and give recognition to people who actually make their data sets open. Do either of you have comments on how those mesh?

Just to check: it was actually Alyssa who gave that talk, and she's had to leave; Whitney is our amazing co-host making all of this happen.

Oh, sorry. To Alex, then.

Yeah, well, in my mind your comments bring up two roles in contributing data: one is collecting the data, and the other is actually participating in a study. For the first, there is a place in CRediT; maybe it's not broken out as finely as I might like, but there's the investigation category as well as a data category, data curation I think it's called. So that helps recognize the people who are focused on data collection, though of course that's within a larger paper that would probably only be accepted if it has a lot of other components. But there has also been this rise in data papers. I'm associated with one such journal, the Journal of Open Psychology Data, which hasn't been successful at all, but the idea is to publish data sets, and in some fields that has become much more part of the culture, with trials and so on. I don't know, Cooper, whether you were also thinking of the other side, actually participating in studies and contributing to science that way, for example in citizen science. Some fields have harnessed that pretty effectively, ornithology say, with Cornell University in the lead there, and they do name participants in their scientific publications or on their websites. This could go so much further; really we ought to be naming everybody, but the hard part, maybe going back to Jenny's question, is deciding on the threshold and naming people in a way that is actually going to help them.
Just to respond to your first point about publishing data: it seems to me that this is the key problem right now. If you publish a data set, the most you can get in the future is a citation, and one citation is not a big deal, whereas an authorship down the track obviously is a huge deal. So there's a disjoint between the reward you can get for publishing data and what you could get by keeping it to yourself, where you could potentially accrue authorships down the track. Obviously the systems we have were developed in pre-internet eras, but there must be a system, moving forward, where we can give appropriate recognition to a data set, so that even if someone doesn't contribute to the writing of a paper, if their data proves integral to a future study then they get some kind of authorship or contributorship credit.

Well, do you feel that a citation to the original paper doesn't handle it? I agree that currently it doesn't do the trick; the first paper that published data from a data set should be cited, but as you say, those data sets aren't valued enough. So I feel we need to shift how much we recognize this in our institutions and our grant funding systems: a citation to an original data set that leads to many other papers should be a more valuable citation, and that should be a more valuable paper. Take the Hubble Space Telescope: the people who first collected data from it produced a data set that has led to hundreds or maybe thousands of papers, but our citation practices perhaps don't reflect that very well. Maybe having a typology of citations is one way to address it.

I think what I'll do is stop us there, because we're just coming to the end of the session. I'm glad we did the discussion this way rather than directing everybody elsewhere, but I'll also give that link again for everyone who wants to continue the chat with the speakers: there's a link in the chat to the networking rooms for metascience. I want to thank all the speakers again for a great session, a really informative and interesting discussion. We're now going to have a half-hour networking and coffee break, and we'll be back here in half an hour for the final session, which is one of the highlights of Metascience 2021: the symposium titled "Reasonable, questionable or inexcusable: do we need to do more to protect academic publishing against editorial misbehavior?", with speakers Daniel Hamilton, Rink Hoekstra, Jimmy Baba and Simine Vazire, moderated by Fiona Fidler. So I hope to see you all back here in half an hour, and thanks again for everyone's great participation.