Well, good morning, everyone, and welcome to MetaScience 2021 and part two of What Is MetaScience? Just before I hand over to Fiona Fidler, who will be moderating this session, I would like to begin by acknowledging the Wurundjeri people, the traditional custodians of the lands I am speaking to you from today. I would also like to acknowledge the traditional custodians of the lands on which each of you are living and working. I pay my respects to them and their cultures and to elders both past and present. I also wish to extend my respect to all Indigenous colleagues who are present today.

Thanks, Fallon, and welcome everyone to What Is MetaScience part two. I'm Fiona Fidler and I'm really excited to be moderating this session. It is for me the highlight of this conference, and I've personally pinned all my hopes on better understanding my own place in this field and the world through the discussion that we're about to have. We have five outstanding panellists in the session who will share their own reflections on what MetaScience is and what its future might be. We have Simine Vazire, who's a professor of psychology at the University of Melbourne; Dashun Wang, a professor of management at Northwestern University; Carole Lee, professor of philosophy from the University of Washington; Nicole Nelson, from Science and Technology Studies at the University of Wisconsin; and Brian Nosek, director of the Centre for Open Science, the organisation responsible for bringing this conference to us. Of course, this session is part two: What Is MetaScience part one ran at the very start of the conference and explored the breadth of methodological, theoretical and disciplinary perspectives being combined in MetaScience, and it asked how MetaScience is connected to and distinct from other fields that study the scientific enterprise. We've seen exploration of these themes in other sessions throughout the conference too. The most recent example I have seen was in the session Bolstering Accountability and Self-Skepticism within MetaScience that Simine moderated. So this session, What Is MetaScience part two, will explore how MetaScience is being institutionalised and how the different branches or strands of MetaScience should advance together or separately. How will we balance competition for resources and recognition with MetaScience's intrinsic need to collaborate? What are the future priorities for MetaScience over, say, the next decade? To begin with, I'm going to invite each of our panellists to make some opening remarks, including reflecting on what our presenters in part one had to say and on other talks that they've heard throughout the course of this conference. Each of our panellists will speak in these opening remarks for no more than 10 minutes, and that should leave us about half an hour at the end for further discussion. So at any point, start putting your questions in the Q&A whenever you're ready and upvote the questions that you like so that they float to the top. Okay, first, I'd like to invite Simine Vazire to share her thoughts. Simine, we've heard a range of presentations on various topics over the last six days of this conference, including in the session you hosted yesterday that involved quite a frank discussion about whether MetaScience needs to be a field at all. So reflecting on the last week, what would you say are the future priorities for MetaScience over the next decade?

Thanks so much, Fiona. Thanks for hosting this.
Yeah, so since that session, which for me went until about midnight last night, I managed to sleep seven hours and also completely rewrite my remarks for this morning. Not just that session, but a lot of the great talks and debates and panels over the course of this conference have provided a lot of food for thought about this question of what metascience is and how we should move forward. And I decided I want to talk today about my experience, how I came to metascience, and how I see metascience fitting in with existing fields like HPS, STS, and scientometrics. My story is definitely not universal, but I think there are many parts of my experience that are shared by some other people in metascience. And I think sharing my perspective might help those in neighboring fields make sense of our sometimes bizarre behavior. So I'm a social and personality psychologist. And until 2011 or so, I was happily doing social and personality psychology. I was pretty interested in research methods, and I questioned some practices in my field. But I was all in. I had just submitted my materials for tenure, I was getting involved in journal editing and in leadership positions in my professional societies. I didn't have serious concerns about the foundations of our field. But over the course of the next few years, starting in 2011, I started to question more and more of the things I'd been taking for granted. My concerns grew even as I took on more roles and responsibilities in the field. And I found myself in the awkward position of becoming things like editor-in-chief of one of the field's main journals, while also becoming more worried that what we were doing wasn't really science, or at least wasn't good science. And today, I'm still fully engaged in my field's conferences and journals and professional societies. But a lot of my time is spent trying to sound the alarm about the problems in our field. I think this trajectory is similar to that of many others here at this metascience conference. Perhaps they were earlier in their career when the disillusionment hit. But for many of us, we started out with a great deal of excitement and confidence in our fields, and at some point began to question some of the core assumptions of the field. I'll call this trajectory the disillusioned scientist's path to metascience, though I want to emphasize that we're here in metascience because we still have hope. We lost some of our idealism, but we still care about our fields enough to want to work to make them better. I know people in many fields who feel this way, including medicine, ecology and evolutionary biology, economics, political science, archaeology, nutrition, sports science, criminology, and more. So let me tell you a bit more about my own motivation for doing metascience. Speaking for myself, the last decade felt like the scales falling from my eyes, or realizing that the emperor has no clothes, or looking around and finding myself in the "this is fine" cartoon where everything is on fire. Pick your favorite metaphor. I was taught, and I myself taught other students, a very simplistic and wrong view of science and scientific progress. I repeated mantras like: there's a scientific method; science is self-correcting; if it's peer reviewed, you can trust it, and if it's not, you can't; science is a process of getting closer and closer to the truth, and we might make wrong turns and missteps, but that's normal in science, and in the long run, progress is inevitable.
As if these things are givens in science, and not things that we have to work for and can lose. And this is related to what was discussed in an earlier session with regard to the present-centered view of history, the assumption that things inevitably progress towards better and better processes and systems. And I'm only just now starting to realize that it's possible for entire disciplines to survive and even thrive while producing mostly noise. I know that will sound insanely naive and ignorant to anyone with training in history and philosophy of science or STS or information sciences, or even just a well-read person. And that's totally fair. It is insanely naive and pretty ignorant. But I think that many, many of us working in science really are that naive. So it's against this backdrop that we can make sense of the epistemic trespassing of many of us metascientists. I'm telling this story because I hope it will explain why we've often been clumsy and oblivious in our approach to metascience. We're accidental metascientists. We're using metascience to tackle a problem that overwhelms us, that saddens us, that frightens us. To use a bad metaphor, it's as if we realize that our field is ailing, in some cases has a life-threatening illness, and we're on a remote island with no doctors, and we're trying to reattach a severed limb, or wherever you want to take the metaphor. We often know that we're not the experts we need. We would love to have the right experts come in and fix things for us, but we don't know how to find them, or whether they would care. And we're also often consumed with fights within our own field about whether a problem even exists or is serious. In this scenario, the experts are people like scholars in HPS and STS and scientometrics. You're all potential lifesavers for us. Not only that, but you're natural allies for us. When members of our own field continue to insist that these serious problems are just part of normal science, or simply deny the possibility that science can go wrong, it's the philosophers, the historians, the sociologists, the information scientists who get what we're grappling with. Indeed, I think many of us would say that we've gotten more support from outsiders, from people in HPS and STS and scientometrics, than from within our own fields. Another important angle to this is that we also get a lot of benefit from talking to each other across fields. The experience of being highly invested in your own field and then being confronted with its very serious flaws, and feeling like few of your own colleagues see or admit those flaws, can be an isolating experience. Connecting with other people who have had similar experiences in completely different fields is extremely valuable. So part of the motivation for getting together, for labeling our field, having conferences, and meeting up, is to be able to talk to others who've had these experiences in completely different fields. We can learn from each other's efforts, share resources and experiences, and create a community around our shared goals. Of course, that's not to say that the typical metascientist comes to it through this disillusioned scientist path. I know it's just one way, and I know there are many people for whom metascience was an early interest and passion in itself, or people who want to improve science just for the sake of improving it, not because they think there's a piece of it that's in bad shape.
But I do think that the fact that a fair number of metascientists turn to metascience because of their frustration and dissatisfaction with their home discipline helps to explain some of the clumsiness you've seen in metascience. And I also think it provides a unique opportunity. Maybe one way to look at the role that people like me can play is that it's similar to the role of whistleblowers. We don't have all the expertise we need to fix the problem, but we have some unique insights and experiences that could be useful and interesting to those studying how science does, and doesn't, work from a more disinterested position. Both approaches have a role to play, and by coordinating, we can learn a lot from each other. Indeed, the people in established fields that study science, like HPS, STS, and information science, have been crucial to my own thinking and development. As an epistemic trespasser who felt like I was searching for grounding for my mounting skepticism and alarm about my field's practices, I was extremely grateful for the people in HPS, STS and scientometrics who welcomed me and helped me. They were a lifeline. We did reading groups together. They pointed me to decades of literature on exactly the topics I was grappling with, told me when I was misusing terms or misrepresenting things or inadvertently stepping into controversial territory. And I want to especially thank Fiona and her research group, especially Eden Smith, Fallon Mody and Martin Bush, as well as Anna Alexandrova and Carole Lee, who's here in the session with us today. They've been extremely generous and patient with me in tolerating my ignorance and gently helping to reduce it. I still have so much to learn. And I'm sure that many of the people who've tried to help me still do facepalms when I try to talk about social epistemology, or if they see me try to tell other scientists what I've learned about the problem with the idea of a singular scientific method, or if a paper I'm on invokes Karl Popper in an overly simplistic way. I can't promise I'll stop saying stupid things, but I promise I'm extremely grateful for the patience and generosity of people who have much more expertise than me and can help me understand what's going wrong with my field and how it can be fixed. To go back to the analogy of trying to cure an ailing field while stuck on a remote island with no experts, what the metascience community provides is a phone line and occasionally even an in-person visit from people with the expertise we need. Of course, people like me need to take the initiative to look for the relevant scholarship that's out there. And we also need to admit that the tools at our disposal are crude and problematic, as Cassidy Sugimoto put it so well in an earlier session. We need to be careful not to further harm the patient in our attempts to cure it, and to look out for potential side effects of our supposed cures. There's a lot of potential for things to go very wrong and for us to ignore and offend the very people who are in the best position to help us achieve our goals. But there's also a tremendous opportunity here to use the knowledge and expertise from longstanding established fields to make a real difference to the many fields in crisis right now. Thanks.

Thank you, Simine. Last time Simine gave this kind of autobiographical talk, the whole room ended up in tears. I'm not going to cry today. I'm not. But that was amazing. Thank you for providing that perspective of the disillusioned scientist's path to metascience.
It's been such a big part of where we are now and what we've done over the last 10 years, and I think it's really important that we all understand that. Next, I'll invite Dashun Wang to make his opening remarks. Now, in What Is MetaScience part one we heard from Cassidy Sugimoto, who wrote a critical review of Dashun's recent book. So this is now in part an opportunity for you, Dashun, to reply to that, but of course also to offer any general thoughts you have about how policymakers, institutional leaders and scientists themselves might use metascience. So over to you. Thanks.

Yeah, thank you. Hey, everybody, it's a real pleasure to be here. So, you know, today I want to focus on actually quite a simple question, one that may in part also illustrate some of the confusion that arose from the part one panel. The simple question is to think about, as we are engaging in this kind of research and making progress, who is listening? Who should be the audience? In other words, I think conferences are inherently gatherings for producers of the field, first and foremost. But I want to orient some of the discussion through the lens of: who are the consumers of this kind of research? Who are the consumers of the field? So I thought what I'm going to do today is to structure my remarks in two parts. First, I want to use my recent book that I co-authored with Albert-László Barabási, called The Science of Science, which is right here. Now, I'm not selling the book; it's freely available online through my website. Don't buy it. Go to my website, you can download the PDF. That was an important consideration in deciding who to publish with. But I want to use this book as an example and tell you a backstory about the book, which will speak to the point I'm going to make. Then I want to quickly transition into discussing solutions, and talk about a few initiatives we're doing right now that could help further the development of the field. OK, so first of all, we're excited to finally bring this book to the community. And the genesis of the book is quite simple, because this idea of turning scientific methods and curiosity on science itself is not new. It dates back to at least the past century, if not further back, with several founding giants in the field. You can think of, you know, the great sociologist of science Robert K. Merton, thinking about the idea of the Matthew effect, singletons and multiples; or Harriet Zuckerman, thinking about the ethnographic approach to understanding careers; or think about Eugene Garfield and the idea of the citation index; or Derek de Solla Price, a physicist turned historian of science, thinking about the ideas of cumulative advantage, the invisible college, and power laws; and of course, Thomas Kuhn, another physicist turned philosopher of science, thinking about the idea of paradigm shifts. And this is also by and large thanks to developments in fields like HPS, STS, and scientometrics, as you just highlighted. And I think the recent surge of interest in this domain basically aims to stand on the shoulders of these giants, but then take advantage of two things that they did not have the luxury of having in their time. The first is the increase in availability of large-scale data sets that now trace the entirety of the scientific enterprise, helping us capture its inner workings at a new level of scale and detail.
And I think people often underestimate just how much advance there has been. The example from my personal experience that I often use is my own lab: in terms of our own research agenda, when I look back at the past five to six years, I feel it's very safe to say that every year our research agenda was driven by a new data set that we didn't even imagine could become available even a year earlier. This was true this year, it was true the year before, and the year before that as well. Earlier this year, we published a paper in Science using data that wasn't even available before last summer. So that is the rate at which new data are becoming available. I want to highlight that. And second, I would argue, is the parallel development in fields like complexity science, artificial intelligence and data science over the past decades, which offer us a wide range of tools that help us make sense of this data with growing accuracy and robustness. Together, I think they can tell a very insightful and complex story about how innovative careers unfold, how people come together to contribute to discovery, and how scientific progress may emerge through disparate factors that were not previously connected. So against this background, this book is one initial attempt to synthesize these rich historical roots and exciting recent developments, and also to point out some promising ongoing and future directions. So here's the backstory for the book, which may be useful for today's discussion. Truth be told, I wrote two versions of the book. One way to think about the first version, which has never seen the light of day, but which was two years of my life, is that it's like a giant review paper. It was written specifically for my colleagues working in the field. And then somewhere along the way, I realized that was the wrong approach, and that there is a much bigger and broader audience for which the insights from the community could be useful. So basically, we then completely scrapped the whole book and started from scratch. That is the second version, the version that eventually got published. And thinking back, I realized that there would be people in my own community who would actually have much preferred the first version and maybe liked it a lot better. But let me tell you the reasons why I think the second version is better, at least for my personal taste. The second version changed the audiences to which this book speaks, and there were three audiences in mind. First are practitioners in science, because I think a broader impact of the community lies in its implications for science policy and decision making. So this book and the insights we offer may be beneficial for academic administrators: think about your department chairs, deans, VPs of research and university administrators, who often face very important personnel and investment decisions. These people are often aware of a profusion of empirical evidence on this subject, they read the journals, but they would like a cohesive summary that allows them to extract signal from noise. So therefore, I hope this book will offer the knowledge and data and help them better take advantage of the insights that our community generates.
And similarly, think about program directors at NSF, NIH or other public and private funding agencies. These people want to support high-performing individuals and teams. And this includes not just civilian but also military agencies, as well as nonprofits and foundations. And all these agencies are also collecting data on themselves, by themselves, partly thanks to the evidence-based policymaking law the U.S. passed in 2018. I myself have advised some of them, including initiatives at the NIH, and through these experiences I think what the community has to offer is to help them think about how to use the data in a way that best serves their own purposes. All right, so that's the first audience. I think the second audience is scientists or students who are not metascientists, who are not in the field, but who are curious about the mechanisms that govern science. I teach an MBA class that touches on the idea of success and failure in individual careers. I often begin my class by emphasizing one core fact, usually as a joke, to remind people that we only live once. The interesting thing about an individual career is that there is no reset button. It's not like a video game, where you say, oh, that turn didn't work so well, let me go back to where I saved last time. It doesn't work that way, no matter who you are. So as a young researcher, when I started my career, I remember there were a series of fascinating but consequential questions that are dear to the heart of every scientist. For example, when do you do your best work? What's the life cycle of creative activity in a career? Are there signals for a scientific hit? What kinds of collaborations tend to succeed? What kinds tend to end in disaster? So for working scientists, I think what the community has to offer is data-driven insights into the inner workings of science, which in some cases could hopefully also inform their career outcomes and choices, and the pitfalls to avoid. And of course, the last audience could be people who are already working in the field, or who want to step into it, given how fast the field is developing; hopefully the book will be helpful there too. These audiences then dictated the organization of the book, forcing us to make often painful, subjective choices about what research we should cover and how to cover it. Right, so now I want to transition into solutions. Recognizing the diversity of audiences we speak to and the diversity within the community, what are the things we could do? First, this is an invitation to help us improve the work. The book is freely available, and hopefully it is a resource for teaching and education. With time, hopefully we can also add slides and data to make it easier to teach and do research with. But importantly, it's all for free. And there is a specific box on the website where you can leave comments and feedback about the things we left out; we know a lot of things were left out, some of them we were aware of and some we were not. So please add comments and feedback to help us continue improving the work. And second, I want to talk about how we build connections within our own community. I want to quickly mention two things. First is to recognize the inherent pluralism in methods within the community.
You know, there are sociologists, economists, psychologists, ecologists, complex systems physicists, information scientists; all these people bring their own disciplines and methods to understanding this problem. But they can often be fragmented, and this fragmentation often makes it difficult for a researcher from one discipline to appreciate the value of work from another discipline, much less to build directly on it. So that's why, about a year ago, a few of us, economists, sociologists, me and others, got together and said, let's put our heads together and actually reorganize the literature along the lines of methods. Let's talk through all these methods: machine learning methods, causal inference, regressions, the beta-hat versus y-hat debates. Let's think through all these methods and illustrate the contributions and limitations of each, so that we can all start on the same footing, as a way to help the community make use of and take advantage of all these methods to advance understanding. So hopefully within the next year we'll be able to get this piece out there. The second thing I want to emphasize, to build connections within our own community, is to think about data, because I think the field will only thrive if everyone has direct and free access to state-of-the-art data sets. So I know there have been a lot of discussions around MAG being discontinued. I'm in constant conversation with them. What we're building right now, hopefully to be released shortly as well, is a state-of-the-art data repository, as well as a code repository, so that everybody in the field will be able to take it and start doing research, and also be able to contribute to it. And so together we can develop resources that make it as easy as possible to do research, as well as to improve the reproducibility of our own research in this field. And the last thing I'll bring up is to think about how we build connections not just within our community but also beyond it, and speak directly to consumers. This is something that Brian and I have talked about before. I think this meeting is fantastic, and we should have more and different kinds of meetings as the field develops. So one of the meetings I want to bring to people's attention is the idea of bringing producers of the field and consumers of the field together on an annual basis. This is something that we have been building for quite a while, and hopefully we'll be able to meet in person next summer, with details to be announced. This is in close partnership with the National Academy of Sciences, and it is funded by the Department of Defense and the Sloan Foundation; thank you very much for the financial support. The idea is a conference that brings together the producers of the field as well as the main consumers of the field, program directors and people in the field, for example, all together in DC on an annual basis. The first conference is jointly organized with Heidi Williams from Stanford University and James Evans from the University of Chicago, as well as Northwestern University. And hopefully we'll be able to do this kind of exchange and bring producers and consumers closer together with this platform. All right, sorry, I took longer than planned. Thanks, and thanks very much to the rest of the panel.

Okay, thank you. So next up, I'd like to invite you, Carole, to comment.
I'm going to leave this quite broad for you and just ask for your general reflections on how metascience is being institutionalized, including perhaps thinking about diversity issues. Thanks.

Yeah, thank you so much for the invitation, Dr. Fidler. And Fallon, thank you so much for starting with the land acknowledgement. I think it's a really important way of acknowledging the ways in which some of us are sitting on occupied lands. I want to add that I'm sitting on the occupied homelands of the Coast Salish peoples in particular. Now, when I think about this question about how metascience is institutionalized, I think about it from a very specific perspective. I'm a philosopher who studies peer review. I've studied efforts to improve the transparency and openness of science through the institutionalization of reporting guidelines. I've also studied social disparities in grant review and funding. And because of the nature of my remarks, I should also disclose that I'm a coordinating committee member for the Transparency and Openness Promotion (TOP) Guidelines and have held NIH contracts to research racial disparities in its grant review process. What strikes me at this moment is that often in the metascience circles I'm most familiar with through my work, a number of central topics are focused on improving how studies are designed and reported so we can learn the truth about some specific hypothesis or effect. How regularly are studies preregistered? Are the data and code available to vet and computationally reproduce the results? Are methods and materials sufficiently detailed to undertake replications? Are there any conflicts of interest that may influence what information is disclosed? And how do these things relate to the quality of reporting, statistical strength, direction of findings, or future replication of the finding? It has been amazing to see the institutionalization of reporting guidelines as a way to improve study design and reporting, things like CONSORT or the ICMJE guidelines in medicine, and the TOP guidelines across a broader domain of fields. I think it's a feat of cross-institutional organization to see so many journals endorse their adoption, although, as the symposium session on the TRUST initiative demonstrated, there's more work to do when it comes to evaluating the degree to which journal instructions, procedures and practices actually conform to those standards. Now, what strikes me is that these conversations focus on concerns about how we can vet the truth of some specific hypothesis or effect. The scope and ambition of these initiatives is broad insofar as science is made up of evidence for and claims about specific hypotheses and effects. But at this moment, we must also recognize that there have been content biases having to do with which hypotheses and effects, which topics, have been valued and which have not. We need to focus on how we can shift our institutional practices to get at a more whole truth, to include and amplify areas of inquiry that our scientific reward system has undervalued and de-centered, but that represent the needs and interests of a broader range of folks in society. It's great to see so many calls for this kind of shift in many places at this conference. And even though there have been lots of calls for this across academia, I'll just focus on the talks I've seen so far here. Dr. Maybank spoke about her research demonstrating that top medical journals do not publish papers whose keywords include race and racism. Drs.
Buchanan, Prinstein, Thurston, and Perez, in their symposium on white supremacy in science, spoke about the need to add new keywords in psychological science to stay current with the evolving language used in diversity science. In my talk, I mentioned NIH's work showing that Black researchers are disproportionately represented in institutes and centers that have lower than average funding rates. In particular, minority health and health disparities have the highest representation of Black PIs, the lowest amount of appropriated funding, and the lowest funding rate. Diego Kozlowski presented work demonstrating that there are understudied areas of science that mainly affect marginalized groups. And Dr. Bassett called for science to explore a broader range of questions authored by folks belonging to underrepresented groups. Now, when I think about these content-related biases, I can't help but rethink some common metascientific techniques for measuring novelty and impact. Novelty is often measured by looking at new combinations of keywords and then seeing which combinations have the highest impact as measured by citations. But it's clear that an important kind of novelty we need to be thinking about, one that helps us see whether science is getting at a more whole truth, isn't just novel combinations of keywords. It's the increased use and inclusion of some keywords. It's the introduction of new keywords that give us language to demarcate whole domains in need of further exploration. And of course, when we think about how we should value different forms of novelty, we also need to rethink the metrics we use as proxies for scientific success or value, because of inequities in commonly used ones like journal impact factors, citations, publications, and federal grants. Indeed, Dr. Bassett presented work demonstrating that racial imbalances in citations are increasing over time. And I presented work on continued funding and scoring disparities at NIH. So I want to underscore Dr. Sugimoto's point in the first What Is MetaScience session, that we have to be careful about how we design and interpret studies using various metrics and tools. But back to the question about institutionalization. We see institutional effort towards incorporating diversity-related metascience as a form of organizational self-assessment. At this conference, Dr. Gibbs outlined NIH's steps and future plans to address structural racism in biomedicine through its UNITE initiative, which includes launching a Common Fund initiative focused on transformative health disparities research and developing a process to gather and publicize the demographics of the biomedical workforce inside and outside of NIH. And there was a terrific panel that just wrapped up at this conference on why it's time to diversify the criteria we use to fund and evaluate research, with interesting critiques of the notion of research excellence that gets invoked by funders for different rhetorical functions, and of the way the concept can unintentionally perpetuate systemic inequities. I'm heartened by the Royal Society of Chemistry's work to bring together 44 publishing organizations to collect data about the diversity of their authors, reviewers and editors, and to set minimum standards to build on. But I want to underscore that representation among editors, reviewers, and authors is not going to be sufficient.
We also need to talk about shifting the contents of science: its keywords, questions, methods, and populations. Drs. Buchanan, Prinstein, Thurston, and Perez recommended this in their talk and in the paper that I'll include in the chat. Likewise, the American Medical Association's organizational strategic plan to embed racial justice and advance health equity also focuses on diversifying the content of science. They hope to be able to measure an increase in, and I quote, the frequency and visibility of research studies published in medical journals that center equity, disrupt dominant narratives, expose harm from racism, and offer paths for healing. I hope that these kinds of concerns about content biases aren't overlooked in the institutionalization of diversity-related metascientific self-study across organizations. I also want to repeat what I mentioned at the last Metascience conference: it would be great for ORCID iD or some other cross-journal database to allow authors to self-identify along multiple social dimensions, to make diversity-related metascientific studies more accurate moving forward. Finally, when I think about conversations about the metascientific study of diversity in scientific institutions, I think of Shirley Malcom, and I just want to take a moment to share my gratitude to her for all of her work supporting diversity, equity, and inclusion in her various leadership positions at the American Association for the Advancement of Science. She, along with Paula Quick Hall and Janet Welsh Brown, wrote The Double Bind: The Price of Being a Minority Woman in Science, a AAAS report on a conference of minority women scientists held in 1975. Even though the report was published 45 years ago, there are so many recommendations and insights there that are unfortunately still relevant today. The report recommends, and I quote, women and minority group members must be utilized at all staff levels and on all advisory and peer-review groups by scientific institutions. Conferees were displeased with the absence of women and minority persons at the highest levels of administration and scientific management in the universities, research institutions, federal agencies and laboratories. We can add journal editorial boards to that list too. The report also recommends collecting data that permits analysis by intersectional identity, for example, not just by race or gender, but by race and gender. These conversations have been going on for such a long time, and this is such a critical moment. As Dr. Maybank said in her talk, it's a monumental time, and sometimes these doors close and these windows of opportunity close. I hope folks will continue to hold these issues close and find ways to support efforts to translate ideas into thoughtful metascience as well as sustained action, so science can eliminate structural racism and other forms of oppression and move towards uncovering a more whole truth. Thanks. And I'll put some references in the chat for folks in case they're interested in following up on them.

Thank you so much for that, and thank you for including those references too; I think there'll be a lot of interest in them. I'm having such a great time; this is wonderful. Nicole, I'd like to invite you to speak next, and perhaps to return to the question of how all of these different branches and strands that we're currently calling metascience might advance together, and how competition for resources can be balanced with a need for collaboration.
And in particular, I wonder if you could maybe comment on similarities and differences, perhaps tensions, between metascientific work being conducted by practitioners who are embedded in disciplines versus metascientific practitioners who are operating in their own domain-independent discipline. Thanks.

Thank you, Fiona. Yeah, I'd be happy to. And I think in some ways this picks up on the question of institutionalization again, which Carole just answered by talking about the institutionalization of different kinds of metascience initiatives. But I'm going to be thinking more about the institutionalization of the people doing the metascience. Where do they end up? At the end of the last panel, What Is MetaScience part one, the discussion turned towards this question of, well, what would it look like to have not just conferences on metascience, or even journals, but say departments or funding streams that really were specifically devoted to this work? Would we want it to look like a proper field in that sense? And so what I'm going to do is to draw on some of my own work thinking about methodologists across disciplines, and on the work of my fellow sociologist of science David Ribes on data science, to describe two different sociological and historical archetypes for what it might look like to continue doing this work, or where it is that the people who do this work have been historically placed. And I'll describe these as two distinct archetypes, even though in practice there are going to be real-world cases that combine lots of elements of each. So some of you may listen to these two models and be like, yep, it me. And other people may be like, ah, those don't fit. But the point is more so to try and put out these archetypes, because they help us think about the different pros and cons of each way forward, as well as what specifically you would need to start building metascience into existing institutional structures. So model one of where metascientific work gets done, one existing model, is to think of or have metascientists as experts in the methodologies of their own discipline. And Simine, when you were talking about your personal pathway as the disillusioned scientist path, I think that's pretty well aligned with what this archetype looks like. What it's looked like historically is people who are embedded in the institutional structures of a discipline, meaning they come up through the grad programs of a discipline; they're involved in the journals and funding structures of a classically defined discipline. But they routinely participate in opening up the black boxes of the things that their colleagues use as tools. What does that mean? It might mean looking at the inner workings of a physical instrument to try and see if it's really measuring what it's supposed to be measuring, while your colleague next door is using it like a ruler to just get his work done. Likewise, in the social sciences this could be something like a non-physical, conceptual measurement tool that people are using to operationalize their research but aren't really questioning; they're basically taking it for granted. So these embedded metascientists are the people who are both doing the work and complicating the work at the same time. They're often producing results, doing science, that look like the typical scientific outputs of their field.
So they're kind of doing the classic work of the field part-time, but then doing this methodological work as a side gig. Although for some people it might grow enough that it becomes essentially their career, the key point here is that they remain really embedded within the institutional structures of their field. So that's model one. Model two is that we could think about metascientists as being domain-independent experts. And here we could think about something like data science as an archetype, or maybe scientometrics I think would also fit. What this looks like historically is the development of techniques or tool sets which can then be applied, customized, and tailored to specific domains. So the idea here is that the metascientist has some sort of tool, approach, technique, or conceptual suite that can then be operationalized to study specific questions. Now, as opposed to the people who are experts in the methodologies of their own discipline, in this model metascientists are embedded primarily in the meta-domain itself, in data science, in scientometrics, and they may work on multiple empirical domains. So they use the tools that they have to touch down on real-world questions in several different fields, although some people may develop really close alliances with a single field and end up spending much of their time there. So this might mean, for example, taking a technique for analyzing a set of articles and then applying that technique to several different fields to make comparisons across fields. And it's often done in collaboration with the help of somebody who knows the specific field and can say, yes, the articles you've selected are appropriate, or help interpret what the results might mean for the field. But the point is that in model two, what we have is a group of people who are largely embedded in or oriented towards a discipline which consists of other people like them, rather than people who are embedded in a classical domain or discipline. So as I said, some of you might see yourselves in these descriptions; other people might feel like they're a little bit of both of these things; that's fine. The point here is not really to sort people, it's to point towards some of the pros and cons of each of these models. So in model one that I talked about, where we've got embedded experts, one of the pros is that it's really great for using that field-specific knowledge and getting the attention of other people in that field, because you have the credibility of having been, and continuing to be, a practitioner. Where it is not so great is that we have weaker links across fields, and methodologists who are embedded within fields don't necessarily or automatically gain the respect or appreciation of their colleagues, because, not to put too fine a point on it, people don't necessarily like the person who's picking apart the tools that other people are using, thereby undermining the work that they're trying to get done. So the question of whether or not one can make a career as an embedded metascientist depends on the specific field itself and its configuration and how open it is to this kind of methodological work. And I have argued in some of my own work that there's a wide array. There are fields that are very comfortable with and open to this kind of work.
There are fields that are so hostile that basically the people doing this kind of work get shoved all the way to the side of the field and maybe hop into a new one, because it's really not working for them. Now, if you think about building this kind of metascience, what you might want are initiatives that keep together methodologists from different disciplines. Like Simine said, provide them opportunities for contact so that they can share ideas and cross-pollinate with each other. You might also want initiatives to try and increase the status of that methodological work within specific fields where it's considered either uninteresting or intrusive, to make sure that people who want those types of careers and want to express those types of interests can actually get their stuff done. So those are some suggestions for what you would do if you want to grow those kinds of metascientists. In model two, having domain-independent experts, one of the pros of this model is that, by virtue of its domain-independent setup, it's already great for having lots of communication across disciplines and for that really high-level view of what's going on in the sciences as a whole, rather than grounding it in a particular discipline. But the cons of this model are that, historically, if you look at fields like data science and how they've gotten this stuff done, there tend to be a lot of problems around deciding what division of labor should take place between the domain-independent expert and the domain expert. Who needs to develop the expertise? Is it the metascientist who really needs to know the field, or can they remain kind of aloof from the whole scene and just need somebody else to catch them up to speed? And with that question about the division of labor between the domain-independent expert and the domain specialist comes a question about whether or not the discipline is getting represented fairly and accurately. And that's something that comes up quite a lot in historical cases around these domain-independent fields. So what kind of initiatives might you want if you want to grow more of these kinds of people? Well, you're likely going to need more robust institutional structures like centers or journals or departments, because this particular career pathway implies less attachment to traditional disciplinary structures. So given that these people are not necessarily strongly allied with a classic discipline, you're going to need something, some box to put these people in. And so thinking about growing a box called metascience might then become a specific goal for the collective of people represented here. The second thing you might need is initiatives to help metascientists either develop or retain this kind of discipline-specific expertise, or at least to develop collaborations with people who have that discipline-specific expertise. So if people are going to become full-time metascientists who are oriented towards others doing that kind of work, they may have grown out of a discipline but lose their attachments to it, and you would want ways to help maintain those attachments; or, as the field progresses, they may come up in the field without any specific disciplinary attachment at all and really may need to form one. So those are the types of things that you might want to think about if you want to grow those kinds of metascientists.
So I offer those, like I said, as two archetypes. There are going to be both kinds of people sitting in the room, as well as blends of the two, but I put them out there as a way of structuring the discussion, so that you can think concretely about what type of initiatives you might want to create as homes for the type of people that are doing this work. And I also will pop a couple of articles in the chat, one from myself and one from David Ribes, that give you a sense of that vision of what methodologists might look like versus what it might look like to have domain-independent experts, drawing from some historical examples. Thank you.

I feel like I've been on a roller coaster during that talk, feeling really worried. Sorry. No, but now I feel calm. I feel like I understood that. So thank you. Okay. You've settled. You've got your two things to anchor your thinking. I think that's a very clear articulation of possible futures, or a blend of futures, ahead of us. Brian, you're last, and with that comes great responsibility. What I'm going to ask you to do is share what have been some of the new insights or perhaps new challenges that have been raised for you during this conference, and how this has shaped your vision for metascience's future.

Thanks, Fiona. I don't think I received appropriate warning that I would be following Nicole. That was really great. And everything was great. I've been participating in this as an audience member and just being filled with thoughts, and it's been really impactful already. So I guess where my comments will be is in trying to consolidate things that I've heard over the last five days. And I think it'll end up being sort of the complement to Simine's initial points that were framed in a very personal way, with me trying to think about how this all gets put together in a more dispassionate, that isn't the right word, in a more general way. So I guess I will answer the question with beliefs that I have now that are strongly asserted and weakly held; there are lots of things that are up in the air, and so it is really worth us wrestling with the various dimensions on which we think about the answer to the question of what is metascience. And so I'll start with the conclusion, which is that I'll assert that metascience is actually a community of practice, not a discipline. And the secondary part of that conclusion is that the feeling of frustration that many attendees of this conference, or more generally, have about metascience, of, oh man, those people from discipline A should really know more about X and Y from discipline B, is exactly what should be happening in metascience. It justifies why we are needed and why you, audience member, should be here. The point being that it is precisely the differences in our disciplinary backgrounds, in our training, in the evidence that we know, in the approaches we take to accumulating evidence and making claims, that give metascience its strength as a community of practice rather than as a discipline. Okay, so that's the conclusion. So let me go back to why we need metascience. I think the problem that metascience as a community of practice can help solve is that there are lots of people working on related problems that are housed in different disciplinary silos. And functionally, this is not news to anyone; functionally, it's very hard to break out of those silos and to gain insight and to collaborate and to share with others.
And so having a community of practice of others that are also working on those kinds of problems fosters a sense of community, potentially of coordination and of collaboration, on how we can pursue and investigate those problems beyond the boundaries of our own discipline, whether it's an observer discipline or a practitioner discipline, as has come up in the conversations already. A social function of metascience as a community of practice is that it can be lonely. There aren't as many people within a discipline that are interested in how the discipline operates as there are people in that discipline. Most people in my field care about psychology and the particular problems they work on that are about psychology, rather than about how psychology does its business. And so a community of practice can provide affirmation of our identity. It can provide a sense of belonging for people that are interested in these problems, these challenges, as Simine so articulately described, that are an abstraction above the actual claims and things that we're trying to understand in the work that we do. And there's an instrumental role of metascience as a community of practice, which is that it can help accomplish shared goals. It can identify and consolidate resources, it can bring people together, it can provide ways to steer attention, et cetera. Okay, so in that context, across all of the things that I've heard in many of the sessions, and Carole raised a bunch of examples from different sessions of insights about how we can think about metascience and people's inclusion and involvement in it and what role it plays, I think about three frames for what metascience is: metascience as meta, metascience as the interaction between descriptive and prescriptive, and metascience as the audience, relating to Dashun's presentation. So let me start with metascience as meta. What I understand from many of the conversations that have been happening across this conference is that part of what metascience is, is a function of the interaction effect of researchers across disciplines. And that common interest is, as I said before and as others have said, one level of abstraction above any particular disciplinary methodology or process. And that abstraction allows us to extricate ourselves from our particular disciplinary focus and ask questions that are recognizable and understandable across the disciplinary boundaries in which we are embedded. And the answers to those questions may or may not be applicable across those disciplinary and methodological boundaries. But the question of applicability is the substance itself of metascientific research. In the language of a classic paper in my field by Tony Greenwald and his colleagues, a lot of research progress is made by condition seeking: under what conditions does X hold? So claims usually start with very broad and general application. I think preregistration should be applied to all research under all conditions at all times because it will improve how we make inferences. And someone says, I can think of a condition where that doesn't hold. I say, oh, I didn't mean that condition. Of course, not that condition. That's not what I was talking about. I mean these other conditions. Someone else says, oh, wait a second.
And it's that process of trying to understand under what conditions a particular behavior, about how scientists do their business, helps to advance how we understand and develop knowledge and everything else. And so a healthy knowledge system will have diverse perspectives, but then also lumpers, those that look for commonalities, and splitters, who say that commonality is not going to apply here, and here's how we need to split it apart. And we've seen lots of examples of that across the various sessions of the last five days. Okay, so that's frame number one: metascience as meta, researchers talking across disciplinary interests. The second is the interaction between descriptive and prescriptive. Metascience has, in my observation, a character of just trying to understand how things work, drawing from those disciplines that are very much saying, I am observing, here's how it's happening, or here's how things could happen, without then injecting, here's how it should happen; and simultaneously bringing in the reform community, who is saying, no, no, this is how it should happen. And metascience is the interaction term between these two, which is: yeah, you have some ideas, you have values here of how you think things should happen; let's see what works. And let's see how those descriptive understandings of the ways in which our discipline operates translate into evidence about whether that actually accomplishes the goals that we supposedly have and share about how science operates and what science is trying to accomplish. So compared with any feeder discipline, with any particular disciplinary interest, metascience tends to be more applied. We want to take this knowledge and actually insert it into how research gets done. It involves more intervention and evaluation: here's how people are doing their work, let's see if it works; or here's this intervention people have tried, let's see if it worked. It has more of the activist, reformer influence on what questions are the questions to ask. And it has more prescriptive points of view on the outcomes: as we observe this evidence, this therefore means we should do X, Y, and Z. And then compared with the reform community: the reform community will say, no, our values are our values. We think everything should be open access because we believe that democratic access to knowledge is fundamental to how we understand what science should be. I don't need evidence about that. That's a simply stated value that I am advocating for in how science should change its ways. And so at the intersection there, metascience offers an opportunity to be evidence-based and question-driven, to try to just understand how things work, informing those efforts of reform to improve how things actually work. Okay, that's number two. Frame number three is metascience as the audience. And I think Dashun raised a lot of these points in his presentation, which is that it's the interaction effect between researchers themselves and the stakeholders in that knowledge. And when we think about disciplines, disciplines treat their primary audience as other members of their discipline, by and large. But if metascience is operating well, then it treats its primary audience as people who are not in their discipline. And doing that changes fundamentally how you think, how you write, how you present, what you present about. And the different audiences that converge in metascience are not just the practitioners in particular areas of discipline or inquiry.
They're not just the observers who are critiquing and understanding how those disciplines are doing their business from an oversight position. It's also the various stakeholders in science: funders, society leaders, publishers, institutional leaders, all of those who are thinking about how is it that we, as people who set the policies, drive the incentives, provide the positions for science, help to make this enterprise operate as effectively, as efficiently, as progressively as it possibly can. And so if meta science became a strong group, a discipline where it's institutionalized with its own journals, with its own departments, all of those features that create greater cohesion, then it will just start to talk to itself like all other disciplines. Susan Fitzpatrick raised this point earlier. The positive of that is that people who are engaged in meta science will stop getting frustrated with others at meetings saying, God, why don't they know this? Because we all will have a shared sense of understanding of a common interest that we in the discipline have. And that would be a damn shame. I'm done. Thanks, Fiona. Okay. Thanks, Brian. Can I just remind people in the audience to please put some questions into the Q&A? We've got half an hour here. We need some discussion topics. But just while we're waiting for those to come in, I'm thinking of Brian's idea of meta science being the intersection between descriptive and prescriptive ideas. And I wonder how that aligns with Nicole's two kind of paths of meta science. It feels to me at least, I think, I think this is what I think, that embedded methodologists are more likely to be prescriptive, right? And domain-independent disciplines have arguably, perhaps at least since Popper, been more descriptive, and do those need a bridge? I think Samine has asked a question as well, which I think relates to this too, which is, do we think that people working in HPS or STS, those domain-independent disciplines, wish that there were more scientist whistleblowers who would talk more openly about messy or problematic ways in which their science is practiced? Samine, I don't know if I've rolled your question into something you didn't want it to be. So I'll stop now and maybe let Nicole respond and then you can ask your question again. Yeah, thank you Fiona for that prompt, because those are exactly the things that I was thinking about with respect to both Samine's question, but also Samuel Fletcher has a question in the chat where he's asking where do HSMT people or STS people kind of fit within these two archetypes that I have described? And my answer was gonna be that I think that we need to consider the axis that Brian just put on the table, thinking about descriptive versus prescriptive, or the degree to which people are willing to be normative, interventionist, prescriptive, in addition to the degree to which they are looking towards other people doing similar kinds of work or looking towards a classic discipline. And to Samine, I think that one of the reasons why it may not always go as well when you approach a sort of HPS or STS person is not necessarily because you don't have the background in the field to be able to sort of speak the speak, but because much of the time those disciplines are not really oriented towards normativity, with philosophy of science, looking at Carol over here, as an exception.
So those disciplines are a little bit more descriptive, and I can use myself as an example here, because what I just did was say, here's two archetypes, have at it. I did not say choose one or the other like Brian did. And so when you run up to the HPS or STS person and say, my field is dying, help it, I'm like, girl, well, I can describe for you what's wrong, but I am the book kind of doctor, not the blood kind of doctor, right? We don't actually have a lot of tools or practice at being interventionist and normative. And this is where I think that philosophy of science is definitely an exception. I work in a department of historians and bioethicists, and one of the main axes of differentiation between us is the degree to which the bioethicists in the department are willing to say, do this, as opposed to, here's how we could think about this. I think there's definitely room for HPS and STS people to not necessarily become more normative but to be more willing to sort of spell out the implications and conclusions of the things that they describe. But I think it's a little bit hard sometimes for HPS and STS people to feel as though they have the authority to say, hey, you should do this as a particular field, because they're not embedded in that field and ultimately they're not responsible for it. So I think one of the partnerships I imagine is, thinking about the whistleblowers, that the embedded methodologists, as people who see the need for reform, can legitimate the need for reform and can partner with people who feel as though they don't have that moral authority to come in and say, clean up your stuff, but who can offer some perspective from their disciplines that might be useful. Carol, you've been pointed to now as the philosopher of science who is willing to... Don't talk nice about me. I think that Nicole's characterization of philosophy is fair. It's a discipline that's not afraid of throwing around normative claims. And one of the risks in that kind of approach is if someone makes a normative claim without being properly grounded in the facts on the ground, which is why I think in order to do sensitive and informed philosophy of science you really have to know how science actually works and how it's practiced, and you have to be listening to practitioners and reading their work and coming to talks like this or conferences like this, for example. So I think it's a fair assessment. It can be done well and it can be done terribly, with a lot of horrible, ungrounded claims. Thanks, yeah, that's really helpful. It's really interesting to think about whether we're maybe sometimes expecting too much or the wrong things out of some of these fields. But I do often have the experience, like especially on science Twitter, where people will have discussions and I'm like, you know this is public, right? Like you know that the public could see it, but also philosophers of science and science studies people could see it. And then I always wonder, are those people like, oh my gosh, all this fodder for our theories, and things like that, or is that relevant? Because I see these things happening more and more in public, like it used to be just in the back rooms and all that. And now people are saying these things that I find quite problematic out in the open, and I'm like, is anyone gonna call us out? Because it's one thing to call it out from within the field, but it really helps when someone outside the field is like, huh, that's how you think about your science?
That's interesting. Sorry, go ahead, Nicole. No, no, go ahead Carol, go ahead. I was just gonna say, when I think about Twitter conversations I actually think of historians of science and think about the kinds of archives everyone's creating from moment to moment. I don't know any philosophers of science right now who are working on that particular text. There may be, but I think that could be fascinating. Can you see Fallon, the historian there, just nodding her head about this? If I never see another tweet of mine in a paper by a philosopher of science or historian of science it will be great. But no, you're totally right. It's like we're having this conversation and these debates and these things are unfolding and there are people watching it. Oh my God, people are watching it, okay. And that shifts the tenor from, we're just talking amongst ourselves, fighting about how it is that psychology or biology or whatever should be done, to, oh, there is context. There are ways in which this particular debate is situated in a variety of different debates, and in a history of how these things have waxed and waned and otherwise. That just is fascinating, as a participant and an observer simultaneously, to see how it all plays out. And to me, this kind of convergence of meta science is the great opportunity to sort of have that conversation together. Much like anthropologists think about how it is that we have participatory research, it feels like a lot of meta science is that. It's like the people who are studying me and I are just having a conversation. And that's great, I'm totally game for that. But one qualifier I wanna add to that when we talk about Twitter, from a social network perspective, it's also important to realize, yeah, we're having these debates, but only your own network is listening, right? So it's also interesting to think about it in that context of echo chambers, and thinking about your own social network who are more likely to see the debates that you put out and respond, especially when issues become polarized and heated. So that's another observation that people often don't think about: you say, oh, we're having debates, people are watching, well, you think people are watching, or some people are watching, but they are a non-random sample of people that are watching, right? So Nicole is watching. Yeah, and I'm gonna have to fess up here as somebody who just tweeted a paper at Brian today that involves some analysis of his own publications. Hey! Oh, but as to the question of what this looks like, you know, to an STS person, to sort of watch this unfolding within specific disciplines, I mean, like you asked before, one of the things that is unique about our discipline is that the prevailing cultural image of science as something human-independent and truth-generating is so strong that it means that STSers, philosophers of science, historians of science, you know, we can still teach Kuhn, right? That's, like, at this point 60 years old, and people are like, what? So in some ways we are really used to the idea that people are not going to, at first pass, believe the idea that science is a practice conducted by humans and is therefore subject to all of the flaws and foibles of the humans that conduct it. Like that very foundational premise is something that's just super hard for people to accept.
And so it's actually really a joy to be able to talk with scientists who are willing to be like, yes, I believe that the fact that you're a human makes an impact on your science. And we're like, girl, great, we can work with that. Because without that fundamental agreement, if you believe that, you know, you can be thoroughly objective and are generating truth, it's really hard to form an alliance. That's a great point. And I don't want to go into my own origin story, because it won't be as cool as how Samine described hers. But, you know, my disciplinary work is about implicit bias. It's about how we don't recognize our own biases and how they influence our assessment of people by age, gender and race. So it's mostly about social identities. But it really is, for me, the root of how I got interested in exactly the issues that we wrestle with in meta-science: how we can deceive ourselves about our own inferences and how we gain confidence in the findings that we have. So I'm right with you on that point. Okay, I think we need to focus on these questions in the Q&A now, because people have been very diligent and put them in. I'm just gonna ask, though, there's a question from Bob Reid in the chat, which is a multiple choice, yes, no, too early to tell question that I think perhaps the whole audience can contribute to answering. If you want to help answer Bob's question, do you think meta-science has made the results from your discipline substantially more reliable? Perhaps answer in the chat for us. Tell us what discipline you're in, and one for yes, two for no, three for too early to tell. We can do a little poll. Sorry, I don't know how to use the real poll thing. We'll just do it in the chat. Okay, now I'm gonna go back to the top of the list. There's a question from Rose, who asks, for those who are still early enough in their career to be kind of field agnostic, what do you suggest is the ideal approach into meta-science, particularly if they wanna stay in the nexus of applied work? There's a bit more to the question, I can keep reading if you need a minute to think about it. So, as the prominence of issues within science, and resultingly the domain of meta-science, continues to increase, I imagine more and more individuals become passionate about it before having time to be fully embedded in any discipline. I think you actually alluded to this too, Nicole, when you were saying, which we've seen happen in HPS as well, that traditionally it was sort of physicists or biologists coming out and then discovering HPS later in their career. But as it becomes more established, it's something people start majoring in early on. Right, so what I'm asking everyone for is career advice for early career researchers. What's the best pathway into meta-science? Well, I think we could combine this with Matt Page's question too, about the risks involved in taking a path that doesn't fit neatly into a box. But thinking about Nicole's two archetypes, I think what would be wonderful, if there was funding and support for it, would be to have people kind of at all stages between those two. So some people who are mostly within a field but have a little bit of training in some of these disciplines that are more meta, or people who are mostly meta but have a little bit of experience within a field. And so I think it would be neat if there was a career path for different kinds of combinations, or for different
stages at which in your career you start being a meta-scientist versus being in a discipline that isn't meta-science. But I do think there are huge risks involved with that. I think that in terms of getting hired in an academic context, being in a long-standing discipline is a huge advantage. Same with getting grant funding and things like that. So until meta-science is its own discipline, I think it is a really big risk. Although I think there are non-academic paths for people who are trained in meta-science that are probably quite promising, because I think the skills you learn in doing meta-science are pretty applicable in a lot of industries and nonprofits and things like that. But in terms of academic paths, I think it is a big risk. So I actually was wondering about that. It's a question within the panel; hopefully I can ask Brian and Samine about this. This is my own ignorance, but I wonder, is there any data on this, sort of the career pathways of meta-scientists? Because in some ways, when you're reflecting on your own story, and part of the backstory of Brian's own career, is that you guys were tenured when you were starting this, but it's been a decade or so. So how has this played out in the field? Because I'm just really curious. And because this is an issue that I care a lot about, I think it's very important to think about career pathways for junior researchers when they engage in this type of research. I don't think there's good clarity here yet. It is still too early to tell. But my perception is that people who come home grown from a disciplinary interest have to tie their meta-science interests to that discipline if they want a job in a department. Because if you can't make a case for how I am actually a psychologist, then why would a psychology department hire you? It's sort of obvious on its face. Like, yeah, no, you're solving problems over there. We solve problems over here. And that isn't to say that people who emerge from within a disciplinary interest can't make those connections. I think they very much can, right? So the people who have worked with me who are more early career in meta-science have grounded their approach to meta-science in issues that are psychological. And so I think it is relatively easy to make a case that my area of psychology is about how scientists think about and do their research. But, at least in my perception, it has to stay grounded in whatever that disciplinary domain is. Psychology has a natural advantage here in that it's studying how people make decisions and otherwise; how a physicist or a biologist would make the case for those connections is not quite as obvious to me. Yeah, and this is why I suggested that in the type one methodologist model, this type of meta-science work for a lot of people is often a part-time gig. Because to retain credibility within their field, they need to also be engaged in producing something that looks like the classical results of their field. And in most cases that I've looked at historically, the field can only sustain so many full-time methodologists, and most of them are gonna be sort of this part-time type of model, which is where it becomes difficult to have sort of a large and flourishing amount of this work.
There's a couple of questions here that are asking you, the panel, to connect even more disciplines together than you have already connected throughout this session, so I'm gonna roll these two together. The first one, from Luanne, asks, what role, if any, can you see for academic librarians in boosting meta-science engagement, community building and visibility? And then one from Samuel Fletcher just after that says, what's the connection between meta-science and digital humanities and the arts? Thoughts on those fields. So academic librarians, digital humanities, arts. Yeah, I mean, I'll say quickly about academic librarians, they've been really, really critical, I think, in the meta-science movement. And, I mean, one of the experiences of loneliness that Brian and I talked about is being the only person or one of the few people in your department that cares about these things, but often there are other people in other departments who feel equally isolated. And one role that librarians have played for me, for example, at my previous institution, UC Davis, was helping to bring us together. And so we literally would meet in the library, and the library would provide resources for talk series and things like that. And they helped us just find each other. And often the librarians, I think, are the ones who know who in each department is interested in these kinds of meta-issues or in publishing and peer review. There are so many domains that librarians have expertise in, and their connections across different parts of the university can help create a community within that university, which can be really important. And there are many, many other roles they'll play, but that's just one example. Yeah, and just to build on Samine's points, my observation is that academic librarians have played a key role in the infrastructure-related and training-related aspects of meta-science and in sharing methodologies across disciplinary boundaries, because the role is fundamentally institution-based. We need to support all of the communities within our institution, and that provides this convenient mechanism for that translation. And with the digital humanities, I think a lot of the issues that come up in meta-science are really about scholarship, not about science. And one of my regrets of eight years past is that we named it the Center for Open Science rather than the Center for Open Scholarship, and the Open Science Framework instead of the Open Scholarship Framework. So I really want to KFC our names so that people don't realize that it isn't from Kentucky and it isn't fried and maybe it isn't even chicken. It's really just COS and OSF. Yeah, I'll maybe add on that, in addition to what Brian and Samine have just said, academic librarians are really awesome about taking the very long view, right? Because they are, kind of by virtue of their discipline, interested in sustainability as a thing, and in thinking about not just will this work for the next five years or can we make this workable in the moment, but what's going to happen in 50 years when technological platforms have moved on? How are we going to sustain this thing? And that long-term preservation orientation can be really helpful in thinking about building not just infrastructures but sustainable infrastructures. So let me add on the digital humanities side, in addition to the librarians discussion.
I also want to add that what we are seeing in recent years, especially the last three to five years, is this growing convergence: as we're studying more and more about scientists, we naturally start to ask the same questions about artists, artworks and digital humanities. And even when we try to analyze the career trajectories of scientists versus artists, this is obviously a very different setting, very different careers, but some patterns we see are surprisingly consistent across these kinds of creative careers. So I think that's another touch point between this community and digital humanities. We actually just published a paper last week thinking about the careers of artists, film directors and scientists, and we found a broadly consistent pattern across all these careers. We had to use different methodologies to understand their career outputs, because paintings are different from papers, but once you start analyzing them along the same computational lines, you actually start to see quite interesting patterns emerge. So in some sense that's another touch point that is now emerging, partly thanks to advances in deep learning, for example, in thinking of connecting all these fields. There's one remaining question that we haven't got to yet, and then there's a question that I want to ask you all to end with, too. So this question is, generally speaking, are scientists open to listening to meta scientists? Carol, do you want to start? I actually think that this would be a great question for the practitioners, the folks who are working within their own disciplines and bringing their meta science back home, if you don't mind me redirecting to other folks. Yeah, go. I was going to wait out Samine, but I could tell she wasn't going to touch her mic. Yes, of course. And no, of course not. A lot of these issues are of general interest to people who are heads down, focused on the problems that they're trying to solve. And simultaneously not at all of interest to people who are heads down, focused on the problems they're trying to solve in their discipline. So it is very interesting, being a person in a field, psychology, and working on these meta scientific issues, to have high variability in engagement with people within the field on questions where I think, oh my God, you have to be interested in this. It underlies everything that we're doing. And my perception is that that is a very small slice of the experience of people in philosophy of science or STS or other domains that are observing this field every day, like, oh my God, what are people doing in this field? They're not paying attention to anything that's happening about this field. So, yes and no. Yeah, I would say I have days where the glass is completely full and overflowing, I can't believe the progress we've made, and days when I feel like the glass is completely empty and shattered to pieces and people are not going to change, or it's window dressing the change. And I just waffle back and forth between those two positions. This has kind of run into the question I was going to ask you at the end, which is a version of Bob Reid's question about do we think that meta science has made results from your discipline substantially more reliable. So I was going to ask each of you to answer that, but perhaps not thinking about making results more reliable necessarily, because I want to ask this of STS and philosophers and all of you.
So the way Bob's written it is about, I guess, scientific disciplines, but let's get answers from all of you about has it made STS better? Nicole, you start, we'll go around. All right, well, I would say that the meta science movement has made my work a lot easier in two senses. In the first sense, in opening people up to the idea that science is done by humans with biases, it has introduced that concept in a much more mainstream way. And second, by putting some data on it. It can be really helpful to have quantitative data, especially data that says, no, no, no, here's the impact of gender, or of having a racialized name, on your grant application, or here's the impact of working within a particular frame and therefore seeing a problem in a particular way. And it is hard sometimes to get credibility with a scientific audience as a person who works largely with qualitative data. And so the meta science movement has really produced a lot of good quantitative data to be able to capture the interests and sort of data wants of that particular audience. I guess I'll say, like Nicole, I think that meta scientific studies provide wonderful fodder for philosophers who are interested in how scientific practice actually works. And as a result, I think there's a lot of terrific research going on in the philosophy of science that has to do with publication behaviors and incentives within science, areas that got a little bit of attention before, but that certainly have gotten a lot more attention with the, I think, increasing engagement with some of these conversations in psychology and medicine. Dashan, I feel like Brian and Samine have already had a go, so you get about 20 seconds now. Okay, so I actually wanna sort of not think about the scientists. I wanna think about human society, and the role we have in improving human conditions, because I feel like in today's society it is very hard to overstate the importance of innovation in driving prosperity and growth. And if what meta science is doing is to improve the reliability of science and to some degree accelerate scientific progress, then I wanna think about the broader roles we have in driving and improving human conditions, and maybe quality of life and standard of living, in thinking about the role of the work that the field is doing. So I just wanna add a point there, going back to my consumer's point of view on who can benefit from our work. Great. We're gonna have to leave it there because I have accidentally let this go over time and there's another session starting right now. But thank you to our wonderful five panelists, Brian Nosek, Nicole Nelson, Carol Lee, Dashan Wang, and Samine Vizier. This has been great. Everything I hoped it would be. And we'll see you at the next session, which Fallon will be moderating. Thanks everyone. Thanks, Fiona. All right. Hello everyone. We will head straight into our next session, which is a series of lightning talks. I do hope you will all stick around. My name is Fallon Modi and I will be moderating this session. We will have four speakers and each will be speaking for no more than eight minutes. So with that, we're ready to start with Chris Aberson. He's a professor and chair of psychology at Humboldt State University. Over to you, Chris. Great, thank you. I'll be talking today about some of my experiences as an editor trying to move a journal toward open science practices.
To give you a little background, I'm the editor of a journal called Analyses of Social Issues and Public Policy. It's a journal that began in 2001 and is published by the Society for the Psychological Study of Social Issues. And it served a need for the society, as the society's other outlet, the Journal of Social Issues, basically only published special issues. Every issue was like an edited volume, so we didn't have any space for standalone empirical articles. In 2017, I became the outlet's fifth editor. In my interview and all of the documentation I submitted, I emphasized open science practices such as open access, the Transparency and Openness Promotion guidelines, open science badges and things of that nature. Pretty early on it became clear that open access wasn't gonna happen, because there are contracts with the publishers that we can't change. But everyone on the committee that hired me was quite enthusiastic about all the other stuff. I became the editor the first day of 2017 and I adopted the transparency and openness standards that day. As per my contract, I had cleared everything with our society's publication committee. They were on board with everything. Everything was fine until about six months in, right before our yearly council meeting, when I got an email from the society president that directed me to remove our journal as a signatory. This was a new issue, so it couldn't be discussed at our council meeting that year, so I had to wait until the next year to have a spot on the agenda. But one lesson that I took away here is that I failed to have an understanding of both the formal and informal power structures within a society. There was one influential person, somebody very high up in the society, who was a critic of open science. And I didn't understand how just that one person could actually derail my work. In the second year, things came to a head at the council meeting, where we had agenda items regarding TOP status for the journal and also whether I would be allowed to continue open science badges, which had just recently begun. It was a very contentious meeting. There were a lot of issues around data transparency statements. People made the point that articles that said data were available were going to be seen as more important or better work than those without available data. And there are very good reasons for not having data available, as I certainly agree. In the end, the non-signatory status on the TOP standards was upheld. I was also directed to retract an editorial I wrote on transparency and openness as kind of my introduction as the editor. Badges were upheld by a single vote. The lesson I learned here was that a lot of people in my own field, and this field would probably be social psychology, simply don't know much about open science, and they kind of perceive practitioners as zealots, as if everything is all or nothing. There were lots of concerns about open data requirements, as I mentioned, but also lots of misunderstanding of the basics. People didn't know what open access was versus open science versus open data. All of these things were mixed up in people's heads. So I think that there was an educational component that I really needed to deliver more on. In the third year, I worked with the Publication Committee to revise the Data Availability Statements as directed by the Council.
This really just ended up in a situation where we changed the requirement that people disclose data availability to language around it being encouraged. The Publication Committee wanted to let sleeping dogs lie on the retraction. They said, you know, nobody reads those editorials anyway. And then very interestingly, a few months later, Wiley announced a new policy for all of their journals: all papers would be required to include an open data statement. So this issue that had been at the core of what I had been fighting for had now been completely resolved. It had become incredibly mainstream. So the big lesson that I took from there is that it's really important to be patient. Change at high levels can be slow. They're often well behind where we want them to be, particularly those of us who attend a conference called MetaScience. You know, they're behind what we want. But the field does catch up. And I saw another example of this recently: the APA, the American Psychological Association, became a signatory on the TOP standards. And that's a real dinosaur organization that's finally come around. All right, so where have we ended up? Well, comparing pre-2020 to 2020 and later, because this is about the time when badges became something that was available for the whole year: the median sample size increased substantially, from 64 to 304. A lot of that is my own focus on statistical power, I'm sure. Since adding open science badges, about half of the articles that we publish have received at least one badge, and we're trending closer to 60 or 70% for more recent submissions. We've been able to introduce registered reports, and our submissions have increased dramatically. We went from an average of 73 a year to, well, these numbers are inaccurate even as of today, an average of about 200 per year. And of course, 2021 is not even done yet. So there's been a big jump in how many submissions we're getting. Anecdotally, some authors have told me that they selected us over other relevant journals because we were the only ones that promoted open science. What happened to my term? My term was about to end, but I was extended for two years on the urging of that person who was most negative about open science. They had really become a champion of my work. Again, change is slow, be patient, listen to those people in opposition and talk to each other, understand each other's views, like I did with the anti-open science person. We found that we agreed on almost everything. There were just very, very, very minor differences. So I really do try to keep that dialogue open. Okay, thank you all very much. Thank you, Chris. That was fantastic. I will now move on to our next speaker, who is Shakya Anand Bambak. Shakya is a PhD candidate in the Department of Psychiatry at Trinity College Dublin, giving a talk titled Cross-Cultural Scale Validation. Ready when you are? Thanks a lot. Yeah, I'm just gonna share my video now. So, okay, please let me know if you can't hear anything. Imagine a bright-eyed PhD student starting off her first year. She's just finished a master's degree that involved the cross-cultural use of multiple scales and now she's eager to spring into the exhilarating world of clinical scale validation. Little does she know, she will end up questioning not only the very purpose and utility of the tool she's meant to produce, but also the foundations of the science to which she has devoted her entire career.
I'm Shakya, a third-year PhD candidate at Trinity College Dublin, and the student I described was, of course, me. Over the past few months, I've faced some challenges while designing a cross-cultural validation of a trauma-related shame scale. While many of these have been logistical, such as keeping my PhD timeline in mind, finding collaborators in Asia and Europe, et cetera, there have also been some conceptual and methodological challenges to my perspective on conducting cross-cultural research and on recruiting from non-WEIRD populations. So in the spirit of igniting a conversation on this topic, here are some of the issues and questions that I wish I had grappled with earlier on in the process of my own study. Sometimes research doesn't start out as cross-cultural. This was the case with my study. Once the seed of having more diverse samples was planted, I let it lie dormant while preparing my study for an Irish population. In doing so, I missed out on having the input of collaborators in India, for example, from the get-go. This input would have been advantageous in a few ways. Firstly, involving researchers from local target populations from the idea stage fosters a fairer relationship between PIs and collaborators, reducing the underlying power imbalance between PIs from largely Western, resource-rich institutions and researchers from non-WEIRD or not so well-funded ones. Decisions about level of involvement, potential compensation, and authorship can be made more respectfully in this way. Secondly, crucial choices in the study's design, such as ethical considerations, adapting scale items to the local cultural context, et cetera, should really be understood by PIs at a meaningful level. This requires time to learn about the cultural environment they will be entering and is where early involvement of local collaborators is key. I mean, imagine suddenly going into a community you've barely been exposed to, and then hoping that they'll trust that you understand their culture. She doesn't even go here! Do you even go to this school? Ironically, if you're not familiar with early 2000s American pop culture, you might not understand why that clip was funny. Okay, so it's not possible for a scale to be valid in every subgroup of every cultural group you sample, but that's no reason not to try and improve. Even keeping the constraints of individual researchers in mind, there's always more we can do to get a slightly more diverse sample in cross-cultural research. The reliance on convenience sampling in so-called non-WEIRD countries has resulted in many studies sampling sub-populations that have WEIRD traits anyway, think schools, universities, and so on. These are educated, often westernized populations who are not part of minoritized groups. So can they really be considered a significantly different sample than, for example, the white Dutch population of the original study? I'm exaggerating, of course, but it's worth putting in the effort to recruit from a wider range of people in a region and not just the easiest to access population, if at all possible. I'd also like to direct you to the talk on the validity of the term WEIRD given by Sakshi Kai and colleagues at the SIPS 2021 conference. On the other hand, if there's little chance that a scale will be adopted within a community in the long term, at least in its current form, then maybe the most ethical course of action is recruiting from populations that have the most likelihood of using a validated scale later on.
In other words, the WEIRD-trait populations I mentioned earlier. I would love to have participants from rural Indian communities complete my validation questionnaire on shame and child sexual abuse. However, given the sensitivity of the topic, particularly in small insular rural areas, and the lack of psychological services there, it's pretty unlikely the participants will end up benefiting from my study. Perhaps the most crucial point I wanna bring up is how do we discuss the utility and post-study uptake of cross-culturally validated material? Well, when it comes to scale validation, we need to consider some sub-questions. What is the end goal of your validation? Is it strictly to statistically test or confirm the validity of a particular scale by replicating the original study? Or do you also aim to encourage the uptake of the scale in a new range of populations? Although we may go into studies unconsciously assuming or hoping that the former will result in the latter, encouraging post-study adoption of a scale probably requires adjustments in how the study is carried out and almost inevitably in the properties of the scale itself. There's tons more that I couldn't include here, but if you wanna continue the conversation, please consider following me at the T-searchers and consider following the Junior Researcher Programme on Instagram and YouTube. Thank you so much for watching. Yeah, that's it for me. Thanks a lot. Thanks, Shakya. I'll just ask you to stop sharing the screen. Thanks. I did. Okay. Is it still sharing? No, I think it's fine, actually. Up next, we have Yuching Cai. Yuching is a Master's of Research student in Developmental Neuroscience and Psychopathology at UCL and Yale University. Yuching is presenting a talk titled Assessing Flexibility in the Measurement of Socioeconomic Status and Meta Research. Over to you, Yuching. Thank you. Hi, everyone. My name is Yuching Cai and today I'm going to present this research about the measurement flexibility of socioeconomic status, which is by me and two other collaborators. So it is meta-research. So what is socioeconomic status? Socioeconomic status, or SES, is the social standing or class of an individual or group of individuals. It can represent the accessibility of resources for different units of individuals. SES has been widely adopted in many different domains of study. For example, in psychology and cognitive science, SES has been found to be associated with many different outcome variables, including mental health, physical health, language development and brain development in children. You may think that socioeconomic status is a quite straightforward concept, but the measurement of it is quite complicated and flexible. To begin with, we can use different indicators or resources to measure SES. The most commonly used ones are education, income and occupation, but you can also use less conventional ones like political resources and subjective SES. Even if we choose the same kind of indicator, the scoring can also be different. For example, when measuring education, we can use levels of education or years of education, that is, a categorical variable or a continuous variable. Another thing to consider is whether to aggregate different indicators into a composite score. For example, the Hollingshead Index is a very popular aggregated SES score, which combines education, income and occupation together. And all three of those indicators can be used as single indicators in other research.
Lastly, we also need to consider the level of measurement for SES in the study. For example, individual SES and parental and family SES are among the most commonly used ones, but on a more extensive level, we can also measure neighborhood SES. For example, here is the neighborhood SES of New Haven, where we are currently living, and on a higher level, we can also measure things like the gross national income of a whole country or region. So the current study wants to evaluate the flexibility of SES measurement and its effect on results in cognitive neuroscience, specifically. We first systematically review different ways of measuring SES in this specific domain, and then we want to reproduce them using two public datasets, that is, the CFPS from China and the PSID from the US. Then we want to evaluate the impact of the flexibility of measurement on the possible outcomes in psychology and cognitive neuroscience. So we first searched the articles and selected the relevant ones and the ones that could potentially be reproduced. We then used variables from the CFPS and PSID to reproduce those different types of SES. Then we evaluated the influence of the flexibility using the variance that can be explained by the measurement itself, using the ICC as an index. And we also calculated the associations between outcomes and SES, and also between different types of SES. Our study has been pre-registered on OSF, and if you're interested in it, you can take a look. So in this part of the analysis, we selected 53 papers which used 38 datasets, and we found that there are more than 40 different types of SES, which is even larger than the number of datasets in this part of the analysis. And we found that about 20 to 30% of the variance can be explained by the measurement itself, and that the correlations between SES and targeted variables, and also between different types of SES, vary a good deal. So here is the correlation matrix of the different types of SES, calculated from the CFPS and PSID, and you can see that the numbers vary a good deal. So what can we imply from the current results? As you may already know, the measurement issue in psychology has been discussed a lot recently. For example, for depression and self-regulation, there are many different ways to measure those concepts. Similarly, for socioeconomic status, the flexibility of measurement has been found in the current study in the domain of cognitive neuroscience. This could be explained by the complexity of SES itself, in that it can be measured by different indicators, but there are also problems in studies using SES as a variable. Many studies arbitrarily choose different indicators and do not provide citations or explanations of how or why they chose those indicators. And this could have an effect on the reliability and reproducibility of findings in the cognitive neuroscience domain. That's all, thank you for your attention.
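To make the measurement flexibility described in this talk concrete, here is a minimal Python sketch of how several defensible SES scores can be built from the same raw survey variables. The column names, the toy values, and the composite weights are illustrative assumptions only; they are not the actual CFPS or PSID variables, and the composite is merely Hollingshead-style, not the published index.

```python
import numpy as np
import pandas as pd

# Hypothetical survey extract; the column names and values are illustrative,
# not the real CFPS/PSID variables.
df = pd.DataFrame({
    "edu_years":  [12, 16, 9, 18, 11],              # years of schooling
    "edu_level":  [2, 3, 1, 4, 2],                   # ordinal level (1=primary ... 4=postgraduate)
    "income":     [21000, 54000, 13000, 88000, 30000],
    "occupation": [3, 6, 2, 7, 4],                   # ordinal occupational prestige code
})

def z(s: pd.Series) -> pd.Series:
    """Standardize an indicator so it can be compared or combined with others."""
    return (s - s.mean()) / s.std(ddof=0)

# Several defensible operationalizations of "SES" from the same raw data.
ses_variants = pd.DataFrame({
    "ses_edu_years": z(df["edu_years"]),                # education scored continuously
    "ses_edu_level": z(df["edu_level"]),                # education scored as ordinal levels
    "ses_income":    z(np.log(df["income"])),           # income as a single indicator
    # A Hollingshead-style composite of occupation and education
    # (the weights here are made up for illustration).
    "ses_composite": z(5 * z(df["occupation"]) + 3 * z(df["edu_level"])),
})

# How much do the operationalizations agree with each other?
print(ses_variants.corr().round(2))
```

Even in this toy example, "SES" names four somewhat different variables built from one dataset, which is exactly the flexibility the talk describes.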
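And here is a rough sketch of the kind of evaluation described above: compare the SES-outcome association across operationalizations, and use an ICC-style decomposition to ask how much a person's score depends on which operationalization was used. The simulated data, the consistency ICC formula, and the reading of its complement as variability tied to the measurement choice are all simplifying assumptions for illustration; they are not necessarily the exact specification used in the pre-registered study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated participants: one latent SES signal, four noisy operationalizations,
# and a hypothetical cognitive outcome. All numbers are made up for illustration.
n = 200
latent = rng.normal(size=n)
ses_variants = pd.DataFrame({
    "ses_edu_years": latent + rng.normal(scale=0.3, size=n),
    "ses_edu_level": latent + rng.normal(scale=0.6, size=n),
    "ses_income":    0.6 * latent + rng.normal(scale=0.8, size=n),
    "ses_composite": 0.9 * latent + rng.normal(scale=0.4, size=n),
})
outcome = 0.4 * latent + rng.normal(size=n)

# 1. Does the SES-outcome correlation depend on the operationalization?
assoc = ses_variants.apply(lambda col: np.corrcoef(col, outcome)[0, 1])
print(assoc.round(2))

# 2. Consistency ICC, treating the operationalizations as "raters" of each person.
def icc_consistency(scores: pd.DataFrame) -> float:
    x = scores.to_numpy(dtype=float)
    n_subj, k = x.shape
    grand = x.mean()
    person_means = x.mean(axis=1)
    measure_means = x.mean(axis=0)
    msr = k * ((person_means - grand) ** 2).sum() / (n_subj - 1)        # between persons
    resid = x - person_means[:, None] - measure_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n_subj - 1) * (k - 1))                  # person-by-measure disagreement
    return (msr - mse) / (msr + (k - 1) * mse)                           # ICC(C,1)

icc = icc_consistency(ses_variants)
print(f"ICC across operationalizations: {icc:.2f}")
print(f"rough share of variability tied to the measurement choice: {1 - icc:.2f}")
```

The two printouts mirror the two observations reported in the talk: the association with an outcome shifts depending on how SES is operationalized, and a nontrivial share of the score variability is attributable to the measurement choice rather than to the people being measured.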