Okay, let us begin. Thank you, everyone, for coming to this event. My name is Brian Nosek. I'm Executive Director of the Center for Open Science and I'm on the faculty at the University of Virginia. This symposium is a preview event for the upcoming virtual conference Metascience 2021, which will occur in September. Feel free during the course of this event to use the hashtag #metascience2021 if you want to chat about this on social media, and sign up for information about that meeting at the link that Claire will provide in the chat if you want to hear more about topics like these coming in September.

The title of this symposium is "A Critical Analysis of the Scientific Reform Movement," and today we will have five presenters with very diverse perspectives analyzing reform: its conceptualization, its approach, its impact, and its implications. As a social system, of course, science is always evolving and reforming. The last 10 years have been a particularly intense period of reform, with an emphasis on open scholarship: open access, preprints, preregistration, sharing of data, materials, and code, open review, and examining the dysfunctions in the research culture around incentives, rewards, and inclusion.

Reform is usually motivated by good intentions and idealism: we can do better, right? We can improve, we can fix things. That idealism is a strength for motivating efforts to change and for rallying stakeholders around common causes. But it's also a risk, blinding and blocking appreciation of the unintended consequences of change and of who is affected by those consequences. For example, the ideals of reform might be superficially conceived in theory and never have a chance to meet their intended aims. The ideals of reform might be narrowly conceived and not translate well across the research domains and methodologies to which reformers wish to apply them. The ideals of reform might be conceptualized inclusively but fail to take into account the full range of identities, circumstances, and points of view to which they're intended to apply. The ideals of reform might articulate the positive implications of the new behaviors but fail to anticipate the negative consequences; it is very rare that there are no tradeoffs between sets of practices. And the ideals of reform might be perfectly well conceived in theory but fail in their implementation, their translation to practice. In culture and behavior change movements, knowing what to do is the easy part; developing an effective strategy to start doing it, do it well, and make it sustainable is the hard part.

All of these challenges inevitably play out in scientific reform. Our initial conceptualization of how we think the system works, and our solutions for how to make it work better, will be upended, revised, and improved by the reality confronted when those solutions are implemented. If we knew all the answers beforehand, we wouldn't need research. So with all of this complexity, a continuous critical analysis of reform efforts will maximize their chances of clarifying their goals and then achieving their vision.

So with that, let's turn to our outstanding panel of presenters. As quick housekeeping: I'll make very brief introductions for each presenter, each of them will speak for 15 minutes, and then they'll have five minutes for Q&A after their presentation. As you are listening, please feel free to use the Q&A box to enter questions and to upvote questions of others that you would like to see the speakers address.
Claire and I will curate those for the presenter during the Q&A period. Also feel free to pose broader questions relevant to the whole panel, because after each person has presented we'll have 20 or 30 minutes or so for an open panel discussion of cross-cutting themes across the presentations. In total we expect this webinar will run for about two and a half hours. But if you miss any parts, don't worry: this is being recorded, we'll make a link available, and there will be an OSF project with all of the presentation materials that we will circulate.

Okay, so that's it. Let's begin with our first speaker. Ivan Flis is a postdoctoral researcher in the Department of Psychology at the Catholic University of Croatia, and he will start us off. Thank you, Ivan.

Thank you very much for the invitation and for having me. Okay, let me share my presentation. Okay. So today I'm going to talk about my perspective on the reform, which comes, most broadly, from the history of psychology and the history of science. I participated in the reform debates a bit with the paper I published in 2019, "Psychologists Psychologizing Scientific Psychology." I've been working for a couple of years now on a continuation of that paper, on the function of the literature in psychological science, so this talk is a continuation of that thinking. Historians of psychology and historians of science are terribly slow to publish, so it takes years to write papers.

Okay, so what is the perspective of historians of psychology? A number of historians of psychology have been attracted to the conversations about the replication crisis and the reform movement. Here are three commentators who actually published on it; as I said, historians of psychology are slow to publish, so it takes us some time even when something has been developing for a decade. I identified three papers which really come from this perspective of historians of psychology: my paper that I already mentioned; a second paper from 2020 in History of Psychology; and a recent preprint by Maarten Derksen and a really brilliant young metascientist, Sarah Field. All of these, in one way or another, come from the perspective of talking about the scientific self. When historians of science talk about the scientific self, it's basically the idea that scientists develop a certain persona that's invested with values, ethical values but also epistemic values, in how they handle themselves, discuss with each other, and go about their communities; they enforce these values and train or educate, pedagogically, their disciplinary members into these forms of scientific selves. And I think all three of these papers, written from very different perspectives, engage with the question of what kind of scientific self is being developed within these new reform debates.

What I'm going to try to do in this talk is latch onto this idea of the scientific self and bridge it to the question of what the scientific literature is: what kind of scientific self should we have to serve particular functions or goals when it comes to adding to, expanding, and maintaining the scientific literature of a discipline? So my talk is going to be about polemics, identity, community, and literature, in a way. And I've laid the argument out up front, to keep it simple and so that all of us are on the same page.
This is the argument that I'm basically going to make in the talk. It starts from polemical disagreement; compare this to Derksen and Field's paper that I just mentioned, where they discuss it in terms of managed disagreement. Polemical disagreement about the overarching sources of problems in psychological science generates new identities for the researcher. This polemical conversation, happening under the umbrella term of a crisis, basically forms new ways of being a scientist; the new kind of researcher is basically an answer to the identified source of problems. In the case of the reform movement that started in the 2010s, that's the replicationist reformer, as I call it, or her.

The identity is performed through social media and attracts a community, so social media is really crucial as a social technology that allows this identity to be performed and presented, and allows other people to learn how to do it and start doing it themselves, to build a community around the identity. And what happens with this pair of identity and community is that it may have spillover offline; it's intended to have spillover offline, to actually affect the research institutions and systems it is trying to affect. The rupture presents a reimagined way of interacting with the psychological literature to a quasi-global audience of researchers in psychology. This quasi-global audience is really important because, considering the many communities of psychologists around the world, the literature is the only point of interaction for everybody. So the ways that we form new identities that allow us to interact with the literature in a particular way are the ways that we can all participate in this.

In the talk I'm going to call these overarching sources of problems myths: things that were accepted before but are identified as problematic now. The first myth is the myth of self-correction; I think most of us are going to be familiar with this idea because it's been articulated by the reform movement. The second myth I'm going to propose is the myth of self-organization. The first is coupled with the identity of the replicationist reformer, and the second with what I call the preregistration skeptic; you'll see what I mean by that later.

So the reform debates central to these identity-community ruptures, seen through the function of the psychological literature, go something like this. In the 2000s, we have what the reform literature usually calls status quo psychologists, who basically saw the scientific literature as self-correcting and self-organizing so long as the methods used to produce that literature are sound; soundness of method is itself a product of the history of the discipline within the 20th century, which I don't have time to go into. (A minimal simulation illustrating what's at stake in the self-correction claim follows below.)
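A minimal sketch of the self-correction question, with illustrative numbers of my own rather than anything from the talk: if a literature publishes only statistically significant results, the published record overstates effect sizes even when the true effect is exactly zero, which is why reformers argue the literature cannot be assumed to correct itself.

```python
import random, statistics

# Toy model: true effect is zero; each "study" estimates a sample mean,
# and only results significant at p < .05 (|z| > 1.96) get published.
random.seed(1)
N_PER_STUDY, N_STUDIES = 20, 2000
se = 1 / N_PER_STUDY ** 0.5   # standard error of the mean when sd = 1

published = []
for _ in range(N_STUDIES):
    estimate = statistics.mean(random.gauss(0, 1) for _ in range(N_PER_STUDY))
    if abs(estimate / se) > 1.96:        # the significance filter
        published.append(estimate)

# The published record is a biased sample: its effects sit far from zero
# even though every underlying true effect is exactly zero.
print(f"published {len(published)} of {N_STUDIES} studies")
print(f"mean published |effect|: {statistics.mean(abs(e) for e in published):.2f}")
```

Roughly 5% of the simulated studies pass the filter, and their average absolute effect is large, so a reader of only the published record would infer a real effect where there is none.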
What the replicationist reformers did in the 2010s and late 2000s is criticize the soundness of method, and also criticize the idea that the literature is going to be self-correcting regardless of the methods. The idea is that the scientific literature needs to be continuously corrected in order to self-organize, and the keyword here is robust methods: we need to apply robust methods, and we also need to police the literature in order for it to be corrected so it can self-organize. The last break, I think, happened within the last couple of years, with a pushback from within the reform movement itself, from people participating in the reform conversations: the preregistration skeptics, folks saying that maybe preregistration, which was central for the replicationist reformers, is not so successful in fixing the literature. The idea was that the literature needs to be organized around a body of theory, or some other kind of formalism, to benefit from what the replicationist reformers would call robust hypothesis testing; without this core of something that's already organized for the communities, all the improvements by the replicationist reformers are not going to succeed.

So what is the literature for the community of replicationist reformers? This is something that I read off their debates, and I'm very open to you criticizing it and saying, you know, you've got it completely wrong. The scientific literature should be a collection of true effects, in a way; this really mimics the dominance of the quantitative perspective. These true effects are presented and argued for in individual studies, the studies appearing in journals, and the effects must be robust according to the current reform consensus about robustness. The community defines this new robustness: replicable, reproducible, developed according to the principles of open science, of transparency in data, analysis, and publication. Additionally, when inferential statistics is used, it is argued that studies in this literature should conform to some type of hypothetico-deductivism, or the neo-Popperian falsificationisms that seem to be really prominent and have traction in the reform community.

This reconstruction of mine, of what the literature is in the view of the replicationist reformer, basically comes from the debates about the meaning of the first large-scale replications. It comes from the wider open science movement and its practices for increasing transparency in psychology, and it's important here to mention that open science and replicationist reform are not one and the same: they have their own social dynamics. They're meshed together in psychology, but the open science movement is something with a different history; it's connected, but it's a different social system, so to say. What also informs this reconstruction is metascience, that basically empirical branch of the reform movement that's trying to produce evidence-based interventions to fix the literature.

What are the implications of dismantling the myth of literature self-correction for the replicationist reformers? The reformers attempt to salvage the idea of the literature as a proxy for the substantive structure of science.
So they're really continuing the status quo psychologists' idea that there is a structure within the literature, that adding to the literature adds to this structure, and that this structure really reflects what psychological knowledge is. In this view, the literature is psychological knowledge, so maintaining a good, informative, or true structure of the literature is maintaining psychological knowledge. And by way of that, the literature requires constant policing and pruning by enforcers of robustness. Those enforcers can be individuals, but we can also come up with ways to develop new institutions, or prop up institutions that already exist, to enforce this idea of robustness.

So this is what happened, I think, during the 2010s. And in the late 2010s, in the past few years, I think the conversation experienced a really strong rupture. I've identified what I think is the central paper for this rupture; at the end I'm going to have a whole list of papers that you can connect to the perspectives I'm arguing about. These conversations about preregistration generated, I would argue, a new identity-community rupture from the reform movement that emerged in the late 2000s.

And what is this rupture? What is the literature for the preregistration skeptic? Just to sum it up, since I don't have much time: preregistration skeptics are basically folks, scientists, who are saying that preregistration doesn't actually solve the problem of cumulativity, or of producing sound theory in psychology, and that the focus on replication is not the way to go to fix the problems in psychological science. So they're moving from attacking the myth of self-correction to attacking what I call the myth of self-organization of the literature. In this view, the scientific literature is a collection of scientific outputs; these outputs can be varied (empirical, theoretical, commentary, opinion) and appear in journals. They can vary in quality and in the epistemic norms according to which they were produced. So the literature is actually a place where a lot of epistemic regimes in different communities produce and add things; it's not dominated by one way of doing things or one set of norms, basically. Why is this possible? Because the literature just represents fuel for substantive conversation among scientists. It's not supposed to mimic the structure of science, and it's not supposed to be a kind of homologue of psychological knowledge. It's supposed to be fuel for something that scientists are doing somewhere other than the literature. And what's that? In this conversation, for the most part, formalized theories: formalized theories are supposed to feed on the literature and feed into the literature, but are not represented by it. (A toy example of what a formalized theory can look like follows below.) This is my reconstruction of the view that comes from the positions articulated in response to the institution of preregistration, these conversations that have been happening in the past few years. As I said, I cite the papers that I think are relevant at the end of the talk.
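Here is a minimal sketch of what a formalized theory can look like, assuming a classic exponential forgetting curve with made-up parameter values; it is an illustration of the genre, not an example taken from the talk.

```python
import math

# Toy formalized theory (illustrative parameters, not from the talk):
# exponential forgetting, recall(t) = a * exp(-b * t). The parameters
# a (initial retention) and b (decay rate) are where the formalism
# makes contact with data from individual studies.
def predicted_recall(t_hours, a=0.9, b=0.15):
    return a * math.exp(-b * t_hours)

# A formalized theory yields precise, testable point predictions that
# replications can check quantitatively, rather than the verbal claim
# that "memory fades over time."
for t in (0, 1, 24, 72):
    print(f"predicted recall after {t:>2}h: {predicted_recall(t):.3f}")
```

The point of the formalism is that individual studies can then estimate or falsify specific parameter values, so the cumulative object is the theory, not the pile of publications.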
It was, for the most part, cognitive scientists, cognitive modelers, and mathematical psychologists participating on what some would say are the fringes of the reform, though I think this fringe-versus-center framing is really problematic to argue for. But it's cognitive scientists, cognitive modelers, and mathematical psychologists, for the most part, participating in the reform conversations and realizing that the epistemic norms of their communities do not mesh so well with what's dominant in the reform. This resulted in calls for theory formalization in psychology that are now already broader than just cognitive scientists, cognitive modelers, and mathematical psychologists; there are multiple calls for theory formalization within this conversation.

What are the implications of dismantling the myth of literature self-organization? As I said, the literature in this view is not a proxy for the structure of science; the literature is not psychological knowledge. Psychological knowledge is some body of collectively developed formalized theory, be it quantitative or non-quantitative. It's something cumulative that is separate from the literature itself; it can be found in the literature, but it's not the structure of the literature. We need to develop new practices for systematizing knowledge which are not just literature reforms. And I think this is truly revolutionary in that sense, and that's why it doesn't mesh with the reform view of fixing up the institutions that are already there, because it means that many communities need to fundamentally change the practices of their research, and the goals they have in doing research, to add something completely new and completely different, to make the science cumulative through formalized theory.

By way of conclusion, I have some sociological aspects that can be drawn from this analysis, and I'm going to finish with two questions instead of a conclusion; of course, historians never conclude anything, they just ask questions. Okay, so, sociological aspects of my analysis. The first is that social media are a crucial social technology being used in these conversations, and they have epistemic value; it's not only an informal thing happening on the edges. What's happening primarily on Twitter, or on Facebook and other places, in these groups of people, is actually crucial to this forming of identity and community, and to how this translates into practice. So disentangling social media from this conversation is not going to be easy. As a commentator, I, and I think all STS scholars, historians, and philosophers who work on this, really struggle with how to label things here, because we know that labels have huge power in organizing groups of people and their thinking. We struggle with: do I name people radicals, off-stream, preregistration skeptics? By these labels I don't mean any formal card-carrying members of these groups; they're just useful fictions that I use for doing the analysis. The interesting thing about this second rupture is also that it revealed an interesting gender dynamic and a diversity dynamic, because the second rupture of the preregistration skeptics included the idea that there's a problem with diversity within the reform movement, and not only diversity in the political sense but also epistemic diversity.
Okay, I don't have much time for the other ones, so I'm not going to go into the last three, because I'm going to run over time. The two questions I would like to finish with are these; okay, my time is almost done. The first is a question for the theory formalization side: what is the capacity of psychological science at large to employ theory formalization? I think that's a really big question that needs to be hashed out, because the history of psychology in the 20th century wasn't that people didn't think they needed formalized theory, but that they really deeply struggled with how to do it in their subject matter specialties. So the aspiration has existed, I would say, for a century, maybe a bit less, but it's really hard to do, so I'm really interested in what's going to come out of this conversation in terms of actual practical applications of theory formalization. My second question is: will mainstream reform stop centering replication? Is there really a rupture in the myths, such that the dominant myth is no longer self-correction but self-organization, with a move to another articulation of the goal of cumulativity? Or will we actually see separate communities of anti-status-quo psychologists who center their ideas on cumulativity instead of replication? Okay, I ran a bit over time. At the end of the talk you can find references related to the history of science that informed my perspective, and also references to the papers within the reform conversations that I used to build up this argument. Thank you very much.

Thank you for that excellent kickoff to this symposium. We do have a couple of questions that have already been posted; everyone else should feel free to continue to add questions in the Q&A, or in the chat if that's possible. I will pose the first one, from Ignacio Ziano in the chat. He asks: is it possible there is more nuance in the perceived relationship between knowledge and the literature? Perhaps some people believe there is not 100% overlap between the two, but that the literature represents some form of psychological knowledge, perhaps not all, for instance.

Yeah, I think there's quite a bit of nuance in that. There's actually a really long story about where these two myths of self-correction and self-organization come from within the history of psychology, especially in the late 20th century in the US; some epistemic norms and methods really informed the way that psychologists started behaving about their literature. There's quite a bit of nuance here that I glossed over, because I don't have much time to go into it. But I'm trying to identify really extreme ruptures that gather positions, right? I find it really difficult to be in this position of ascribing positions to individuals; that's why I didn't cite individual papers in my argument, because I don't want to be in the position where people say, "I never said that," right? So I'm trying to identify general trends, and we can argue about whether the general trend I identified glosses over nuance that is super important for the argument itself.

Right, thank you for that, Ivan. Another question comes from Matthias Lobo. He asks: Amazon invented self-publication of books. What is your view about researchers in the field creating self-publication journals, eventually with post-publication peer review?
This, in this case, would be to publish correct results instead of important results, pretty much like the publisher Frontiers has been doing for the last decade.

Here I would really recommend reading the more recent history of scientific publishing, because it's actually full of ideas, and it really opened my mind to how malleable through time the publishing system is. So actually, as a historian, I'm now always most excited by the really radical propositions coming from the reform: not the reformist ones of propping up journals and peer review and things like that, but actually completely rethinking the item of publication itself. So yeah, I think those initiatives are really exciting, and I think we should have more of them; I think there is space to be more radical in reimagining the literature and trying to translate that into mainstream research.

Great, thank you, Ivan. We are about at time, but I'll ask you one more to get your 20-second response, and the other questions that have come up we can revisit in the later discussion, so keep posting those, folks. From Bobbie Spellman: do you have any thoughts about age differences in people's views about what they believe should be done, and about what has been good or bad about these reforms?

I'm not sure I would ascribe it to age differences; I would ascribe it to investment in the institutions that are being brought down or criticized. So I think there is a real dynamic of an old guard who have a lot to lose versus people who are trying to make a name for themselves in a really new landscape. I think that generates a lot of antagonism, but I think that's also a dynamic process that's not unique to the reform movement; it's a dynamic you see in the social functioning of all sciences, since we generate knowledge in communities of researchers, right? So yeah, I can see it, but I wouldn't center it; I think the social media aspect is much more interesting than old versus young.

Thank you very much, Ivan, for the presentation and for those responses. Feel free to respond to some of the questions that have come up in the chat and the Q&A dynamically, and things may come up later in the session. For now we will move on to our next presenter, Sabina Leonelli. She is the co-director of the Exeter Centre for the Study of the Life Sciences and a professor of philosophy and history of science at the University of Exeter. Sabina, thanks for joining us.

That had to happen at least once today. So, as I was saying, I'm delighted to be part of this conversation; thank you very much for inviting me. The perspective I'm bringing here is that of a subfield of the history and philosophy of science called philosophy of science in practice, where there is a lot of work being done collaboratively with researchers in different domains to address what we are here calling scientific reform and the implications that these different types of changes may have on scientific practice. This is very often done from a qualitative perspective, doing very in-depth studies of what is happening in everyday practices, which range from publishing to lab work to the embodied daily activities involved in research. In that sense, I would agree with one of the people asking questions in the chat: it is true that these kinds of more qualitative approaches tend to be sidelined
when it comes to thinking about how to develop scientific reforms and also what role metascience has in this. So I'm really happy that we can have this wide-ranging discussion today and think about how to correct for this.

What I'm going to be talking about, very broadly, is the implementation of open science and some of the controversies and questions that are arising in relation to that. One of my starting points is to acknowledge that there has been a lot of work, a lot of scholarship, covering different types of trouble happening in science, particularly when one thinks about scientific research in a global, transnational context, which is highly diverse. There's been a literature documenting the extent of inequity between the research environments in which different types of researchers are working and how this affects their ability to publish and to produce research; very often, even researchers in highly excellent research environments are not as visible and as well noted as others already are by reputation, by location, and by resources. There's a big problem with incentive systems, which are very diverse; even within the European Union we have systems that are supposedly able to talk to each other, and yet a huge diversity which very often is not acknowledged or even understood by researchers themselves, and sometimes even by institutions. This can create problems, various forms of discrimination; of course there's been a lot of discussion around these, particularly in recent years. And to some extent there is a lack of accountability, a lack of public trust and of understanding what that may mean in relation to research, and a general tendency to have a short-term understanding of what benefits we achieve through the carrying out of research.

This has been related to several problems with the communication of science, partly to do with what has been dubbed a highly self-referential academic discourse: research outputs valued mostly in terms of where they are published rather than their quality or potential reproducibility; the dominance of publication in high-impact-factor journals; and the fact that a publishing industry was organized in a way that is widely parasitic on publicly funded research. There is also a very strong emphasis on the power centers in research, particularly the American world, at least so far; of course we're now seeing the rise of other powers in research. All of this creates all sorts of sources of discrimination and dominance in the making of research. And science, as has been widely documented within science studies, is highly dependent on reputation and reputation cycles; how reputation is grown is one of the most important credit systems in science, but this is highly rigged towards popular and well-funded institutions. This also generates the dominance of STEM subjects, which are typically more visible and typically better funded, for all sorts of reasons, over subjects belonging to more qualitative traditions in the humanities, the social sciences, and the arts.
Generally, there has been work on the current lack of incentives for the responsible sharing of research outputs and components; for the diversification of approaches that we might have in science and the importance of actually keeping and acknowledging this diversity; for the transdisciplinary and transnational nature of collaborations and community building that are so helpful to developing science; for public engagement and different forms of co-production; and for focusing on social challenges, and in fact, most importantly, having venues and opportunities to question what the social challenges that matter to science are, and to involve a wide variety of stakeholders who can help us engage with that questioning.

Open science has been widely seen as a potential solution to some of these issues, and I think focusing on open data for a moment will be helpful here. Keep in mind that data have acquired a new prominence as research outputs over the last 30 years or so and are now recognized as valuable in their own right, which is quite different from the idea that data are valuable only because they allow us to provide evidence for a particular claim. They've been made increasingly mobile, and they're increasingly being reused; this is really central to the value of data in the first place, the ability to move around and to shift context. In fact, that has meant that the relationship between data and articles, and also the credit assigned to the production of data versus the production of articles, has needed to be very widely redefined. This emphasis on opening up and mobilizing data has also meant a spotlight on the very significant resources required to responsibly share and reuse data, resources that are very often lacking, particularly in places where research environments are not as well funded and visible as others. And of course all of this has also brought an emphasis on how important it is to manage data responsibly and effectively; particularly in light of the pandemic response and the COVID situation, we've seen this very clearly.

This can lead to a global transformation of how we do research and how we think about research in relation to decision making, and ensuring equitable participation in the creation of knowledge is absolutely crucial to creating effective and reliable results that can provoke social change across different parts of society. This also means rethinking policy, funding, and evaluation practices.

So under which conditions can all of this work? This is where the emphasis on the implementation of open science comes in; Brian's introduction was perfect in that sense. The problem here is how to implement some of this, and there have been wide disagreements around how it could be done. What people tend to agree on is that the reform proposed by open science has a global scope, affecting all stages of the research process; a systemic reach, really requiring change in all parts of the scientific system; and a need for local implementation. Depending on the discipline, the type of researchers affected, their location, and the ways in which they're working, the implementation of open science will need to change and adapt.
And this is of course a key worry for researchers, particularly because the tendency is still, even now, to think about open science guidelines and policies as overarching and to some extent universal. This creates a huge amount of worry and resistance among research communities around what this will mean in practice.

This is something that I've investigated with my group for a number of years now, doing empirical research on the meanings and practices of openness across different research communities, particularly biology and medicine. To do this we used interviews and ethnographic fieldwork, listening to researchers talking about their perspectives on openness; this involved a variety of researchers across different kinds of seniority and different types of location. We asked about the perceived obstacles to openness, and particularly to open data; the existing problems in taking advantage of existing tools to implement and carry out open science, like data infrastructures, for instance; the huge amount of confusion that exists around the intellectual property regimes surrounding open science; legal and ethical concerns; and the very semantics of what we mean by openness. This has also been done in collaboration with a range of policy initiatives in this area. For instance, I participated in the Open Science Policy Platform organized by the European Commission between 2016 and 2020, which put together many of the relevant stakeholders, and in a mutual learning exercise carried out by the European Commission looking specifically at incentives for open science, which included consultations over the course of a year with representatives of scientific systems from very different areas of Europe, at various levels of resourcing.

So here is a list of the key challenges that we found to exist in this field and to require very urgent tackling. One, of course, is how one enhances and even rethinks the skills and the training needed to form researchers in light of this proposed reform. Another is distributing the costs and accountabilities for these systems: very often the weight of the reform is put on the shoulders of individual scientists, but in fact it turns out, and I think this is demonstrated by many different studies, that institutions and funders have a very strong responsibility here in terms of making it possible for people to implement these changes if they wish. There is a real question around how one adapts intellectual property regimes and confronts the semantic ambiguity that is intrinsic to open science. In fact, we noted that the variety of definitions people provide of openness, open science, and open data is a source of wonderful ideas for how these ideas can be implemented across different practices in different contexts, and trying to erase that diversity is really the last thing we want to do.

That brings me to the key point I want to make here: the importance of recognizing and promoting diversity in research in relation to the open science reforms being proposed, which would in fact mean attempting to counter the high-resource bias that we see at the moment in how open science is being implemented, and also integrating ethical and social concerns into how we carry out these implementation systems.
So I want to focus just briefly, to conclude, on the question of recognizing and promoting diversity in research. This, I think, starts from trying to identify, think about, and debate what constitute relevant forms of local variability across different research environments that may result in epistemic diversity: diversity that is actually relevant and constitutive to the making of knowledge and to the production of science.

This starts from differences in research assessment and credit systems. These, we found, have by far the strongest effect on how researchers end up implementing open science, especially in situations where many researchers tend to be extremely sympathetic to the idea of open science but then find it very, very difficult to implement in their own research, given the kinds of credit structures and promotion structures their work is subject to. Another source of local variability which is extremely important is geopolitical location: what kind of political regimes are researchers working under, what are the constraints relating to that, what are the potential conflicts between the institutions researchers are working for and with, both at the national and international level, and what are their responsibilities vis-a-vis their national context and vis-a-vis their collaborators? Then there are the values and the goals that different kinds of researchers are trying to implement in their work; there is a huge body of literature in philosophy of science that addresses this and relates the values implemented in science to the content of the science itself.

There are also the capacities that research environments actually provide for researchers to implement these ideals: what kind of infrastructures are available, and how are these actually accessed? Very often there is an emphasis on open science providing wonderful infrastructures, which are indeed present and accessible through the internet, but many researchers in different parts of the world in fact have huge difficulties in having the kind of stable broadband connection and access to software that allows them to make the best of these resources. Also, very often these resources are developed largely by researchers based in the global north, with less consultation happening with researchers based in less visible research environments, which again creates a cycle of researchers being discriminated against by these systems. There are big problems around mobility and assumptions about mobility; I think the pandemic has brought this point home in a huge way, because of course now many of us are stranded, whether or not we are based in institutions that will fund our travel and the movement of materials we may use for our research. And of course there is the question of what kind of institutional support is expected, what balance between teaching and research characterizes expectations around researchers' work in different institutions, and so on and so forth.

Obviously, questions around funding here are absolutely essential, and here we're thinking about long-term funding, not just shorter term. It is also very important to note that there are many disciplines where external funding is not the key mode of supporting research, and this of course affects particularly the social sciences and the humanities.
So reforms that are put forward by powerful funders, such as, for instance, Plan S, actually do not end up affecting or supporting research fields which are less dependent on that source of funding. The field of study, in that sense, obviously matters enormously. The methods which are used also matter enormously; how we interpret reproducibility is itself hugely variable, and again there's been a lot of work in philosophy of science looking at this notion, and I'm sure we're going to come back to it during the seminar. Which materials researchers do or do not have access to makes a big difference here. What the target objects of research are again makes a difference, especially when the target objects are highly commercially valuable and therefore attract different kinds of sponsors and different kinds of constraints than target objects that don't have the same commercial potential. And obviously there are huge implications in terms of career stage and power dynamics within different fields, different types of seniority in different locations. That brings me, of course, to one obviously recognized source of diversity, which is the characteristics of the actual researchers: their gender, their class, their ethnicity, and how these influence their ability to pick up on open science.

This was of course a very broad overview, partly because the time is very limited, and I'm very happy to come back to any of these points in more detail in discussion. Generally, I want to conceptualize the idea of open science itself not so much as a system of policies and guidelines, and even values, that need to be implemented somehow, but as a platform to instigate a critical, informed, and inclusive debate around these proposed ideals and reforms, so that we can create something which is as diverse as possible and takes account of implementation; in fact, putting the question of implementation at the center of how we conceptualize open science in the first place. This is very much how many people in my field and adjacent fields are trying to think about these questions. I'm going to leave it at that so that we don't run out of time. Thank you very much for your attention.

Thank you very much, Sabina, for that overview. We do have time for a question; if others want to interject one, please do, and I will prompt one while they are furiously typing. You gave that overview of global scope, systemic reach, local implementation, and I'm wondering to what extent you perceive some of the worries that researchers articulated as being partly a function of stakeholders presenting this as a global transformation, where data sharing has to be implemented all or none, versus what sounded like what you were saying in your subsequent comments: a real incremental approach, where there are lots of different issues, the way those issues play out will vary by person and by local context, and any effective implementation would want to embrace that. I just wonder if you could comment on that.

I think actually part of the problem may well be the very idea that this is a reform: the very idea that this is something innovative being launched from up high as a top-down shift, and that what we need to do is bring it down to scientific practices. What we're seeing from the history of science is that this is absolutely not the case. This has appeared in many different guises and forms across different domains for centuries.
In fact, the closure of science, the kind of trouble I mentioned very briefly at the beginning of my discussion, is very often a relatively recent development of the last 50 or 60 years, of a scientific system that has become professionalized in a very particular way, particularly in the Western world. This is very important as a starting point, I think, because it counters the idea that what we have here is a set of ideas that some, you know, enlightened people in research and some politicians have decided to embrace for all sorts of different reasons, and that these have to be brought little by little to the daily practices of researchers. I think what we have is a much more complex landscape, where many researchers have been trying to implement some of these ideas in their own way, and have been giving meaning to these ideas as significant to their own practices and their own methods within their domains, but have found that there are many constraints on how one could imagine and implement some of these ideas.

The problem at this point is that many of the policies being implemented in open science, however incredibly good the intentions behind many of them, I would say, are now appearing as top-down impositions. And this is partly because they're trying to push the system as fast as possible towards a change in direction. Now, this is all understandable and potentially even positive. But what is generating this backlash is thinking, first of all: oh, is this actually going to be some sort of top-down universalistic approach? And also: to what extent do these, what now look like very top-down rules, take account of all the ongoing efforts in all these different domains to counter some of the worries that these reforms are in fact trying to tackle? In that sense, I think the incrementality of debates and consultations around this is absolutely essential. The role of scholarly societies is really, really essential. And trying to involve early-career researchers is essential, because otherwise we're going to end up in a situation where open science becomes yet another set of bureaucratic infrastructures that researchers have to deal with, really.

Excellent, thank you. We are short on time, but I'll ask you one more, just because so many have come in, and then we can return to others later. So, your 30-second response to Elizabeth Julie Vargo, who asks: institutions are by definition characterized by rigidity and an aversion to change; how can we envision the establishment of a rapport between institutions and open science, which can be seen as significantly more fluid and diverse?

Huge question. I think that's a key question. One of the things I keep seeing to some extent lacking, both from theoretical and practical discussions of open science, is the role of research-performing institutions. It's quite interesting to me that there's a lot of emphasis on publishing and the publishing domain, the role of societies, and government policies, but in fact, potentially both the biggest harbingers of change and also the biggest sources of institutional resistance to change are universities and other research-performing institutions. Now, how does one bridge that? Of course there are organizations that bring together groups of universities and are trying to do a lot of work in this direction.
This is where lobbying and insistence and work with management, by the many people who work for research-performing institutions, particularly people who are at a senior level and able to effect a little bit of change in the ways institutions engage and in what they pay attention to, is really, really important. And the other thing, you know, it's very difficult to give a broad answer to this question, but the other thing is to keep noting that what we're talking about here are not some sort of high-up ideals, but ways to do reliable, effective, and socially relevant science, and also ways of coping with the huge digital transformation that is in the background of some of the open science reforms; in fact, some people would almost identify open science with the idea that we're trying to respond to the shift in practices that comes from digitalization, rather than to the trouble science is experiencing. So I think trying to always make institutions aware that these are intertwined issues that cannot really be separated is potentially one way to effect a little bit of change and convince a few people that this is an important thing to think about.

Excellent. Thank you, Sabina, for that wonderful presentation and brief Q&A. We will move on to our next presentation, and after it and its short Q&A we'll have a five-minute break so that everybody can reset and not look at their screen for a few minutes. Next is David Peterson. David is a postdoctoral fellow in the Department of Sociology at UCLA. David, thank you for joining us.

Let me see if I can get this set up. Thanks so much, everyone, for showing up today. Thank you to Brian for setting this up and to Claire for making this all work. The talk I'm giving today is called "Metascience as a Scientific Social Movement," and the impetus for it came a couple of years ago when I was presenting some early work from a postdoc at the 4S conference in New Orleans, which is the Society for Social Studies of Science. A couple of weeks before that conference, somebody sent me an email with a link to the Metascience 2019 symposium. One of the things that struck me right away was that there was virtually no communication between these two fields, and it was clear that what was happening at Stanford was much more than simply an academic field; it really sought to transform science, I think, in a basic way.

We lost him, didn't we? Well, maybe we'll take our five-minute break now, if he doesn't reappear instantly. Or we can pretend that we know what he was going to say and present it for him here, which I don't think is likely. So take a moment, take a break; we will recover that time and skip having a break afterwards. If you want to walk away, I will start shouting when he's back so that you can come back to your computer. Claire, you can even put up the break sign, and we will work it out with David. Hi, David, you're back. Great. So we improvised giving everybody a break, and we'll just restart you again in, like, three minutes. Yeah, yeah, my computer started updating and things were shutting down. Hopefully it'll behave; such things happen. So I'm going to start hollering in about three minutes, and we'll start fresh.

Okay, we will begin again. If you haven't read the book Recursion yet, it's a new fiction book; I highly recommend it. And you will experience something just like this, where you think something happened, it's just like it already happened that way, but it didn't actually happen.
So now we are going to have David Peterson give his presentation. He's a postdoctoral fellow in the Department of Sociology at UCLA. David.

All right, sorry about that, and hopefully it works this time. So, as I mentioned, these two conferences were going on at the same time, and it was clear that something very different was happening at the Metascience symposium: a real scientific movement that was designed to change science at a fundamental level, I think. And I think that in order to understand what was happening at Metascience, and what has been happening in the last couple of years, you can't avoid this discussion of the replication crisis. I just found out, actually, that the first time "replication crisis" was used in the published literature was a 2014 piece in Nature by Jonathan Schooler entitled "Metascience could rescue the replication crisis." So I think metascience and the replication crisis were really born together, in a very tied-together way. And specifically important is the particular way that metascientists landed on a diagnosis of the replication crisis: they had this psychological, sociological, and philosophical view of what was happening.

Psychologically, there was this view of scientists as essentially biased characters, right; this is a behavioral economics perspective on the scientist: we're all invariably biased, the scientist is no exception, and if you let that go, it's going to run amok in the system. Sociologically, there's this view that, rather than a kind of Mertonian system where scientists primarily respond to norms, scientists respond to incentives; scientists are incentive followers, and again, if you have a poor system of incentives, it is going to show up as problems downstream in the system. Lastly, you have this philosophical view, which looks at truth not as a quality of an individual study or an individual finding or paper, but as something that emerges in a statistical sense through the collection of ever-growing sets of data.

And this diagnosis leads directly to a set of ideas for the cure. If scientists are biased, we need to incentivize greater transparency and accountability in order to restrain those biases. If scientists respond to incentives, we need to incentivize healthy practices: we need to incentivize replications, we need to incentivize posting data. And if the truth emerges not in individual papers but in these aggregates, we need to develop the infrastructure to enable data hosting and aggregation; we need to build the infrastructure of the future. (A small sketch of that aggregate view of truth follows after this passage.)

But as a sociologist, it's important to note that we don't view crises as objective phenomena, right; there are lots of problems out there, but only some of them jump the moat from problem to crisis. And this is the result of a political and cultural project taken on by what Howard Becker called moral entrepreneurs, who take problems and gather other people's attention to them, right: the media, important stakeholders. They make problems seem acute and urgent in order to try to solve them.
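That "truth as a statistical aggregate" idea can be made concrete with a minimal fixed-effect meta-analysis sketch; the effect sizes and standard errors below are invented for illustration, and the inverse-variance pooling is just the textbook fixed-effect estimator, not a method attributed to any speaker here.

```python
# Toy fixed-effect meta-analysis: the "truth" is treated as the
# inverse-variance weighted average of many individual study estimates.
# Effect sizes and standard errors are invented for illustration.
effects = [0.42, 0.10, 0.35, -0.05, 0.22]
std_errs = [0.20, 0.15, 0.25, 0.18, 0.12]

weights = [1 / se ** 2 for se in std_errs]   # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5        # SE of the pooled estimate

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

On this view, no single study settles anything; the estimate sharpens as more studies (and their data) are aggregated, which is exactly why data hosting and aggregation infrastructure matters to the diagnosis.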
And this, I think, is really what metascientists did, which is why we ended up adopting this framework of the scientific/intellectual social movement. This is a framework adopted from a couple of sociologists who were looking at how social movements occur within the context of scientific and academic infrastructure. The pieces of the SIM, to them, were situations where you had high-status actors who diagnose problems, frame solutions, foster a common identity, and attract resources that allow the group to perpetuate itself. An example that we use in the paper is cognitive science. In cognitive science, you had the scenario where you had this dominant behaviorist paradigm, and then you had a bunch of well-placed academics starting to diagnose problems, starting to frame solutions in a kind of cognitivism, and then attracting resources, actually from some of the same funders who would later fund metascience.

So I think all of these things map really neatly onto metascience, and metascience is very clearly a SIM in this sense, but it is also an unusual SIM in some interesting ways. One is that where something like cognitive science is interdisciplinary, in that it represents the intersection between, say, linguistics and psychology, metascience is truly transdisciplinary: ultimately, metascientists are not concerned with any specific content of science; they're not concerned with a particular fact or theory. They are really focused on the form of science: how science is produced, how it is stored, how it is shared, how it's communicated, how it's evaluated. Ultimately, this puts it in a position of judgment relative to the other sciences, because the workaday practices happening in every other science are the object of study for the metascientist. This in turn gives metascience an interesting ability to straddle the academic-policy border. There's a lot of terrific, very interesting academic work, but at the same time, at the Metascience symposium there were a lot of funders who weren't interested in this as an academic field; their ultimate goal was, how do we utilize this knowledge and filter it back into our decision-making apparatus? And so I think this straddling represents both some promise, since it's good to have this back and forth, but also some potential issues, which I'm sure we'll talk about later.

So although the emergence of this social movement is relatively new, we can trace it back about 10 years, it is the outcome of the intermingling of some long-existing trends, and I think understanding these roots explains how, despite the ambitions of metascience, it has so far emerged from a fairly narrow spectrum of academia. Specifically, when you look at the birth of metascience and at its constitution, I think you can find three sort of independent groups: open science activists, scientists of science, and methodological and statistical critics. Each of these, I think, found new engagement and new energy both in the replication crisis and in their mutual engagement with each other. As far as the open science activists go:
Before there was open science activism, there was open access, which was a movement of librarians seeking to democratize knowledge. But what the replication crisis did, and especially the idea of open data as a way to help legitimize science, to help protect against frauds and cheats and bad science, and simply to enable data reuse, is that it enabled them to expand the definition of openness and start to colonize some new roles. As one interviewee, a Cornell librarian and former head of arXiv, told me: "I think librarians are feeling like their role in discovery is really weakening; they're trying to retrain their workforce to assume new roles." And I think the open data infrastructures really provided a way for them to bring their existing skills in archiving into this new movement. Then we have the science of science. This is an old field that goes back to the early work of Derek de Solla Price, where you do quantitative studies of science, but in the last decade a couple of changes have really transformed it. One is that you have this mass migration of everything online, and so suddenly the ability to do science of science explodes, and you have this possibility of utilizing big data. You have this influx of physicists and computational biologists, people with these particular quantitative skills, into the area. And perhaps reflective of this influx, you have this new willingness not to treat the science of science as an observational science, but to increasingly experiment and intervene on science itself. As Fortunato and colleagues write: "Engaging in a tighter partnership with experimentalists, the science of science will be able to better identify associations discovered from models and large-scale data that have causal force, to enrich their policy relevance." So there we go: a tighter partnership with experimentalists, for the direct purpose of having that information go directly back to policy. And lastly, you have the tradition of methodological and statistical criticism, which goes all the way back to when people started experimenting and using statistics. For a lot of the present discussions, you can go back to the work of someone like Paul Meehl and see a lot of very prescient discussions from that work in the 1960s and 70s. It was a kind of staid, almost musty field. As Fiona Fidler, one of the presenters at the metascience symposium, said about her dissertation, in which she compared methodological and statistical debates in a couple of fields: "When I finished, I thought, this is going nowhere. I've just documented five, six decades of this going precisely nowhere; there's nothing left from here. Turns out I was wrong about that. It was going somewhere, and now we're all here in the middle of some kind of revolution." And so the replication crisis, I think, really gave new fire, new motivation, to these old arguments, and together really enabled the groups to organize and intersect in ways that were mutually beneficial.
So, for something like open science: people in open science are still interested in issues like lowering costs, which may not be of interest to the science of science, but both groups are interested in data reuse, and these areas of overlap (data reuse, research integrity, and diagnostic analysis) really represent the core hub which has managed to bring these diverse worlds together and weave them into a force. Now, there have been a lot of debates about individual findings, and a lot of self-flagellation and periods of anxiety in different fields. A lot of times they rise, they generate a lot of heat and a lot of articles, and then they disappear. Metascience, I think, is doing something really different, which makes it important for the future of science and for people who are interested in the future of science policy, because they have been much more aggressive, I would say, on the institutionalization side: putting together institutions and programs that are going to outlast these particular debates. To some degree this has to do with the role of funders. One interviewee, the president of the Fetzer Franklin Fund, which helped fund the metascience symposium, said: "The idea of metascience was powerful enough, and its time had come, to where private foundations stepped up and provided the seed capital. So you had foundations probably putting up a collective $20 to $30 million, and then all of a sudden this thing got a huge amount of visibility because the universities became involved." And so you have the situation where Arnold Ventures, the Sloan Foundation, the Moore Foundation, the Wellcome Trust, all these people, are putting money in. So it coincides with this very interesting shift from a rhetoric of crisis to one of efficiency. As I mentioned, Jonathan Schooler wrote "Metascience could rescue the replication crisis." When I interviewed him later, I asked him: is there a replication crisis in science, or in psychology? And he said, quote, "The word crisis is probably overblown." And I think in general you see this a lot, in both the interviews and the actual metascientific literature: although it's very clear that the raison d'etre of metascience was the replication crisis, you don't see a lot of discussions of crisis in the literature; instead, you see many more discussions of efficiency. I'm not going to read all these quotes, but these are a bunch of passages from pretty well-known and well-cited metascience articles, and you see just a lot of discussion about threats to efficiency, reducing efficiency, more efficiency. And I think this is part of a kind of savvy move. As Becker notes, moral entrepreneurs often find themselves out of a job, because once new policies and new laws are passed, the purpose for those moral entrepreneurs goes away. Crises fade, but the idea of making science more efficient is forever. And so shifting into this new role has been a way for metascience to cement a longer-lasting influence in science. I've got about a minute and a half, so I just want to bring up a couple of points about why this matters.
So I think this matters for a couple of reasons. One: metascience has largely been framed as a movement for scientists, by scientists. And I think that's largely true; there are a lot of well-meaning people here who are trying to do good work. But I think it is naive not to appreciate the fact that a lot of the products of metascience, these kinds of methods to observe and intervene in science directly, represent something that the managers of science, people in the institutions and in the wider world, have long wanted: more insight and more control over what happens in science. This is one of the reasons why, as Sabina brought up in the last talk, there is a sense that this represents a kind of bureaucratic creep, a sense that scientists are having to do more and more things that maybe they don't understand or don't appreciate. In a separate project from this, we interviewed a bunch of working scientists, and a lot of people gave us different flavors of this: there was a sense of, we are getting more and more demands on our time in the name of these transparency and openness and making-science-more-legitimate initiatives, but they didn't necessarily see the value for themselves directly, and it felt more like impositions. And lastly, I think there are big questions as to how much metascience really implies a kind of singular, coherent science. Even in the conversation we've been having today, there's been a lot of focus on psychology, because I think this is where a lot of the information comes from, but there are real reasons to doubt, and I'm just going to plug another paper that Aaron Panofsky, who also wrote this paper with me, and I recently published. In that article we argue that different replication cultures in different sciences end up producing very different modes of self-correction, and I think that without understanding that, metascientists and other replication activists within science are at real risk of pushing policies that are ill-suited to specific experimental and epistemic environments. So with that, I'll go ahead and stop. Thank you very much, David, for that presentation; it generated a lot of comments in both the chat and Q&A, so I'll present a couple, and then there'll be time, hopefully at the end, for getting back to some of the others. Steve Goodman asks: one of the big problems of metascience is that while it purports to transcend science and scientific fields, in fact most metascientists are completely constrained by their home disciplines. Much of what has been going on in psychology and the social sciences is what clinical research went through from the 1980s to the present, as represented by the mass movement triggered by the Cochrane Collaboration and resulting in the largest demonstration platform yet created, supported by law: ClinicalTrials.gov. How can we analyze metascience without seeing that it has existed and evolved very differently, and at different times, in different fields? Yeah, I mean, I think that's the major question that I have been struggling with, and that this kind of critical analysis of metascience has been struggling with.
My dissertation research started in psychology labs. I was doing observations of practices in psychology labs when a lot of the replication debates emerged within those fields. And then I saw something really interesting, which was this transition: a lot of the people who were interested in these questions in psychology suddenly transitioned to this larger metascience, on the logic that since the conditions that created the problems in psychology are not unique to psychology, we can start to address science more broadly. And so there was this generalization that happened. In that generalization, I think there is a real danger, because, and again this goes back to Sabina's point, when you do go back and actually talk to individual scientists in different fields, you find out that even though the fields look similar in a superficial sense, in fact they're dealing with very different sets of challenges, and the same policies that may work very effectively in one field may not make sense at all in another. In the paper I just mentioned, which just got published, we show that there are very different cultures of self-correction, and there's this idea I've talked about before, this myth of self-correction. There may be some serious problems with the self-correction mechanisms in certain fields, but you don't know that until you actually go in and see how the self-correction mechanisms work in different environments, and until you do that, I think it's very risky to start assuming that what works or doesn't work in one field can be applied to others. Great, thank you. For time, I'll probably ask you one more, and maybe two if we can squeeze it in. There are a number of questions about efficiency; one here from Bart Penders: efficiency has no upper bound, and its pure pursuit brings along the risk of endless pressure, similar to "excellence." How to dodge that as a structural problem? Yeah, I think that's an interesting point, and this is one of the questions. If metascience wants to found itself on efficiency, if that's the rock it wants to found itself on, I think that is not a solid foundation. I don't think the idea of efficiency in science makes sense at a philosophical level, because efficiency essentially requires a scenario where you have a known end and you arrange the means to achieve that known end. The problem with basic science is that you don't know what that known end is, right? You go back to people like Popper and Polanyi, these classical philosophers of science, and it is a fundamental characteristic of basic science, for them, that it can't be directed; ultimately we don't know where it's going, which means that we can't efficiently organize the means to get there. And so the risk, the danger, I think, is that instead of using the actual ends of science, where you often don't know what progress is until many years later, you end up substituting countable things: you end up substituting citation counts or some other proxy.
And as soon as you do that, you introduce all of the problems of gamification. Then you have scientists, rather than actually trying to produce something of value, in a qualitative sense, for other people in their expert community, starting to react to this external system, which is going to determine their career trajectory. And so I think you already see this somewhat with things like performance metrics in science, and I think there's a danger that efficiency in science becomes a similar kind of cultural problem. Great. Thank you for that, David. In the interest of time we will move on to the next presenter. David and others, feel free to engage the other questions in the chats, and we can return to some of them in the closing discussion. Our fourth presenter is Berna Devezer. She is an associate professor of marketing at the College of Business and Economics at the University of Idaho. Thanks very much for joining us. So, other speakers have covered a lot of ground with regard to the historical, cultural, and social aspects of the reform movement. I'll specifically focus on the methodological practices, which I will refer to as meta-methodology. To quickly summarize my key critique, I'll quote a paragraph I just came across in a book I was reading and thought perfectly captures the gist of my position. It's from a well-known book called Probability Theory: The Logic of Science by E. T. Jaynes, and this quote jumped out at me right in the preface: "In the old works there was a strong tendency to argue on the level of philosophy or ideology. We can now hold ourselves somewhat aloof from this because, thanks to recent work, there is no longer any need to appeal to such arguments. We are now in possession of proven theorems and masses of worked-out numerical examples. One can argue with a philosophy; it is not so easy to argue with a computer printout, which says to us: independently of all your philosophy, here are the facts of actual performance." In the meta-methodological literature, we largely lack such proven theorems and worked-out numerical or computational examples, and we customarily rely on verbal argumentation and assertions instead of established statistical facts. Let me give you some examples of common methodological assertions often communicated in this literature. I just picked a few related to replicability, which I will use interchangeably with reproducibility of results. For example: the goal of science is to maximize replicability, so we should try to improve the replicability of results. Or: replication is the way to separate true findings from false positives, so if we suspect something is a false finding, we should simply try to replicate it to get conclusive answers. The third one: true results are, or should be, almost perfectly reproducible, barring sampling error. These assertions are sometimes implied and sometimes explicitly stated, and all seem harmless and plausible enough, but how do we know they're true? Do we have enough evidence to justify the codification of such assertions? The science reform literature is brimming with similar assertions on many methodological issues.
Certain problematic patterns are rife: vague, incomplete, or misleading assertions, at times serving as slogans, presented as unconditional bold facts about scientific practice; unestablished or untested methodological assertions rushed out as solutions to sometimes ill-defined problems; policy implementation preceding a true understanding of complex problems of a scientific nature; and hidden, unstated assumptions lurking in the background, obscuring when and why methodological assertions should hold. All of these demonstrate a lack of transparency, precision, and formal rigor. Assuming that transparency is a shared value held in the reform movement, we need to think carefully about methodological transparency as well. And that requires that we shift away from the current assertion-based discourse and move away from arguments from authority and arguments from popularity. Instead, we need to normalize and prioritize proper formal methodology. So here we offer a five-step process to satisfy minimum standards for statistical rigor and to facilitate nuanced, well-supported assertions. Step one is to conceptualize the methodological idea, and this is something that we already see a lot of. Formalism, though, starts at step one with a clear statement of mathematical assumptions and definitions of our variables, parameters, and constants. At step two, we formalize our conceptual idea, using the terms we defined and under the specific assumptions we made. At step three, we apply probability theory and derive formal results with proofs. At step four, we can demonstrate numerical or computational examples to solidify these facts and extend them to different situations. Only then, at step five, should we start thinking about issuing recommendations and policy. Oftentimes in the reform movement, what we see is a jump from step zero to step five. My collaborators and I have tried to address this gap by studying issues surrounding reproducibility formally, and we have aimed to achieve clarity in the foundational concepts for metascience from the get-go. For instance, to properly formalize reproducibility-related problems, we first need to clearly identify what the focal object of metascience is, precisely define what a replication study is, and systematically examine what factors contribute to failure to replicate. To this end, we introduced the notion of an idealized study as a useful tool. We basically break down an empirical study, experimental or otherwise, into its core components to systematically and formally examine their relationships. The focal object of current metascience discourse is an empirical study, which we idealize and denote by psi. It consists of a number of interacting components. M is the model under which we make inference. Theta represents some unknown component of the model about which we would like to make inference. X sub n represents data from a random sample of size n, generated under an assumed true model. Then there is the component representing our experimental and statistical methods, covering both design-related and analysis-related issues. K is the totality of the background knowledge we bring into the study. And finally, there's D, which represents decision rules we introduce externally, such as the significance level. This is not empty notation. Statistically speaking, we know how these elements behave, what role they play in generating empirical results, and what theoretical guarantees they issue. Now that we know exactly what we mean by an empirical study, we can also define a replication study.
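To make that decomposition concrete, here is a minimal data-structure sketch of the idealized study and its replication, in Python. The field names are hypothetical stand-ins for the components Berna lists; her actual formalism is mathematical, not code.

```python
from dataclasses import dataclass, replace as dc_replace
from typing import Any, Callable

@dataclass(frozen=True)
class IdealizedStudy:
    """Toy container for the components of the idealized study (psi)."""
    M: Any             # model under which we make inference
    theta: Any         # unknown component of M we want to learn about
    X_n: Any           # data: a random sample of size n under the true model
    methods: Callable  # experimental and statistical methods (design + analysis)
    K: frozenset       # totality of background knowledge brought to the study
    D: dict            # external decision rules, e.g. {"alpha": 0.05}

def replication_of(original: IdealizedStudy, new_sample: Any) -> IdealizedStudy:
    """A replication is identical to the original in all but two components:
    a fresh independent sample X_n, and K augmented with knowledge of the
    original study (represented here by a placeholder string)."""
    return dc_replace(
        original,
        X_n=new_sample,
        K=original.K | {"results of the original study"},
    )
```

Holding every other field fixed is the point of the construction: under this definition, only the sample and the background knowledge may differ between an original study and its exact replication.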
A replication study is a study that is exactly the same as psi in all but two components. It differs in X sub n, because we need a new, independent random sample, and K is different, because knowledge of the original study is added to the other background knowledge that we bring in. Armed with these concepts, we can then apply probability theory to derive theoretical results related to when we call a result successfully reproduced, what the reproducibility rate is in general, and how it is related to truth, and then use computer simulations to study how the different components of the study behave and interact with each other. So I'm showing some sample simulation results we obtained that bring some nuance to the assertions I listed earlier. The first one: should we aim to improve reproducibility? In a 2019 paper we created a simulated scientific universe in which agents run studies under different strategies, and we look at different outcomes of the scientific process. The colored dots represent different research strategies; the x-axis shows the speed of true discovery, and the y-axis shows the reproducibility rate experienced by the scientific community. The first thing we see here is the semi-haphazard spread of dots all around the map. The second is the clear effect of research strategy. We find that some research strategies, such as the ones represented here in blue, will make true discoveries very quickly but will be all over the place with regard to reproducibility rate. Others, such as the red research strategy, will reach maximum levels of reproducibility while not being able to guarantee that a true discovery will be made in a reasonable amount of time. At the minimum, these results suggest that prioritizing reproducibility-improving strategies may have undesirable consequences for scientific progress under certain assumptions. The second assertion: can we use replications to tell true results apart from false ones? For that I'll show you another simulation, from a recent paper that we published. Imagine we study a research question about the effect of a single predictor on a dependent variable in a simple linear regression framework. However, imagine that there's a problem with the instrument we use to measure the predictor, such that it adds noise. We don't know that, so we fit a standard simple linear regression model without accounting for measurement error. Basically, we're making inference under model misspecification. Every square on this map summarizes the results of one million regression models with model misspecification, based on independent samples. On the x-axis we have the ratio of measurement-error variability to sampling-error variability; the larger this ratio, the greater the relative effect of measurement error on our observed data. The y-axis shows the distance of our estimated effect from the true effect: the further we go down here, the closer we are to the origin, the closer we are to discovering a true effect. And the heat map shows the rate of reproducibility: the darker the color, the higher the rate of reproducibility for a given result. So the black ones are 100 percent reproducible results, and the white ones are zero reproducibility. You can see that for different weights of measurement error, as you go along the x-axis, we can achieve highly reproducible results at varying distances from the truth.
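The flavor of that simulation can be recreated in a few lines. Here is a toy sketch, not the paper's code: the effect size, sample size, error scales, and the normal-approximation significance test are all assumptions made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def significant_slope(beta, n, me_sd):
    """One study: true predictor z, observed x = z + measurement error.
    We fit OLS of y on x, ignoring the error (model misspecification),
    and test the slope at roughly alpha = 0.05 (normal approximation)."""
    z = rng.normal(size=n)                    # true predictor
    x = z + rng.normal(scale=me_sd, size=n)   # noisy instrument
    y = beta * z + rng.normal(size=n)         # outcome under the true model
    b = np.cov(x, y, bias=True)[0, 1] / x.var()        # OLS slope
    resid = y - (y.mean() + b * (x - x.mean()))
    se = np.sqrt(resid.var(ddof=2) / (n * x.var()))    # slope standard error
    return abs(b / se) > 1.96

def reproducibility_rate(beta=0.5, n=50, me_sd=0.0, reps=2000):
    """Fraction of independent exact replications declaring significance."""
    return np.mean([significant_slope(beta, n, me_sd) for _ in range(reps)])

for me_sd in (0.0, 1.0, 3.0):   # growing measurement-error variability
    print(me_sd, reproducibility_rate(me_sd=me_sd))
```

As the measurement error grows, the fitted slope attenuates toward zero, so the estimate drifts away from the true effect while the reproducibility rate shifts with it: reproducibility and closeness to truth come apart, which is the point of the heat map.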
We can be nowhere close to the truth, we can be up here at the top of this map, and still replicate our results perfectly. And finally, let me show you another result: are true results reproducible? In other words, is reproducibility primarily a function of the state of truth? Here's a new simulation example from a working paper. We have another linear regression example, where the true effect size is small. The x-axis shows the accumulation of replication studies through time, from the original study here to the 300th replication study at the end of the map, and the y-axis again shows the reproducibility rate. In this one, we're making inference under the true model, so there is no model misspecification. We have two different statistical methods, method one and method two, to analyze our data, and we also simulate different sample sizes, small and large. Each simulation ends up converging on a value represented by a star, and by theoretical proof we know that this represents the true rate of reproducibility for that particular result. The quasi-replications correspond to simulations in which the replication study is in some way different from the original study. From this plot, we can quickly observe a few things. First, not all true results are equally reproducible. For example, a true result from a study based on a small sample size, the blue and the orange ones here, can converge to a true reproducibility rate of 40 percent in this particular simulation. Second, even when the true reproducibility rate is high, looking at the purple and the green cases here, we might see a lot of variation in the observed reproducibility after only a small number of replications; between zero and 50 replications, you're going to see a lot of variation in the reproducibility rate that we observe. The reproducibility rate, then, is a function not only of truth but also of our design and analysis choices. All of the simulations I showed you, and there are a lot more in the papers, are backed up by, and exemplify, theoretical results under clearly stated assumptions. I showed that there's a lot of nuance to be added to methodological assertions; without this level of specificity, the common assertions we hear are at the minimum overgeneralizations. Worse yet, they can be misleading and might have major consequences down the line that may be undesirable for science. In general, in current meta-methodological narratives, we see certain patterns. For example, science reform has so far prioritized sample-size-related issues over sampling-related issues; worrying about QRPs over model misspecification; restricting the use of data sets to prevent double-dipping over scrutinizing data carefully before analysis and issues like selective inference; replication studies over theory development; and transparency in empirical claims over transparency in methodological assertions; and so on. All of these are arbitrary choices that potentially represent the dominant epistemologies and preferences of early leaders of the reform movement. There's no apparent logic to these priorities, no systematic approach underlying them, and no guarantee that they're the best way to address current problems. We think there's a better way, and that is to shift to a new logic for methodology.
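The early-variation point above can also be seen with a tiny sketch. Here, the true reproducibility rate of 0.4 is taken from the small-sample cases Berna mentions, and treating each exact replication as an independent coin flip with that success probability is my simplifying assumption, not her model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Suppose theory says a given true result has reproducibility rate 0.4.
# Observed reproducibility is a running mean over accumulating replications:
p_true = 0.4
outcomes = rng.random(300) < p_true                  # 300 exact replications
running = np.cumsum(outcomes) / np.arange(1, 301)    # observed rate over time
print(running[[9, 49, 299]])   # swings widely early, settles near 0.4 late
```

Even with nothing varying but the samples, the observed rate over the first few dozen replications is a noisy estimate of the star it eventually converges to.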
Similar to the sentiment expressed by the quote I gave you from Jaynes, we believe a transparent and sound meta-methodology needs, first, to deprioritize arbitrary goalposts; then to focus on establishing facts instead of settling for vague assertions that support clean narratives, because typically in statistics there are not going to be a lot of clean narratives, there's always some nuance to be added and some assumptions that need to be specified; and to aim for higher precision, nuance, and statistical rigor, as supported by probability theory. That's all from me, and if you're interested in reading more, these links will be in the slides that I'll share. Thank you. Excellent. Thank you very much, Berna. We do have a couple of minutes for questions; I see that there are one or two arriving. I'll ask a first one about the five-step approach that you described: where, or what, is the role of exploration in that, of testing ideas in the field to try to understand those models or assumptions and otherwise? Exploration, since we're talking about statistical claims here, is a little bit different from regular exploratory research, where we're talking about phenomena. Exploration may come up during the conceptualization of ideas, for sure. And there's going to be extensive exploration, I think, in step four, where you're actually looking for the boundaries and limitations of these effects, and exploring under which circumstances your methods work and under which conditions they don't, after you have built the foundation for the method. Great, thank you. Another question: John Sakaluk asks, I wonder if there are any themes that have become apparent to you in terms of more effective, however you want to define that, combinations of methods used plus type of study or research question. What are the typologies that we should be emphasizing or de-emphasizing? That's an interesting question. I don't necessarily find putting different types of research into well-specified boxes a useful exercise. I'm pretty much a pluralist myself in approach; I think we need all kinds of approaches, and sometimes trying to fit them into well-specified categories is kind of artificial and may obscure what we're actually doing. So I don't know; I don't have a good answer for that question. I think I need to think a little bit more. That's true of many questions; they are hard. So, one more, since we have time. Could you expand, this is from an anonymous attendee, on what quasi-replications would mean, given that exact replication is basically impossible? I'll let you respond; go ahead. Sure. In this case, let me just go back to the slide, because I kind of skipped through that. As I said, this is a working paper. But what we mean here is that the purple, green, blue, and orange cases all represent exact replications: everything that I defined in the idealized study, other than the background knowledge and the sample, is exactly the same.
Whereas in a quasi-replication, for instance, the original study could be using method one with one sample size, whereas the replication studies use different methods and different sample sizes; a quasi-replication is different in some way from the original. So the reproducibility rate that it converges to over time is going to be a kind of average over different types of methods, and it's not going to be specific to a single result. That's what I mean by that. Thank you very much. And you'll see many more questions that you might want to respond to in the Q&A box. So thank you, Berna. We will turn to our final presentation and then switch over to a group discussion. Our fifth presenter is Kyle Harp-Rushing. Kyle is an adjunct lecturer at Chapman University. Kyle, thanks for joining us. Thank you so much for having me. Can you all hear me okay? Okay, perfect; just let me know if not. I just wanted to thank Brian and Claire for inviting me and putting this together. It really is such a wonderful treat to be included in such a fantastic panel studying an increasingly relevant and crucial topic. So, in early 2016, I became interested in the topic of open science after coming across a series of tweets, a sort of tweet storm, about the reproducibility crisis that was brewing in the field of psychology. As a grad student in anthropology, my interests were largely in the ethnographies of science that cross-pollinated with the field of science and technology studies more broadly, you know, the science wars of the 1990s. And so this crisis that was kicked off by the RPP seemed very familiar; I think David was getting at this as well. So many of the heated debates among the scientists echoed those of a previous generation, for example about the generalizability of research findings and the significance of tacit knowledges that are specific from lab to lab. So there was an omission, as David mentioned, of scholarship from science and technology studies in general, but more specifically from subfields within STS that I think really lend themselves to the same types of questions, particularly feminist anthropological approaches to science that originated in Nancy Hartsock's and Sandra Harding's standpoint theory and that take seriously the subjectivity of the researcher. Jill Morawski, a historian of psychology, has noted that, with relatively few exceptions, psychology has tended not to integrate reflexivity in that way into its interpretation and analysis of results. And I forgot to set my timer, so, Claire, if you could just send me that nudge once I get within a couple of minutes of the end. As it turned out, I was also working as a graduate student researcher on a project that involved the very same organization that had coordinated the RPP. I got invited to visit COS and was enchanted by what immediately felt like a sense of what Durkheim called collective effervescence, this sort of electrified atmosphere that really seemed to infuse the space. It permeated this cool, decentralized, porous open office, but it also animated the extremely generous and welcoming employees who work there. And so I asked if I could be allowed to conduct anthropological research there, and I'll never forget Brian's response: that we wouldn't really be open if we said no, right?
From 2016 to 2018 I spent about three months at the Center, which is admittedly not a lot of time for a properly ethnographic research project. But untreated and, really, crushing social anxiety and depression were impinging on my ability to do that kind of long-term immersion, and I didn't really expect that. When I wasn't there, I tried to keep in communication with the friends that I had made there, and that friendship really influenced the direction I wanted to take when I was writing up my dissertation. I was influenced largely by the commitments to friendship and hospitality in general that I was really fortunate to experience. So I'm going to try to ask: how can we think of open science through the question of social reproduction, as well as reproducibility? That is, how do we create alternative knowledge ecologies and infrastructures that genuinely invite and welcome folks, and that organize and promote relationality and intellectual solidarity among folks, rather than rebranded, simplistic suites of heuristics for research quality or impact or productivity or things like that? I'm very much in debt to the Marxist feminist scholars who followed people like Nancy Hartsock and Sandra Harding, like Kylie Jarrett and Jodi Dean, who have been productively interrogating the ways in which emerging information and media ecologies enable corporations to exploit not only our labor but the very substance of our relationality as such, that is, our means of communication. For example, Jodi Dean describes this particular moment in the monstrous transmogrification of late capitalism as our collective passage into what she calls communicative capitalism, where, quote, values heralded as central to democracy materialize in networked communication technologies: ideals of access, inclusion, discussion, and participation are realized through expansions, intensifications, and interconnections of global telecommunications. Although Dean and Jarrett are really focusing on social media (Jarrett in her fantastic book The Digital Housewife), which of course is different in many ways from the open science information ecology, I want to propose that the problems and difficulties this opens up, the downstream consequences for open science, are in many ways similar. Despite the techno-utopian promise, advanced by some corporations, of social media facilitating something akin to class consciousness among an otherwise disparate group of workers in the knowledge class, Jodi Dean finds us trapped in what she calls a setting of communication without communicability, in which the content of our utterances is rendered unimportant. And so her goal is to understand how it is that such, quote, participatory infrastructures that are predicated on positive visions of the communicative commons instead come to reinforce the capitalist status quo by stifling solidarity and knowledge sharing. To organize the connection to open science here might feel like a bit of a stretch, and my thoughts are admittedly rather speculative, but keep in mind that we've actually already seen this in many ways.
We've seen many examples of what I think we can call open science's recuperation, its cynical capture and slick rebranding by the same extractionist, extremely parasitic journal publishers that many of us would maybe have hoped would crumble under open science. Philip Mirowski, in a 2018 article in Social Studies of Science, discusses the, quote, numerous hybrid formats that combine corporate control with really only a modicum of outsourcing or crowdsourcing within these open science public-private partnerships. And I'll couple that with the need to pivot, as many within and outside the self-described open science community have already pointed out with open access: charging $40 for journal articles is an extortion model whose viability is rapidly vanishing. I'll turn to the anthropologist Chris Kelty. When Elsevier bought the social science preprint hosting platform SSRN in 2016, he put it simply in a blog post entitled "It's the data, stupid": Elsevier's motivation wasn't merely to place thousands of preprints suddenly behind a paywall, but actually, and more insidiously, to monetize the enormous trove of data that sat behind those preprints, and to monetize the data about researchers. This reminds me of a quote from the late cultural theorist, and adjunct lecturer, Mark Fisher: that the limits of capitalism are not fixed by fiat, but are defined and redefined pragmatically and improvisationally. This makes capitalism very much like the Thing in John Carpenter's film of the same name: a monstrous, infinitely plastic entity, capable of metabolizing and absorbing anything with which it comes into contact. And so, as Marx and Engels said, all that is solid melts into air, all that is sacred is profaned. Returning to Jodi Dean's insight: the materialization of the ideals of democracy into networked communication technologies has coalesced into a milieu in which, quote, communication exists without communicability, one dominated by an informational ecology wherein utterances gain steam regardless of their value, and actually, increasingly often, in inverse relationship to their value. I think it's really timely to think about what critique, and particularly the critique of the critics, will look like. Consider the anthropologist of science Kim Fortun's animation of Jacques Derrida's notion of the paradox of hospitality, which she applies to data-sharing platforms and their proliferation of late. She notes that Derrida points out that at the heart of platform hospitality there is a tension between the necessity to exert mastery and control over a space, to maintain a database, to perform that vital repair, for example, and an openness or willingness to surrender that ownership, to allow participation inside the database to be as seamless and frictionless an experience as possible, to borrow from Paul Edwards. And this tension becomes increasingly fraught with contradictions as speed and bandwidth, in this sort of accelerationist approach to open science, become indiscriminate research targets, or simply as research productivity becomes measured in events and uploads, potentially of even pretty shoddy, and in some cases pretty troubling, content.
In the concluding chapter of my dissertation, for example, I tried to examine the downstream flows of preprints posted to PsyArXiv, the psychology preprint archive, by individual psychology researchers, one of which was downloaded over 1,000 times and claimed to offer support for, quote, meritocracy models, and against, quote, racial discrimination models, by comparing self-reported racial categories to, quote, cognitive ability, or intelligence quotient, whatever that means, and income. I found that the author of the preprint had generated a lot of interest on the white supremacist forum Stormfront, where the original poster of a popular thread favorably quoted and linked to an interview with the author, citing him as being, quote, genuinely knowledgeable and racially awakened. To be clear, I'm not saying that the issue of mis- or disinformation can be heaped on the doorstep of open science; I'm really not saying that at all. Disinformation in science has quite a long history. The highly controversial pseudoscientific publication Mankind Quarterly, for example, which sadly for me Google lists as an anthropology journal, has provided scientific racism with the symbolic accoutrements and imprimatur of legitimacy since the 1960s. Elsewhere, Naomi Oreskes and Erik Conway have brilliantly demonstrated how teams of scientific experts with otherwise impeccable credentials have been routinely assembled to manufacture public doubt about the causes of global warming and the smoking and cancer link. And I don't think the problem can be neatly summarized as the result of a lack of clarity surrounding one online preprint archive's submission and moderation policies. For example, when I click on the link to SHERPA at the bottom of PsyArXiv, I'm directed to a generic page, and a search for PsyArXiv turns up no results. Moderation policies are, of course, part of the issue. But it's not even clear how a moderation policy could, or even really should, be constructed that would have prevented the posting and dissemination of a preprint that was falsely interpreted to suggest that COVID-19 was deliberately manufactured in a lab. According to bioRxiv, this month alone it has been downloaded, astonishingly to me, over 2,000 times, despite receiving widespread criticism and ultimately being retracted shortly after being posted in January 2020. So, for those who feel that the scientific process is working just fine because people got together on Twitter and took these authors to task for several methodological issues, the issue is how much damage can still be done. I'm also not saying that the preprint well is poisoned with misinformation. For example, I wouldn't be surprised if we come to find that preprints, and open science infrastructures and ecologies in general, were really useful in speeding up the development of the COVID-19 vaccines that many of us have fortunately been extremely privileged to receive. I want to end by thinking back to the sorts of omissions I see in metascience, the kind of sidelining
of any mention of qualitative research about science and qualitative studies of science and technology, and the need to continue to think about the substance of the relationships that create and disseminate knowledge, and how that also extends to other infrastructures like social media. And with that, I'll go ahead and stop; it looks like I'm out of time. So thank you. Perfectly timed, Kyle, thank you very much. So, a couple of questions, and then we'll transition to group discussion. The first that I'll take is about the "communication without communicability" phrase. I take that part of your comments as potentially deflating of the ability to translate ideals effectively into practice at all. Given the analysis you provided, are there productive paths, strategies better than others, for manifesting those ideals? Yeah, absolutely. My attempt to animate that concept is to think with these forces of memeification or contentification, where we become so deeply enmeshed and immersed and interpellated into participatory networks of communication that the content that circulates within those networks is reduced in value: narrative is lost, and we lose a sense of complexity. And so I think this is really a larger commentary about the advocacy we've seen for organizing on social media, and about why we continue to see harassment emerging to such an extent on social media. For me, it's not productive to talk about open science separate and apart from social media, because open science has really embraced social media in a lot of ways. And so, in that embrace, your question is: how do we do better, right? That's such a crucial question, and one of the things that was always deeply inspiring, and that really stood out from my time at COS, is that that's a question you all are continually grappling with. There are scholars here: Charlie G at MIT, for example, has a book on what a feminist internet might look like. Trying to create spaces of accountability in ways like that could really help, but also not relying on technological fixes to achieve it on their own. There was a comment I saw over the break that was really fantastic, about switching from talking about incentives to talking about justice. One of the imaginaries that always comes up a lot in open science discourses is that we no longer have to worry about data getting scooped and published out from under us, because it's time-stamped, right? But that makes me think about the contributions of Rosalind Franklin, for example. I don't think her work was overlooked because she couldn't prove that she took the photographs that proved the helical structure of DNA. Her work was overlooked because, even if she could prove it, what would have happened? She probably would have been labeled aggressive, labeled bitchy, all these other things, right?
And so it's this larger system or structure that the commenter, I don't recall her name now, really points out in differentiating between incentives and justice. Great, thank you. One more question for you, and then we'll go to the whole panel. This is from Bart Penders: it might be old-fashioned, but could a dramaturgical perspective help highlight the seemingly paradoxical ingredients of open science movements, in which there's a backstage to the politics of erasing the backstage? Yeah, I mean, that's absolutely, I hadn't thought of it, but I think that's a really great question. It reminds me of work by Charis Thompson on good science and the choreographies of good science. She wrote this book, Good Science, trying to interpret how it is that good scientific practices, what we've come to regard as scientific practices, are enacted through the body, become embodied. In another research project that I worked on, one of the tensions, one of the barriers I see to graduate students and PIs in particular adopting infrastructures for open science, is primarily that they see it as in conflict with their subject formation, their sense of themselves as research subjects, because they feel such an immense amount of pressure and stress to constantly be carrying out all these expectations of performing good science. Mike Fortun at UCI talks about these four qualities of carrying out good science: housekeeping, friendliness, and so on. So thinking about it from a dramaturgical perspective would be really deeply fascinating. I don't know of anybody doing that work, but that sounds really interesting. Great, thank you, Kyle, and thank you all for your individual presentations. What we'll do now is encourage an open discussion among the panel members. We have some seeding questions from members of the observing community, but we encourage this group to talk with each other, ask each other questions, and follow up. I'll just seed with one question, and then we'll see how they interact with each other. This comes from Andrew Smart and references back to David's presentation, but it is relevant for all of you, I think: why is efficiency a good scientific value? If anything, science only progresses through inefficiency. We learn the most from failures, refutations, and mistakes; efficiency is an economic value, not a scientific one. Anybody want to, David, maybe you want to start, and then anybody else can react. I mean, I think efficiency is clearly a value. In a couple of weeks or months, Aaron and I will have a piece out called "Arguments Against Efficiency in Science." The idea is not necessarily that efficiency is a terrible idea, but I think it is a terrible lodestar for science; I don't think it is the value that should guide all of science. There are a number of other values, and efficiency, in some sense, as the commenter noted, can negatively influence progress in science.
But I think ultimately the goal here is to weigh these different values, to think about the different values of things like justice or insight or sustainability or theoretical significance. Efficiency is just one value in the system, and we need to do a better job of weighing these, rather than just going with efficiency because it is perhaps the easiest to model and the easiest to evaluate. Can I add quickly: I thought David's reply to this question earlier was also extremely useful, and I would say efficiency has a very loaded history. So I completely agree with him that this is not a value that comes with a wide-open space for interpretation in science, and it is adopted very often precisely because there are very specific ways of quantifying it, which relate to economic models, which can then, or there is an expectation that they can, be transferred to scientific research. It's incredibly problematic in that respect, and not least because it assumes that we know what successful science is. And that is a question that has come up in many of the comments in the Q&A, and also on Twitter, in discussing this symposium. I think it's really useful here to question what success means, because I take that to be, in fact, the key issue in open science: that there is a questioning and a reframing of what success may be in research, and that needs to happen in a way which is not universalistic, generalizing, and trivializing, whereas adopting efficiency is very often exactly that kind of trivializing, basically canceling out the value of that question in the first place. If I may just add to that: I think efficiency is really tied to this kind of economic perspective on how to manage research, which is really tied to funding, especially public funding. A push for efficiency, and efficiency trumping other values, is seen as justifying the public trust in all this money being invested into research. I think there is quite a bit of space in the reform debates, and in open science more broadly, for this talk of incentive structures, and for radical propositions about how funding works, especially in Western Europe and the United States, countries that really have big budgets, especially from the public purse. And this kind of dictates the talk of efficiency for the rest of the world. I can say that from the margins of the EU: nobody talks about efficiency at Croatian universities; they want to reform the universities to behave like they do in Western Europe, and their thinking is, if we get to the point of having a problem with efficiency being imposed, that's a good problem to have. So what it means to have these funding streams function in this way and project values that trump all others is exactly the kind of question that I think open science advocates should be addressing and proposing models to solve. Do each of you have comments or questions on each other's presentations that you'd like to raise, or broader issues that deserve some discussion thematically across the sessions? Yeah, I had a question.
So, you had talked a lot about framing these problems in a geopolitical context as well, and one of the things that came to mind for me is the preprint servers that have really proliferated in recent years. What strikes me as almost common sense in a way, but also a little bit odd, is that there are country-specific preprint servers; there's an Indonesian preprint archive, right. And it's interesting that in the West they're more discipline-specific and focused, whereas elsewhere they're much more situated in geopolitical locality, based on countries. One of the things I'm thinking about is colonialism and global extraction. Because that data has value, those preprints have value, might that lend itself to, is there a potential for them to be a source of exploitation and extraction? I'll think about that. Thank you very much, Kyle; that's a very important question, and of course it can be related directly to any other open science infrastructure; this is not just preprints. This is data repositories, methods repositories, materials, code; all of this is affected in, broadly speaking, a similar way. And I think the issue is to really work, maybe not so much within disciplines, because disciplines can be very diverse within themselves, but in relation to particular objects and objectives of science internationally, to think about what a more equitable distribution of resources may be. Now, one of the things we've actually seen, especially in relation to preprint repositories in the east and the south of Europe, is that those repositories that were geared towards translating at least the title and the abstract of papers into English are providing a lot of visibility and making it possible for researchers doing local work to acquire better reputations in an international environment, so there can be a lot of benefits to this kind of work, particularly when it's sensitive to linguistic differences. One of the projects I'm doing right now, together with the FAO, the CGIAR, and several other organizations working in agriculture, is looking at what it means to do data linkage in a responsible way when it comes to agricultural development and plant data. And this is an incredibly difficult space, because it's a space which is completely and utterly dripping with postcolonial inequity, where there is a centuries-long history of exploitation by certain countries over others, and this has very much to do with the representation of materials, germplasm, and information and knowledge coming very often from local indigenous communities in those countries. And at the same time, there are really important prospects coming from potentially being able to share information globally. So the problem is, in the case of data of that type, very much like in the case of preprints, there is actually still very little recognition that there are very significant ethical issues that come with extracting data from one particular territory and bringing it to another.
So one of the things we're working on is to think about how this kind of digital information can be subjected to governing rules, or at the very least to some sort of protection at the international level, in the same way that plant material, germplasm, has been in the past; indeed, there are treaties around this. I think there's just a lot of work to be done there, and it's possible to try and think about this in a way that's solid and responsibly international. That means acknowledging the problem in the first place. And that's where open science technology can be a double-edged sword: it highlights how important it is to find ways of sharing these kinds of knowledge, but at the same time it can be used to obliterate the gigantic social issues that emerge when you start to do that sharing at a global level.

The next question is for Berna. So, please.

Berna, I think you are the only panelist here who is doing quantitative work in this area, which I think in a way makes you very helpful, because a lot of people who would instinctively dismiss critiques or criticisms of meta-science would be much more interested in reading your work and, in a sense, filtering the critiques through this more palatable means. But one thing that I was wondering about is the model that you set up for replication. It seems like the variability in it is mainly sampling variability. What I hear, especially in talking to biologists, and actually Sabina might have something to say about this as well, is that the problem is not so much sampling variability as that there is a lot of variability in the technological setups, in the accessibility of certain reagents, or in the accessibility of skills. I'm wondering, are those sorts of things amenable to the kind of analysis that you're doing, or is it too hard to include them in this type of simulation, or is it just an impossibility? How does that work?

I think it's probably impossible to include all kinds of error, but it's something that we actively try to examine, actually, because sampling variability is oftentimes the only type of variability that comes to mind when people are talking about lack of reproducibility of, say, true results. So one of the comments that we receive is: yes, of course there's potential for a type I error, but that's not what we are talking about. Of course there's that, but that aside, there are so many different types of error, and that's why we started out by trying to formalize this idea of a study. All of the components that I identified bring in further sources of error: from the background knowledge that you bring in, to the models that you consider as your proposed models and under which you make inference, to your choice of statistical methods, to design choices that you make, to sampling issues like random versus non-random sampling. In our simulations, basically, we try to identify how much more complicated all of these can make the problem of distinguishing a true result from false results, or of identifying whether something holds based on reproducibility alone. So yeah, that's a great point.
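To make that last point concrete, here is a minimal toy sketch in Python (an illustration added to this writeup, not the presenters' actual model; the effect size, sample size, and the choice of heavy-tailed noise are all assumptions made up for the example). It estimates how often an original study and an exact replication of a fixed true effect both come out significant, first when the test's assumptions hold and then when the noise violates them:

```python
import numpy as np

rng = np.random.default_rng(0)

def study(effect=0.5, n=30, misspecified=False):
    """Run one simulated two-group study; return True if 'significant'.

    misspecified=True swaps the normal noise the test assumes for
    heavy-tailed Student-t noise (df=2), a crude stand-in for any
    violated modeling assumption."""
    draw = rng.standard_t(2, size=(2, n)) if misspecified else rng.standard_normal((2, n))
    a, b = draw[0], effect + draw[1]
    # Welch-style t statistic, written out to keep the sketch dependency-free
    t = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    return abs(t) > 1.96  # crude normal cutoff in place of the exact t quantile

def both_significant_rate(trials=5000, **kwargs):
    """Fraction of original/replication pairs where both are significant."""
    return np.mean([study(**kwargs) and study(**kwargs) for _ in range(trials)])

# Same true effect in both conditions; only the noise model changes.
print("well-specified analysis:", both_significant_rate())
print("misspecified analysis:  ", both_significant_rate(misspecified=True))
```

Under the misspecified condition the pairwise replication rate should drop even though the true effect never changed, which is a small instance of the broader point being made here: reproducibility reflects model misspecification and other error sources, not sampling variability alone.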
Sampling error is just one type of error, and oftentimes, especially in experimental research, what we have is a lot of model misspecification. That's a source of error where you don't know how your statistical methods behave. We oftentimes don't have random sampling; we have biased samples, and that changes everything, because all of the statistical inference that you make, all of the statistical guarantees that you have, are based on the assumption of independent and identically distributed observations. So there are so many different elements that can mess up your results, your interpretation of those results, and your interpretation of replication studies as well. It's important to account for at least some of them; in our models we try to do that, but of course at the system level there are so many variables that you just have to make some assumptions. It's impossible to model everything; it would be intractable, I guess. Great point, though.

Steve Goodman raises a question that applies to everyone. He says: this is focusing a lot on the theoretical framework of meta-science. I would like to pose two challenges to the presenters. What do you think is the most important topic or skill to be taught to young scientists? You can specify the discipline. And what is the one most important institutional change to improve the validity or efficiency of the science they do? And you can avoid efficiency if you like.

If it's okay, maybe for the first one, the most important topic or skill for young scientists, I want to come back to where I started, with thinking about reflexivity, and specifically Jill Morawski's research on the history of psychology that Ivan also cited, and why we tend to see less of an infusion of reflexivity, and of taking seriously the significance of our own subjectivity in the observations that we make, in other disciplines. So I don't know if reflexivity is a skill to develop, but it definitely started to gain a little bit of steam with quantum mechanics and Niels Bohr, with the insight that the simple observation of subatomic particles can change their activity, right? But I don't know if it can even be distilled into a skill. It's a really good question. I like that.

May I make a suggestion about skills too? These, of course, are very difficult questions, and I think for me it is probably transdisciplinarity. And that is not in opposition to hyperspecialization. I think we are stuck with being specialized; that's the way the world of science is proceeding. Things are more complex, we have more and more methods, and it makes sense that there are very specific ways of specializing and acquiring technical skills. But I think the problem is that very often researchers are trained, and I have seen it in many institutions, to think that the moment they specialize they basically lose the ability to communicate with people who have different specialties and come from different perspectives, which is the opposite of what I think they should be trained in. There are ways to do this: they should really be trained to use their point of view, their specialty, to contribute to projects which have many different types of expertise and perspectives involved in them, and to seek out those kinds of problems.
But I think very often we're educating people to shy away from those kinds of projects, and that is really disastrous, I would say. As for institutional changes: anything that can help with that, and there is not going to be just one.

Yeah, I think this is a really difficult question, because everybody's going to suggest whatever comes from their own corner and think it needs to be imposed on scientists, and I think many of us don't think things should be imposed, right? Curricula are already overburdened, with programming, reflexivity, and so on; everything we add just burdens them even more. My perspective here, and Kyle stole it from me because I wanted to say reflexivity too, is informed by Jill Morawski's work. And I think this is a problem that historians of psychology feel very strongly, because there are comparatively very few of us, many work within psych departments, and psychologists, especially the quantitative psychologists who dominate, are very reluctant to engage with the perspectives of historians, let's put it like that. So I think it would go a long way to push for reflexive thinking about methods, reflexive thinking about theories, and so on, and you can do that by exposing students to anything from philosophy to STS, to different kinds of interdisciplinarity, to begin with.

As for what to change institutionally, I always come back to a point from a paper by Andy Pickering about the discovery of weak neutral currents in physics, in the sixties I think that happened. He describes the interaction between experimentalists and theoreticians in particle physics. And this always struck me: where are the theoreticians in psychology, as an organized group, free of this dictate, or mandate, of producing empirical work in order to participate in the conversation? I think this really is part of the problem with reflexivity that psychology proper, and many subject areas in psychology, have. So, changing that institutionally in some way, either through teaching or through securing space in journals for either theoretical formalizations, like those being suggested now, or different kinds of synthetic work that are not just literature reviewing, I think that's a way to go, at least for psychology.

Any other reactions to that prompt? I, oh, go ahead, go ahead.

Okay, with regard to the skill: it's a very interesting question, and I agree with both of you, Kyle and Sabina, that reflexivity and transdisciplinarity both matter a lot. Maybe closer to reflexivity, one thing that I think could be cultivated, I don't think it can be taught, and I don't think it needs to be part of programs, but as individual researchers it's something that we can work on cultivating in ourselves, is awareness of assumptions: assumptions of the fields that we operate in, assumptions of the research that we read, assumptions that we ourselves have when we're thinking. A lot of times we don't know them; a lot of times we don't think about what we assume while we're formulating the research question, when we come up with a model, or when we're reading about the models or results that we are exposed to. Questioning them is important, and identifying them is the beginning. I think it's a hard skill to cultivate, and it's very important to develop some kind of awareness.
So once we're aware, then we can start questioning whether they make sense or not, but a lot of times we're not even aware that we have assumptions that other people don't have or don't share. That's what I wanted to say.

Kyle, did you have any follow-up there?

I wanted to follow up on the institutional part as well, because it does occur to me that one really significant institutional change that open science advocates can push for is to ally themselves more explicitly with the student debt cancellation movement and the push to abolish student debt. One of the things that occurs to me is that the types of questions being asked in labs, the types of methodological pursuits, and the type of data coming out of that are coming out of extremely precarious, casualized working conditions for many folks, right? Tom asked a question about the casualization and exploitation of knowledge workers. This isn't necessarily a new problem, of course, but there tends to be a kind of aura that surrounds the idea of the starving student, like the starving artist, toiling away in the lab, unable to afford food. But like most other sources of misery, it's been intensified and accelerated under contemporary capitalism, and we've reached a full crisis where knowledge workers are constantly stressed out about how they're going to survive, how they're going to reproduce the conditions of their own lives. Not only do they come out with boring science, safe science, reproducing just the same stuff and not really breaking ground, because they know that will get them a job, but they're also going to be exhausted and produce awful science too. So that's one key institutional change that we as open science advocates can really push for, I think.

Thank you for that, Kyle. Looking at the time, I think we can do one more big-picture question that maybe everybody will have some comment on. This is a theme that was most explicit in Ivan's and David's presentations but really was apparent in all of them. The point is that there's clear heterogeneity in the motivations of reformers, in the passions they have or the perspectives they are advancing, and also in the desired end states. Is it rigor? Is it efficiency? Is it transparency as an end in itself? Is it democratization? What are the challenges and opportunities that you perceive, from your perspective, of reform embracing that heterogeneity, or of reform ending up unable to advance because of those differing motivations?

I can speak from personal experience here, I think. I'm trained as a psychologist, and in my PhD I moved to history and philosophy of science and worked on the history of 20th-century psychology. I found it extremely liberating in that sense to move from a psych department to a really interdisciplinary institute where I was talking every day to everybody from a historian of biology to a philosopher of quantum mechanics and everything in between, right? And I realized that that wasn't accessible to me in how I was trained. In the beginning I thought, that's because you come from a relatively provincial, marginal university in Eastern Europe, so of course that wasn't accessible to you.
But then you read about the disciplinary formation of psychology in general, and 20th-century psychology is really closed off. And there are moments where this changes; the cognitive revolution is a great example, where something broke open and this kind of interdisciplinarity and transdisciplinarity did things to how people thought, right? So I think driving home this point of epistemic diversity, and any other kind of diversity, within an open science movement that wants to reform a discipline like psychology is really crucial to making these perspectives accessible to as many people as possible. This is of course extremely difficult. I think we need to be trained, we need to be explicitly taught, in intellectual humility, and that comes with the experience of interacting with different perspectives on the same subjects, right? I don't know if I answered your question, but I would put diversity, epistemic and any other kind, as really a central issue for the reform movement to actually be successful. How to do that, I am unsure myself; we have all read the criticisms of mandating things and using central institutions to change things. Those are really strategically efficient, but they produce a lot of unintended effects. And I think that's the million-dollar question: how to do it well, how to form a movement that's grassroots but also strategically effective in bringing about change within this insanely complicated and distributed system of research, right?

If I could just make a quick comment here: I think the biggest danger, the biggest anxiety that I have about this whole meta-science movement, is that it somehow becomes essentially an independent field where people can be raised intellectually entirely within meta-science without having to go through a discipline. To me that represents an enormous risk, because, just as all the panelists have talked about, there is great value in really knowing the science itself and appreciating the differences between sciences. The flip side of that, I think, is that insofar as meta-science can interact with a traditional field like the social studies of science, which already has this kind of rich tradition, infuse it with a lot of new methods and new perspectives, and maintain, as I was just talking about, a focus on not just diversity but also a contextual understanding that science occurs within particular political, economic, and institutional settings, insofar as meta-science can be that space where we can have these cross-disciplinary conversations, like we're doing right now, I think that represents a really exciting goal for it, one that would be far preferable to a kind of narrowly focused meta-science that really becomes almost a bureaucratic or managerial force.

Any follow-up to that, or a closing comment? Sabina, please.

Sorry, just directly in response to David's point: indeed, I was really glad to be part of this symposium, and grateful to Brian and his colleagues for organizing it, partly because, to be honest, I had also understood meta-science to be a very formal, quantitative approach to these issues.
In fact, it was an open question for me, participating here, to what extent we can bring together these more qualitative and quantitative approaches, these questions of social critique, with what is being done in the field, so I'm really glad to see that there can be this dialogue, and that this can be a step towards doing that.

And in terms of the fear that thinking about diversity may in fact dilute goals like openness so much that they become impractical: I think we have a lot of good examples of this really not being the case. As a very practical example, when Plan S was launched by cOAlition S a couple of years ago, it was launched without a lot of consultation, actually. They did run a consultation after the launch, and there were more than 600 responses. These were incredibly helpful, and I'm now working as a Plan S ambassador with cOAlition S to try and keep modifying the system that was initially devised in order to respond to them. To be entirely honest, I also think quite a lot of the points that were raised in the consultation would have been easy to spot if there had been attention to the diversity of scientific practices to start with. That Plan S was going to affect the humanities adversely and dramatically should have been clear from its inception, right, things like that. So I think there is a place for regular consultations and for regularly understanding how things are evolving in different contexts, but it's also quite important to try and do a little bit of due diligence when launching a big initiative, because rather than diluting its effects, I think that can really magnify them and make it much more effective, if you want to use that word.

Very good. Any final comments before we close? Excellent. Well, thank you, panelists; this was a fantastic discussion, broad-ranging and deep simultaneously. Thank you to the attendees for coming and engaging with all of these questions; there were many, many more than we were able to get to. Hopefully those will continue to stimulate positive engagement on social media and elsewhere as we wrestle with these issues. And there's still a lot more work to do, so consider putting in submissions for Meta Science 2021 and attending yourself. The submission process is not open yet, so you're not missing anything, but it will be coming soon, so that we can have more discussions and learning like this. Thanks again, everyone; have a great rest of the day, or a good night, wherever you are.