Okay, let's get started. Welcome, everybody, to this session on unintended consequences of scientific reforms at Metascience 2021. I'm really excited for this panel. This is my excited face; it's the best I can do. I'm really excited to be here today, to facilitate this discussion with Noah, and hopefully to hear some questions from your end as well. First, a brief introduction. My name is Leo Tiokhin. I'm a postdoc at Eindhoven University of Technology, where I work on a grant to study how to improve the efficiency and reliability of science, mostly studying how incentives affect scientist behavior. My background is a bit different: my PhD is in evolutionary anthropology, but that was in a past life. And I'm co-moderating this panel with Noah van Dongen.

Hi, I am Noah van Dongen. I focus on highly informative tests and scientific explanation. I am currently doing a postdoc at the University of Amsterdam with Denny Borsboom, working on theory construction methodology. I have a background in philosophy of science, with a PhD in philosophy of science and a PhD in experimental aesthetics. Yeah, that's it.

That's a perfect introduction. Before we introduce our esteemed panel members, we wanted to give you a sense of why we thought this would be an interesting panel to have. Noah and I discussed this, and we share a few core reasons. Both of us have read literatures in other fields, many fields across the social sciences, and have slowly realized just how hard it is to anticipate the consequences of changes to any system, let alone one as complicated as science. Even very simple situations backfire, like introducing a fine in a daycare center to try to stop people from showing up late. Many of you might know this paper; "A Fine Is a Price," I believe, is the name. A daycare introduced a fine to stop parents from showing up late, and even more parents showed up late: before, parents felt guilty for coming late to pick up their kids, and afterward they could just pay the fine and buy off their guilt. So that's one example of how financial incentives can crowd out other kinds of motivation, and even in a system that simple you get unintended consequences. And we know that past reforms that tried to standardize how we evaluate scientists, measuring the impact of their work by the journals they publish in or by counting publications, were themselves reforms at some point; now people are gaming them, we're all complaining about them, and we want to change the system again. And me personally, in my own research I've built models of the scientific process and found results that make sense in hindsight but were completely unanticipated, like the fact that in some cases rewarding negative results can actually harm the reliability of science, because people are then incentivized to conduct really bad, low-quality studies.
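To make that last point concrete, here is a minimal toy model in Python (my own sketch for this write-up, not the actual model from Leo's research; all parameter values are illustrative assumptions). The idea: if journals reward negative results just as much as positive ones, every study publishes regardless of outcome, so the payoff-maximizing strategy under a fixed participant budget is to run many tiny, low-powered studies, and the positive findings that do appear become less likely to be true.

```python
# Toy model: does rewarding negative results encourage low-powered studies?
# A sketch under assumed parameters, not a definitive implementation.
from statistics import NormalDist

ALPHA = 0.05        # one-sided significance threshold
BASE_RATE = 0.10    # assumed prior probability a tested hypothesis is true
EFFECT = 0.3        # assumed standardized effect size for true hypotheses
BUDGET = 10_000     # total participants a researcher can afford
OVERHEAD = 100      # assumed fixed per-study cost, in participant-equivalents

Z = NormalDist().inv_cdf(1 - ALPHA)

def power(n):
    """Approximate power of a one-sided z-test with n participants."""
    return NormalDist().cdf(EFFECT * n ** 0.5 - Z)

def ppv(n):
    """Probability that a significant result reflects a true hypothesis."""
    true_pos = BASE_RATE * power(n)
    false_pos = (1 - BASE_RATE) * ALPHA
    return true_pos / (true_pos + false_pos)

def best_n(reward_negatives):
    """Sample size maximizing expected publishable results per budget."""
    def payoff(n):
        # If negatives are rewarded, every study yields a publication;
        # otherwise only significant (positive) results pay off.
        p_pub = 1.0 if reward_negatives else (
            BASE_RATE * power(n) + (1 - BASE_RATE) * ALPHA)
        return BUDGET / (n + OVERHEAD) * p_pub
    return max(range(10, 2001), key=payoff)

for reward_negatives in (False, True):
    n = best_n(reward_negatives)
    print(f"reward negatives={reward_negatives}: optimal n={n}, "
          f"power={power(n):.2f}, PPV={ppv(n):.2f}")
```

With these assumed numbers, the positives-only regime favors studies of roughly n = 50 with decent power, while rewarding negatives pushes the optimum to the smallest allowed study, and the positive predictive value of the published positive results drops accordingly. The point is only directional; real models of this question are richer.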
So because of these convergent sources of worry, we've been concerned that, as great as it is that we're all pushing for reforms and there's a lot of excitement, we also want to learn from past mistakes and from other fields about how we can prevent, and potentially anticipate, unintended consequences of our interventions. So we wanted to put together a session with panelists from diverse fields who have expertise both in the scientific reform movement and in thinking about interventions to systems, to see what the gaps and opportunities in this domain are. The goal of this session is of course to have a nice, fun discussion and to learn some things, and potentially, later, to produce a report or even a paper, but that's way down the line. So please, for the audience: if you have questions, feel free to write them in the chat, and we will do our best to incorporate some of them throughout the session. That's all I have, and now Noah will introduce our esteemed panel.

So, in no particular order: we have Denny Borsboom, professor of psychology at the University of Amsterdam. According to his website, which I admit I read only a few minutes ago, his research focuses on the conceptual analysis of psychometric concepts, the development of new psychometric techniques, and, more recently, the formation of psychological theories. Next we have Anna Dreber, professor of economics at the Stockholm School of Economics, whose research mainly focuses on meta-science and behavioral and experimental economics. Our third panelist is Sophia Crüwell, PhD student in philosophy at the University of Cambridge; her research revolves around the replication crisis in psychology, and she also works on empirical meta-research at METRIC Berlin. And finally, last but not least, Karthik Panchanathan, associate professor in the Department of Anthropology at the University of Missouri; his research interests include the evolution of cooperation, cultural evolution, and the evolution of development. So why did we invite these people? After careful consideration, we invited this diverse group of scholars because they have been involved in the scientific reform movement and have experience reasoning about, and through, the consequences of intervening in complex systems. I guess that's it, right? So that is it for our scripted introduction. Thank you.

So we wanted to start with an open question to the panel about your experiences with unintended consequences: whether some of you can share personal experiences with unintended consequences of scientific reform, or, even if you haven't had a personal experience, unintended consequences that you worry about, why you worry about them, and how you see things playing out. If any of you would like to share and start off the discussion, the floor is yours.

I can start with an anecdote which I think is relevant to what we're talking about. One of the core ideas of some of these reforms is increased transparency and attempts at reducing so-called strategic ambiguity. I had an experience a little less than a year ago,
or maybe a year and a half ago, I can't remember, in which we were ready to submit a paper on meta-science, in particular on applying costly signaling theory from economics and biology to the publication process, the peer review process in particular. Leo, being a good scholar, posts the paper on a preprint archive and advertises it on social media. We get some great feedback, including a comment from Carl Bergstrom, an evolutionary biologist, who points out some very obscure papers from economics in the early 2000s. Leo decides to rewrite the introduction and mention this research, which we hadn't known about; it wasn't exactly what we were talking about, but certainly related. The editor, a psychologist, would certainly not have known about this research, but because we flagged it, she sent the paper to an economist, who read it and said: this is a really good paper, but I don't care about psychology, and we already know this in economics. And one of the points we had made was that journals over-emphasize novel results. As a result of this comedy of errors, the editor rejected the paper on the grounds that it wasn't sufficiently novel, even though nobody in psychology knew this; only a few economists did. A good example of something like transparency backfiring: not any big reform, but an interesting anecdote, I thought.

Yeah, that was no fun. I also have an anecdote, if we're starting with anecdotes. A couple of years ago I was involved with E.J. Wagenmakers in a paper that emphasized confirmatory research strategies and argued that you should use preregistration in cases where you have a theory that you actually want to test. I was sort of expecting, or hoping, that this would lead to a situation in which psychologists, the field in which I operate, would acknowledge that, say, 90 to 99% of their research is in fact explorative. But of course that didn't happen. What happened instead is that now almost everybody has the idea that research has to be hypothesis testing. And that went so far that one time I had a student group in a research practical. They had been trained in openness and transparency, and trained well. But we were talking about the research project, and at some point they said: well, what are our hypotheses? And I said: well, you're free of course to think about what you would expect to see given what you know, but I don't personally have a strong idea of what I expect to find. And then one student said to the other: a supervisor without a hypothesis; they already warned us about this. And, you know, the funny thing is that this is a typical case of backfiring, I think, because what was intended as a pretty much technical methodological addition, one that applies when you have a really firm theory that you want to test, is transformed into a sort of universally normative moral imperative: if you don't adhere to it, you're a bad scientist or even a bad person. And it shocked me a little bit that things can take on this very normative, moral character. So that's what I worry about.

So the unintended consequence is in how we advocate for these science reforms. Right, because I guess the problem there is that these students probably...
Yeah, someone explained these concepts to them clearly, in a way they thought was straightforward, and then it morphed into something that creates this false sense of understanding.

It was actually a good educational experience, because of course I was taken aback a little bit, but then I explained how I think about explorative research and why it's really important. So it was a good educational experience, and I also don't think they were taught badly or anything like that; it's just that these rules easily translate into universals.

I wasn't going to imply that they were taught badly. I just think there's this tension: obviously you want enough nuance for things to make sense, but you also have to make things straightforward and easily applicable, because otherwise there's not going to be much uptake, right?

Yeah. Can I also say what I worry about, if that's not too abrupt a transition? Very smooth. So, what I really worry about a lot is the consequences of implementing these reforms in a way that probably works best for established researchers at rich universities: equity issues, in at least two forms. One, in the smaller form of how this affects early career researchers: what do we do in that transition period where grad students who adopt reforms relatively early then don't really fit into the system that gets them the next job? Do they just die out? And then, in the larger sense, the way we are implementing this is made for established researchers at rich universities: the bigger problem of creating more work and a bigger financial burden that definitely can't be taken on by researchers from all universities and all countries, thereby further reinforcing those inequity issues.

Do you think there are some types of reforms in particular that are...

Well, I don't think this is a problem of the reforms themselves; I think it's the way we've chosen to implement them. Sometimes the easier road is to go with the terms we already have and create these green open access arrangements that still feed the current publishing system, because that way we don't have to burn it all down but can still kind of get where we want to go. But that only works in a system where your university or your grant can pay for it. So I don't think these problems are inherent to the reforms, to wanting open access publishing, or wanting people to put their data online, or wanting people to increase their sample sizes, or whatever; there are probably ways of doing it differently. It's just that at least the first steps of these solutions are created by a certain group of people.

On this thing about equity: something that's been raised quite a lot when I've been talking to economists is that experimental economists are worrying
that it's unfair that when you have to do larger studies with higher statistical power, only rich labs can do studies. And we're talking about labs in the US raising this point, so you can imagine that many other people are even more worried. Of course, making things more unfair is not something we're after, but at the same time we don't want a bunch of low-powered studies that don't really say much either. So I guess more team science, but that's easier said than done: we're pushing for it and we like it, but how do you reward it? And to a larger extent, related to what Sophia was saying, I'm worried that right now we're weeding out some of the junior people who are doing their best to do reliable science, because we're in a situation, at least in economics, where most people are continuing with business as usual; there haven't been that many consequences of any reform for them. But some of the people who have reacted are doing better: they're writing pre-analysis plans that they're sticking to, they're getting null results, and they're getting a harder time on the job market. So I was telling colleagues in my econ department: next year when we interview job candidates, let's only go for those with null results and show that we're serious about what we're after. And then some colleagues said: but there's no point, because if someone has null results we won't see that paper; in economics the person won't write it up, so of course they won't go on the job market with it. So we'll miss the people we're interested in even by doing that. So what do we do? I find that problematic.

Actually, just today my colleague Han van der Maas walked into my office and said, in response to this same concern, that we should have an n-factor in addition to the h-factor, where the n-factor is the proportion of null results you find; and if that factor is too low, then... It's a joke, of course, but it's a good one.

In terms of what do we do: has anyone who's commented on this seen solutions? What do we do with the issue that young people who adopt these reforms are selected against because the prevailing system rewards different things? Have you seen people who have managed to succeed otherwise?

I think I'm living in a relatively easy field now. Not when I was starting this, but now, for methodologists with an open science signature, it's not so difficult, and in psychology of course there's a lot of momentum. But I imagine that in economics, for instance, it might be different; I'm not very well versed in economics. But we in psychology can really start putting this on the agenda for hiring policies. There's a lot of talk, in the Netherlands but also internationally, about re-evaluating the criteria we use to evaluate job applicants, and this could certainly, should certainly, be added.

Yeah, but if those criteria are added, are we going to make them necessary conditions or just a nice addition, so that if you have the right papers published in the right channels you don't have to worry about it?

Yeah, I'm not an administrator; I'm not sure about that.
I think this is an interesting one for unintended consequences, because if you make it mandatory, you're going to get unintended consequences, and if you don't make it mandatory, too. Yeah, how do you react to this? Sorry, Sophia.

Oh no, sorry, I was just going to reply to Denny's point about the unintended consequences you're going to get either way if you make it mandatory. That to me still seems more desirable than putting in a policy that, at the end of the day, might help someone who can fulfill the requirement but won't touch anyone who doesn't bother trying to fulfill it.

Since we have such a diverse panel, I'm wondering whether you've seen reforms that developed in certain fields being adopted in your own field, or people trying to adopt them, and whether you've seen any problems with that, or had any personal...

Yes. I of course love the pre-analysis plan revolution, and I think it's fantastic what's happening. Around the same time as that was happening in psychology, there was also preregistration in economics, and that typically means something very different: basically, at some point when you do a study, even after the study, once you've written up the results and everything, you register it at the American Economic Association's registry, and you get a number which you use when you later submit the paper. That's a very different thing from having a pre-analysis plan, and an OSF registration stands for something else again, right? So when we started realizing in economics that pre-analysis plans were happening in psychology, some people were super excited and started writing them. Others just continued registering their studies with a very vague hypothesis: no test, no information about tests or samples or anything, basically. And everyone gets the same type of registration number, which becomes completely meaningless. So now if I see a registration number, I can't infer anything whatsoever about whether there's a pre-analysis plan that people are trying to stick to. But many other people, I think, are a little bit confused by this and treat the number as a good signal of quality. That's pretty problematic: we haven't reached a good equilibrium, and we're not near anything good at this point. So, some confusion, I guess, when we move between fields and try to learn from each other.

Any other voices from the panelists? Have you seen any of that? Earlier we discussed some of the potential unintended consequences of adopting preregistration. Have you had other experiences, either with adopting other reforms that have gone wrong in anthropology or philosophy, or other things you worry about with preregistration, in terms of ways people might interpret it or game it? Maybe you've had experience gaming it that you can share with us. Yeah, please.

It's not really preregistration, but I think one of the fears I have comes from thinking about how anthropologists think about cultural change, in particular the transmission of cultural ideas from one group to another. Science is a culture, and different disciplines have their own cultures.
There's a host of things that make up that culture. Some of them are very modular, salient, and easily exported; other aspects are deeply embedded and tied in with a whole bunch of other practices and norms and institutions, some of them opaque and invisible. And so one of the fears is that you end up adopting the easily exported, salient features without really understanding how they're causally related to other norms and habits and institutions. Since we were talking about anecdotes, the anecdote I sometimes tell my students is the so-called cargo cults that arose around World War II, in which the Americans, in fighting the Japanese and taking islands in the South Pacific, would clear fields for planes as part of the war effort. The indigenous peoples didn't really understand anything about how planes worked, but they did see interesting cargo coming out of them. What they observed was: if you clear a runway and start doing a particular kind of dance, planes will land and cargo will emerge. And lo and behold, cargo cults started forming, in which people would reproduce all of the surface-level features that were easy to observe, with the expectation that airplanes would land and give them cargo. Of course, it never happened. And, you know, Richard Feynman back in 1974 worried about exactly this problem in coining the term "cargo cult science", in which he contrasted his own cherished field of physics with disciplines, I think he was picking mostly on psychology, but others as well, where a lot of the trappings of science are borrowed but not the core scientific methodology. The broader point is that it's very difficult to know all of the aspects that make a particular practice work. So that was just a thought I had; I'm curious to know what others think.

It's a really common worry for me. I've taught a lot of research methods, the scientific method and all that, and the line in psychology is always that what makes research scientific is that you follow a rigorous methodology, et cetera. But following a rigorous methodology can itself be a ritual. If you preregister a dumb idea, it's still a dumb idea, right? It doesn't necessarily get better by adhering to a set of practices. So in my field, in psychology, I'm often actually a little bit worried that the emphasis on methodological rigor may also serve a little as a cover-up for the fact that sometimes we don't have very good theoretical ideas about how things work. That's a different level of cargo cult, of course, but it's a worry I often have.

It's hard to tell, right? If you're in a cargo cult, you wouldn't know; it's only from the outside that you can see it. If you're inside that community, it makes a lot of sense.

Yeah, so you wouldn't know, and that makes it difficult. Is this what you were thinking about when you wrote the theoretical amnesia piece?

Yeah, well, that was one thing, of course, because you're really not sure it isn't about you. But it holds for a lot of these things: we often have no outside vantage point. So you really don't know, unless of course you have technology to prove it. That's the big thing that, say, the physicists have: you built an atomic bomb.
Okay, that's manifest proof that what you are doing really makes a lot of sense; it's unquestionably, in-your-face correct. But I don't think we have something like that in psychology.

I think most fields have that problem. I don't know about the other fields here, but I guess the economists and the anthropologists also have some of this.

It's probably much worse in anthropology than anywhere else, because there's a very famous set of ethnographies that is often taught. This is one of the advantages, at least in principle, of psychology: you can design an experiment and test it again; even if the theory is bad, you can at least replicate it. Among the earlier ethnographies of the 20th century, Robert Redfield went to a small village in rural Mexico and cataloged all the wonderful ways in which these people were cooperative and unlike the people in Mexico City. Then some 15 years later, a generation later, Oscar Lewis went to the exact same village and reached the exact opposite conclusion, and from his study of that same village he coined the famous expression "a culture of poverty", and talked about how much richer social life was in big cities and urban areas compared to rural areas. And it's not clear what to make of those two ethnographies, other than to teach them as an object lesson in the dangers and difficulties of doing science when you have so many variables at work.

So how do you think this ties into the idea of incentivizing replications and making sure work is replicable? Sorry, Leo, go ahead.

No, I was just thinking: you brought up a result being non-replicable, but then you don't know why it's not replicable. One of the biggest increases in emphasis now is on rewarding replications, right? So are there things related to that that you worry about, Anna, Denny, Sophia, about this increased incentivization of replications? We're very happy now to have places for people to publish replications, but do you worry that it could have side effects, or things we're not anticipating right now?

So maybe, I mean, an intended consequence of the replication movement and all these revolutions is of course that we increase the reliability of experimental research; most of this has been about experiments. Another anecdote, maybe: working in economics, when the replication crisis hit psychology, lots of economists were saying: oh yes, the psychologists, they have a problem. And then we started doing replications. Many people had done replications in economics, of course, but we did a systematic replication project for experimental economics, which showed some problems too. And then many of our colleagues were like: okay, you guys who are doing experiments, you have a problem. So it's good that we realize we have problems, but now we want the rest to realize they have problems too, and that's easier said than done when you're working with so-called natural experiments and regression discontinuities and instrumental variables and the like; I can't redo a natural experiment that happened at some policy level somewhere.
I can barely get the data and actually look at it. So I think it's unfortunate if everyone comes to believe that we, who are trying to do good things and improve matters, have problems, while the rest of the world keeps going with business as usual.

One other thought, following up on Anna's point: one fear, potentially, is that replications are fantastic when they can be done, but one potential downside is that we discount research that can't, in principle, be replicated easily. This also feeds back to what Denny said: in many cases these field studies, these observational studies, these natural experiments are the grist for hypothesis generation. When you do the easy-to-replicate experiment, that's often the tail end of the research project, where you already have a lot of background knowledge about a phenomenon. And this comes back to the idea that everything must be a confirmation rather than an exploration. If we over-emphasize replications, then things that can't be replicated don't count, potentially.

I completely agree with this, although I have to say, maybe just as an aside: we're now talking about fears and dangers and horrors, but I'm generally very happy and optimistic, and I think this is a tremendous improvement over, say, 10 or 15 years ago, at least in our field. That needs to be said as well, because otherwise it can seem as if we're looking at all this gloomily. But the tail-end part I think is really important, because most of these techniques, and they are techniques, right: preregistration, data archiving, replication; they're methodological strategies, and almost all of them really apply to the confirmatory part, where you have a really strong theory and a critical experiment, the whole Popper story, basically. That's where they apply. But in my view the field, by and large, some areas perhaps excepted, isn't there. So it's much more about transparency of the process you follow to do your ethnography, or to do your explorative research. And that's not obvious, how you do that. I don't know whether that makes sense.

Yeah. I think the problem might be that we often end up framing these reforms as being about eliminating bias, for example, when probably it's much more about making transparent what's happening, so that bias can be eliminated if it's there and can be eliminated. It's gone too much in the direction of: hey, we want to get rid of it. Whereas sometimes it's also fine to just make it transparent, and that would also be more inclusive of research where maybe you can't completely erase all biases, but by making transparent what's happening you can at least give the reader the context they need. So this is something I definitely worry about as well in the way we talk in this whole reform movement, where it kind of seems like we've got this idea that if we do all of these things, A, B, and C,
if everyone does all of that, then we'll all end up being perfect rational agents and we won't have to worry about anything anymore, forever. And I don't think that's desirable, or possible, really. Maybe it's desirable, as long as we know that it's actually impossible for human beings who aren't gods. Maybe some just think of themselves as gods, who knows.

Is the fear that we're trying to turn an art into an algorithm?

No, I don't think that's the fear. The fear is really about this illusion that we can completely get rid of biases, and that that's the thing we should focus on, rather than just making transparent what these biases might be when you can't remove them completely.

Kind of related to what Denny was saying, I hope: this also resonates strongly with me, in the sense that, for instance, I always think the reason Francis Bacon, the old philosopher, was so important was that he was one of the first to realize that we are human and we are the problem. You cannot leave us alone with evidence or data; everything will go wrong; we need all kinds of checks and balances, because we suffer from all of these biases. And that won't go away. It's not as if, once you do open science and transparency, you don't have this anymore, right? That's what I would sometimes worry a little bit about: the idea that if we only do it like that, then it's okay. These are human projects, after all, and I'm not sure there's enough discussion in the open science community about how that actually pans out. I sometimes miss that a little bit; the diversity of opinion.

To go back a couple of comments, to Karthik: the situation where you have some study that cannot be replicated for some reason, because it's a natural experiment, and people therefore don't care about it anymore, would be super unfortunate; that's obviously not where we want to end up. But I would say that in economics and parts of political science and sociology, fields where you work with register data or observational data, it's almost the opposite: good for you if you have a natural experiment that nobody can try to replicate, because then you're way safer than if you have something where people can actually try to redo the study and see whether the results hold. So right now, I think, the situation is really the opposite of what you said.

Okay: you're better off having something that nobody can try to replicate. Right.

There's a bunch going on in the chat here that I'm struggling to keep up with. There's a separate discussion going on, fueled by Karthik's initial anecdote, about the paper he tried to publish. Maybe we should address this briefly, because we might lose a few participants otherwise. So, on whether or not it should have been published: I think it was published; it's a good paper published in a good journal, just not the original journal. And I think the question was about whether journals should publish things that aren't totally novel, and there was a very interesting comment about that, and then there were also comments about whether everything should be published or not.
I mean, that's a whole other conversation, about the gatekeepers of knowledge and journals and what role they play in all of this.

The striving for novelty is something that really creates a lot of problems. It's just very hard to write a paper that goes: you know, I went on this research project, and it's not the strongest work I've done, it's not so interesting, but I did my best, this is what came out of it, and I don't know what to make of it. That's not a paper you can write.

I think the other issue with novelty that was interesting to me in this particular case, and maybe relevant to what we're talking about, is this: I think every science would be much stronger if there were more conversation and communication with other sciences, because different sciences have hit upon different methods and different techniques and different theories, and it's a good thing when you can port a theory from one discipline into another. The theory at the heart of the paper that Leo and I wrote, costly signaling theory, was widely known within economics by the early 1970s, and three economists, Spence, Akerlof, and Stiglitz, shared a Nobel Prize for work on the role of asymmetric information in markets. This was not known in biology; it was independently invented there, and it was one of those cases where an economist, Jack Hirshleifer, tried to point out to biologists: hey, what you're doing is interesting, we've already thought about a lot of this, you should read these papers. And they didn't. And part of the reason is strategic: if they had admitted they'd read those papers, it would suddenly deflate their own contributions in their field. The premium is on "I came up with this without anybody else having ever thought of it", as opposed to "I found this really interesting tool over there that I'm going to apply here".

But then again, in their defense, it's not really easy for an outsider to read the economics papers.

Oh no, I don't disagree with that at all; it is hard to read papers from a different field. What I meant was that there are strategic reasons. For example, in our case, had we not posted the paper on a preprint archive, we wouldn't have known about that work. After we posted it, we could still have ignored the comment and submitted the paper as it was, since it was already written, and the editor would never have known. But by following that chain of events, we were suddenly penalized for being transparent, for admitting that somebody else had come up with the idea.

So I see two directions here. One is the penalization, the unintended consequence of being more open, for the scientist themself. I wonder if you have thoughts about other unintended consequences of openness for science in general, because of all the reforms, the ones that have proliferated most are the open sharing of data and materials, and this gets to Denny's point about it becoming a normative thing: if you're not open right now, you're seen as a bad person.

Can I add one thing to what Leo said? I'm really curious about this too, but one thing I wanted to add, as a game theorist, is that the way I'm thinking about this is that right now, it seems to me, sometimes
there's a frequency-dependent problem. If everybody were open and honest, and you were penalized for not being so, that would be a wonderful world. But right now it is a minority strategy. And so the question is how you avoid penalizing people for trying to adopt it. As is often the case, when a strategy is new, even if it's a better one, it's not going to do well until the majority adopts it. So there is this challenge.

So I think this has to be solved through incentives, in the sense that you publish your data and get credit for it. I think journals can help: you have the Journal of Open Psychology Data now, and there's Nature Scientific Data. That's a way you can actually incentivize it into becoming a majority strategy. But that also has costs; I think Sophia said something about that earlier. It's a lot of work to create databases that are usable by outside researchers, and I know a lot of researchers who do that and never get their data downloaded. So there should then also be more reuse of these materials. We see the same with preregistration, right? Research has found that preregistrations often aren't really followed up on; and then it's just a ritual.

Sophia, have you experienced, or do you have worries about, the costs of openness in any way?

Following up on what Denny just said: I find this particular worry really tricky. Because yes, on the one hand, you're creating all of this extra work, and then there's the question: maybe you can pay someone from a research fund to do that for you, but someone else can't, and that's a problem in itself. But on the other hand, I think that even if no one ever uses that data, and no one ever looks at your preregistration, having it out there is useful if you can put it out there, even just as a record of what happened. Particularly given that a lot of research is actually done by people who aren't permanently employed and who might move to other places, it's just a good way of making sure that all of this work that was done is documented somewhere, in a way that could be reused even if it never is. But yeah, there's a tension there, right? I do think there's an inherent value to putting this stuff out there even if no one ever uses it, but on the other hand it is a lot of work that not everyone can afford to put in.

There's also a question from the audience, from James Smith. He responds to Denny's comment, saying that the focus on preregistration presumably diverts attention from other practices, for instance choosing important research questions, and he asks: should meta-science be spending more energy working out what is actually worth investigating? Is that a role for meta-science? Isn't that what a philosopher does? But a philosopher doesn't tell scientists; he figures it out for himself and then talks about it to other philosophers, not to the scientists. Does anybody think meta-science should be investigating what's worth investigating, or is that the role of the substantive scientists?
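Karthik's frequency-dependence point above can be made concrete with a small replicator-dynamics sketch (my illustration for this transcript, not anything presented in the session; the payoff numbers are arbitrary assumptions). The payoff to being open grows with the share of the field that is open, while business as usual pays a constant amount; the result is a tipping point below which openness shrinks even though everyone would be better off above it.

```python
# Replicator-dynamics sketch of a frequency-dependent adoption problem.
# Illustrative payoffs only; not an empirical model.

def payoff_open(x):
    # Openness pays more as more of the field is open (credit, reuse, norms).
    return 0.5 + 1.0 * x

def payoff_closed(x):
    # Business as usual pays the same regardless of what others do.
    return 1.0

def simulate(x0, steps=200, dt=0.1):
    """Replicator dynamics: dx/dt = x * (1 - x) * (payoff gap)."""
    x = x0
    for _ in range(steps):
        x += dt * x * (1 - x) * (payoff_open(x) - payoff_closed(x))
    return x

# With these payoffs the tipping point is x* = 0.5: below it the open
# strategy earns less and declines; above it the strategy takes over.
for x0 in (0.3, 0.7):
    print(f"initial share open = {x0:.1f} -> long-run share = {simulate(x0):.2f}")
```

Starting below the tipping point, the open strategy collapses toward zero; starting above it, it spreads to fixation. That is the shape of the problem Karthik describes: the same strategy that wins as a majority loses as a minority, which is why credit mechanisms that raise the payoff to openness while it is still rare matter.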
I think this is a great question to percolate on while we take a seven-minute break, so that we can all do whatever it is we need to do, get our energy back up, and then sprint to the finish line. So we're going to put things on pause for seven minutes, and we'll see you back here at 10:27 Central European Time. See you all soon.

All right, welcome back, everyone. We have about half an hour left, so if you have pressing questions you want to ask our genius panelists, now is the time to write them in the chat. There's been a lot of great discussion, but not too many direct questions, so if you have one, we'll do our best to take a few minutes to bring it up. All four panelists are here, so as long as the questions aren't too embarrassing, you can ask almost anything; we'll screen them first, but weird questions are good too.

Maybe we should pick this up where we left off, with the question of whether meta-science should be spending more energy working out what's actually worth investigating. And considering that this session is about unintended consequences: do you see any intended or unintended consequences when meta-science starts meddling in what researchers should investigate?

So, I was involved in the Reproducibility Project, and there people could more or less just choose something that they would replicate. That's one model. You could instead regulate what gets replicated through all kinds of indices, and in that way you steer a little bit what actually does get replicated. And I do think there's methodological research that can be done to develop that kind of index: research that's very important and very simple to redo, but has never been replicated, should score higher on such an index, for instance. That's the kind of thing I think meta-science can do. But I don't think meta-scientists should steer psychological research directionally, or something like that. And yet it is doing that.

How do you see it doing that?

By, for instance, pushing a lot on studies that are reproducible, as we just discussed: you steer the discipline in the direction of studies that are reproducible. As Anna and Karthik noted, there are a lot of things that aren't really reproducible in the experimental replication sense: field studies, qualitative work, really important explorative work, that are not so easy to replicate. So that's an example, I think, of where there's a risk. I'm not sure this is happening, by the way, but it's a risk that you start steering. And then you'd better do it consciously, instead of finding out 20 years later: oh shit, we steered the field in this direction without meaning to.
Do you see any of the steering that you worry about in economics?

No, not really, but I hear this fear. It's been raised in many economics seminars that when you have pre-analysis plans or registered reports, you steer people toward more boring hypotheses and more boring results. Whether that's the case, I guess, is an empirical question, and I'm not sure exactly how you'd measure how boring the tested hypotheses are, but I'm sure someone can work on that, which would be interesting. It's obviously a worry for lots of people: that pre-analysis plans and registered reports mean more boring hypotheses, more boring papers, that we don't discover the world the way we should. And I think part of the reason people think this is the belief that if you write a pre-analysis plan, you cannot do any exploratory analysis. That's of course not the case: you can have a pre-analysis plan; you just make clear what's confirmatory and what's exploratory. So there's been some miscommunication, I'm guessing.

In practice, do you see this changing people's behavior, actually pushing them toward a bit less exploration because they have a plan, for example?

Yes, I think people are doing fewer exploratory tests than they used to, at least in the papers I read and see, and that's not only bad; many used to do them without reporting them as such, and hopefully people now report this clearly. But the misperception that a plan forbids exploration is maybe blocking some people from embracing pre-analysis plans more, and I think that's a problem. But I'm not sure. Do you think registered reports and pre-analysis plans have led people to test different things? If you're in the world of experiments, where you typically can replicate things, are people testing other types of hypotheses, more high-probability hypotheses? Probably yes. But are they necessarily more boring?

I would actually say that's not necessarily a problem. I love boring hypotheses. I actually love things that are really robust. In psychology we have a lot of experience with really, really cool things, so I have the beer rule: if there's a finding in psychology that I can talk about at a party, and people start buying me beers to keep me talking, then it's probably not true, precisely because it's interesting and sounds so good. So I'm all for more boring hypotheses. What I worry more about is that in my field, in psychology, there is, in my view, very little attention to qualitative work, for instance interviews. You need examples to teach with, and what are the examples? Nearly always experiments. So you create a bit of a mold, because all your prototypical examples, your exemplars as Kuhn would call them, are these things. That's more what I worry about.

Should we address a question from the audience? Can I ask one that somebody in the audience asked, which I think is a good one? There was a comment from Didier Torny, I hope I'm pronouncing that right,
that there's some irony in talking about not seeing what's been done before elsewhere at a meta-science conference, as the field has almost completely ignored 40 years of science and technology studies. So I guess the question I'm curious about, and I don't know the answer because I'm not well read in this kind of research, is this: in the same way that, when the replication crisis emerged, people started realizing, wait, people like Paul Meehl have been talking about this forever, what are we doing that's new? How much of the ground in these sorts of conversations has already been covered? I have no idea. I don't think meta-scientists are the first people ever to think that we need to be clear about the distinction between confirmatory science and exploratory science; my eighth-grade science teacher would be quite upset by that claim.

I think this is a good question, because it gets at what we can learn from what has been done in the past or in other disciplines. And it also gets at one of the goals of this panel, which we'll try to achieve in the next 20 minutes: how can we anticipate these things? What tools do we have, whether drawn from other disciplines or elsewhere, to try to mitigate these consequences, or at least not walk around completely blind, hoping that 20 years down the line we'll figure something out? I would love to hear, Sophia, Anna, Denny: if you had to pick one tool to address this problem, not as the only tool, but as something you think would be a promising way to anticipate some of these things, what would it be?

So one thing that I'd really like to see: I think one of the issues is an over-emphasis on specialization. Especially if we're interested in interdisciplinary science, I'd love to see a mechanism by which, and I'm just thinking out loud here, graduate students in psychology, say, could do internships in related disciplines. If we intentionally created more of these bridges across disciplines, a lot of what's known elsewhere would get connected up, but you'd also come to understand your own practice a little better, which addresses one of Denny's earlier points: you would have a view from the outside. Everybody would have a view from some particular outside. Within our psychology department here in Missouri, students do spend time working in labs other than their advisor's lab, but they're all within their subdiscipline of psychology, which doesn't really address the issue. So that was one thought I had.
Why do you think that would help?

Seeing how other disciplines do science makes you start thinking more carefully about your own. In the same way, one of the key lessons we teach about ethnography in anthropology is that part of the purpose is not to make some exotic culture familiar to you, but to make your own familiar culture exotic to you, so that you start to ask different questions about your own practices, because you've seen how things are done differently.

I think that's a great idea. Good luck actually doing it.

Yeah, let's do it. I would love to intern in your lab, Denny; fly me out and I'll do it.

Sophia, was there anything that resonated with you in that question about what we can do, things you've done that have helped you think about reforms in a different way?

Well, related to things that have come up a couple of times now when it comes to unintended consequences, both Denny talking about the value of exploratory research alongside pre-analysis plans, and also the more recent point about people misunderstanding what's happening here: I think something that's quite important is not just explaining things simply, but also wanting things to actually make sense, to be coherent and nuanced. Just having these very conceptual discussions, much more than we are currently having them. Obviously meta-science is an empirical science, but I think there's a lot more space for conceptual discussion even within that. Denny has a great anecdote about that, I think: defining what a replication is before you actually go into a replication project. This kind of very straightforward stuff would, I think, be helpful. It probably ties into that question about science and technology studies as well, whether we've been ignoring certain things. Someone in the comments made the very good point, though, that there's a difference between "we're the first ones to think of this" and "it's widely known and understood". So basically, to some extent we as meta-scientists should try to see literatures that we don't necessarily see. But I also don't quite understand the anger at meta-science as a field, because maybe we came to need this field to better understand what's happening precisely because there was a disconnect between science and technology studies and actually changing something; and whether scientists will then actually change something, maybe that's already too controversial. That was like five different points in one.

Related to that: the talk you gave at Leiden about needing more conceptual clarification to get at a sort of... I forgot what you called the crisis.

Oh, the crisis of inference thing that I presented at Leiden. That's my little thing that I'm writing right now. Yeah, I think the fact that we focus on replication so much has, as we've seen, led us astray a lot.
I've been trying to make the argument that if we see a lot of this as a crisis of inference, as problems in the way we make inferences, we might be able to better understand what's happening and to direct our focus to the different sub-crises that are going on: not just replication, but also problems of over-generalization, the way we use methods, the theory crisis, all of it. It's much more complex than just replication.

Yes, it is all horribly complex. I'm wondering, Anna, as well, just to give you a chance: as an economist who also knows how hard it is to change policy and things like that, are there tools you think we have? How can we mitigate this stuff, or anticipate it?

I mean, I know very little about policy; that's not the type of economist I am. But I'd like to pick up Tom's comment in the Q&A: should we run trials on reform initiatives before they scale up, or would that suppress the momentum of reforms and reduce opportunities for natural experimentation? I think we should do more experiments, to the extent we can. Some people have priors that with pre-analysis plans you get boring hypotheses, et cetera. Okay: let's experiment more with our journals and with their policies and see what it leads to; many of these concerns are things we can address in various ways, and we can elicit those priors too. So yes, I think we should experiment more, in particular before we implement things all over the place.

If you had to pick one reform from the big ones being rolled out, and you had a prediction market and had to put your money on one...

Now, that I don't know. But talking about prediction markets: maybe we should add prediction markets as a reviewer. Some journals could experiment with this: instead of just having three reviewers read the paper, the fourth reviewer could be a prediction market in which people bet on the replication outcome, with a positive probability that any paper is picked for replication and actually gets replicated. I think we should do more stuff like that. I'm not saying prediction markets are a solution to all problems in science, obviously not, but they can perhaps play a tiny role, and if they can, that would be one way to test it. There are probably many such small things that we should be testing. So, Tom: yes, I think we should do more experiments to test reforms.

Before implementing them, right? I mean, if I understood it correctly, what Tom meant was: should we run trials before reforms are scaled up? And I think that especially for things like preregistrations and registered reports, it probably does make a lot of sense to have them out in the open, not as a small trial, but to really see whether things go wrong at the big scale, because I think we have enough reason to believe that, done properly and understood correctly, there is value in preregistering and in doing registered reports, right?
No, and we can still find out that it's wrong for some reason, but yeah. I mean, it can happen somewhere. We have all these different fields; it's happening a lot in psychology, medicine, et cetera. One could try to take that to other fields and see what happens if you then randomly allocate pre-registration across journals: what happens to results, et cetera. My colleagues could probably go to many other fields right now and test these things, because many other fields are way slower, including economics, I would say, and lots of business administration research, et cetera. So maybe there's some low-hanging fruit that could be interesting. Just going to other fields and saying: can I interest you in some controversies and things a lot of people will disagree with?

So, Noah, are there questions? There was a question from Jeffrey Mogil on changing incentives from novelty to methodological rigor: would this motivate different people to become researchers, and would this be a good thing? You can read the entire question in the Q&A; I'm not going to read it verbatim. So: will changing incentive structures change who becomes a scientist, or who wants to become a scientist?

I don't quite understand the question, or at least not fully. Jeff was saying we want the smartest people possible to become scientists rather than bankers. So is the assumption here that if we change the incentives, it'll be bankers who come in, or... People with different skills might... So the question is: to what degree are people currently in science plastic, able to develop into whatever is needed, versus might rewarding different approaches select for different kinds of people, with skills that aren't currently rewarded? I think that might actually be a good thing. I often worry that the way we fund research now is like assembling a soccer team with eleven strikers: everybody who gets funded has a big vision. So it might balance that a little bit toward more of the checker types, the critical people, and that would, I think, be a good thing. But I'm not sure whether it will actually work, because anything that involves this kind of hypothesis is very hard to check.

But I wanted to say, in response to Anna, that if you can't see what's going on, then everything is hard. So one thing that meta-science could do is keep track of stuff. I don't think we currently have... I don't have a view of how many papers are being pre-registered in which fields. You would want a dashboard of sorts, just to be able to understand what you're talking about. That would, I think, be a hugely important thing. And what I also think would be very good: when we started out in psychology, it was more like activism; any change in this direction was good, because things were so bad. But now in psychology, in Holland, in my country, it's more or less policy. So it's no longer activism. It's now about: okay, we've got a policy, how do we evaluate the negative effects of that? Do we actually do that? And I think that's also a role that things like the Center for Open Science or other organizations should maybe push a little bit more. Because if you're on top and you're actually riding the horse, rather than trying to get on it, then you also have the ability to steer. So that's something that I often miss.
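On Danny's dashboard point: once someone actually collects the data, the bookkeeping itself is simple. A minimal sketch, assuming a toy table of papers; the fields, years, and pre-registration flags here are invented for illustration.

```python
import pandas as pd

# Hypothetical data: one row per paper. In practice this would be
# harvested from registries and journals rather than typed in.
papers = pd.DataFrame({
    "field":         ["psych", "psych", "econ", "econ", "medicine", "medicine"],
    "year":          [2019,    2020,    2019,   2020,   2019,       2020],
    "preregistered": [True,    True,    False,  True,   False,      False],
})

# Share of pre-registered papers per field and year -- the kind of
# overview a meta-science dashboard would track over time.
rates = (
    papers.groupby(["field", "year"])["preregistered"]
          .mean()
          .unstack("year")
)
print(rates)
```

The hard part is not this aggregation but the upstream collection: as the panel notes, nobody currently keeps systematic track of pre-registration rates across fields.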
We're about to be demoted. Yeah. So we can probably do one more question, and then either do a rapid-fire round on reforms, which won't work well because you'd only have about a minute to respond to each one, or just wrap up. Was there another question that we didn't address? I know Elisabeth Bik had one, but I didn't see what the question was. Daniele Fanelli's question. Yeah, what was it? Daniele's question: what do you think of the claim that science mainly has a crisis of theory rather than of methodological rigor? Yeah, I think we should skip that. Sorry, Daniele, not because it isn't a brilliant question; it's a very interesting question.

So I guess, if I were to wrap up: there's so much here, and I wish we had more time, but it is 11 p.m. If you were to think of one thing you'd want the audience to take away, or one question you'd like the audience to think about, and I know I'm putting you on the spot, with regard to reforms, potential other reforms we should consider, or potential unintended consequences of reforms in your field or ones you're worried about: what would be the one thing you'd want the audience to keep in mind, in just 15 or 30 seconds? I think that might be a nice way to wrap up, and hopefully I won't put you on the spot so much that you'll have nothing to say. Karthik, shall we start with you and just hope for something?

Yeah, this is something that I do think about: how much are we overemphasizing the role of incentives, and thereby turbocharging self-interest and crowding out a whole bunch of other ethical and motivational schemas? So, I don't know: building character rather than feeding self-interest through incentives. I don't know exactly what that means, but that's something I keep coming back to. You mentioned that before as well, as a concern you've had. Yeah. Okay, great.

I don't know, I don't have any insightful things to say. Go for it, there were so many earlier. Okay, we'll skip over you for now. I mean, I don't know if I have something insightful, but I'll say something, and probably you'll disagree. I think a lot of the consequences we've talked about are consequences of the way we implement these reforms within the system we find ourselves in. And that is a system that's really quite broken and unfair in a lot of different ways. So reforming it is going to be complex, and it's going to make things worse in lots of different, unintended ways. So my question is: if the ways we implement these reforms in the system we're in lead to all of these consequences, then are we being radical enough in how we make these reforms? That's my question, yeah. Thinking further outside the box, I guess.

Danny, will you take us home? Well, I want to hear from Leo and Noah as well. We can finish after Danny. Yeah, are we being radical enough, that's a nice one. So, I have this thing where I really think much of open science is predicated on a very limited template of scientific research. And I really wish there were a good way to report exploratory research without faking the theory introduction, as if you had the idea beforehand, but just: we had this data set.
We had this model, and we just went ahead and looked: this is interesting, that's interesting, that's interesting. The scientific template we all work in, the paper we write, is so uniform, and it already encodes so much philosophy of science about what research ought to do. So maybe we're not being radical enough in revising the system as a whole. If it's transparency and reproducibility or openness that we value, do we actually go far enough in reconsidering what counts as a publication, for instance? What is the openness about if we basically force everybody into an "I had this theory, and then I made a prediction, and then I checked it" sort of artificial straitjacket? I find very often that I work in a very open way, but when it comes to writing the paper it's really hard, because the paper format doesn't allow for "and then I did that analysis, and then I thought, this is a good idea." The editor will say: hey, that's a hypothesis, it should go in the introduction. That actually happens, right? So maybe, also in that spirit, we shouldn't just rethink research methods but also the whole way we report things and what we put in there. Well, that's my hang-up.

Big food for thought. Thanks, Danny. So, Noah, should we end? In the few minutes left, do you want to say your thing, then I'll say my thing and we'll wrap up? Let me see if I have a thing to say. What I was reminded of is, I think it's in Popper's The Open Society and Its Enemies, that we should not consider the question of who should rule, because that's unanswerable. So I don't want to ask what the best scientific reform is and try to implement that, but rather figure out ways in which we can try many things and, especially, see as clearly and as quickly as possible where they go wrong. Just as we want to be able to check our government and remove it from power when it fails, the same goes for scientific reform: give it all the opportunity it deserves to flourish, but kill it quickly when it fails, and try something else.

I don't think I can top that, Noah, so I think that's great. My theme, and I have a couple of things, but I guess my theme is that we should be more humble. Science is complicated; it's this crazy complex system, and it's hard enough to predict the effects of interventions on little, simple systems, let alone something as complicated as science. So it's fine to have policies that we are confident in, but if we learn how people in other fields have been equally confident, and, you know, just look up the Wikipedia page on unintended consequences and all the ways people have gone wrong in so many different domains, we should be a little humble. And then we should try to figure out ways we can learn from other fields and the tools other fields use: formal theoretical modeling of these things, smaller-scale experiments on interventions, making analogies with existing, well-understood processes and seeing what we can learn from them. So yeah, I think some humility would be good.
Of course, you don't want to be so humble that you never push for anything; we need to find a balance. We have a minute left, so I want to thank all four of our panelists, Sophia Crüwell, Danny Borsboom, Anna Dreber, and Karthik Panchanathan, and Noah, co-host or whatever. This was really great. Thank you so much for your time, and thank you to the audience for your participation. I wish we could have gotten to all of your questions, but we're happy to continue the discussion with you later if you're interested in this topic. And thank you to the conference organizers; we wish you all a great rest of the conference. Thank you very much.