All right, everybody, let's get started. My name is Patrick Forscher. I'm an associate director at the Busara Center for Behavioral Economics, a nonprofit research advisory center headquartered in Nairobi. I wanted to give a little bit of context for the panel and introduce the panelists. This intersection between meta science, or meta research, I'll use those terms interchangeably, and global development is something in development; it is not currently recognized as an intersection. The reason I think this might be a fruitful topic for both fields has to do with my particular background. Prior to joining the Busara Center, I was an academic. I was very briefly, for about two years, a professor of psychology at the University of Arkansas. And I got my graduate training at the start of what's now known as the replication crisis in psychology. Quite a time to get your research training, in an area that seemed like it might be using methods that weren't as solid as we thought. That's what got me really interested in what's now known as meta science or meta research, because I couldn't stop thinking about the question: if some of the methods that we're using aren't as solid as they seem, why is that? How can we make them better? And how can we make the evidence that we're generating more reliable? Fast forward several years: I've long been interested in applied research, in using evidence to do some practical good, and I found myself at the Busara Center using some of the very evidence that I had been generating back when I was a psychology professor, and this topic has not left my mind. While I'm trying to use that evidence to do good in areas like global development and poverty alleviation, isn't it an issue that some of this evidence might not be reliable?
And how can we think about the intersection between some of the things that we've learned from the replication crisis and development, and other spaces where evidence is used? I've also noticed from my time at the Busara Center that there are some issues that are a huge focus in the development community that meta science itself, and the meta science movement, could benefit from. One of the most prominent has to do with research in the global south, and the fact that the global south is excluded from a lot of research conversations. I think that's also true of the meta science movement. So the purpose of the panel is to brainstorm about this intersection between meta science and global development, or more broadly policy efforts, and to think about where both fields could benefit from each other, because I think each field really could benefit. I'm going to structure this panel as a Q&A session. I have some prepared questions that I've discussed with the panelists, but as we go through them, this is going to be a brainstorming session and a relatively unstructured conversation; I want to see where the conversation goes. So I really welcome any questions you post in the Q&A, and I'm hoping this can be a little bit participatory and that that can help guide the conversation. I'm going to ask the panelists to introduce themselves, and then we can get started with the questions. Let's start with Jason. Can you give a short introduction? Yeah, thanks, Patrick, for that great introduction and for organizing this. I'm really excited to hear what everyone thinks. I'm Jason. I research and teach primarily evidence law at the Australian National University.
My background before I went to law school was in psychology, and I had much the same experience as Patrick: around that time I realized a lot of the evidence base for psychology wasn't that strong and the methods weren't that robust. So now that I study evidence that's used by courts and policymakers, I'm trying to think about how to improve that, and how to work with those people so they can evaluate evidence in a way that's more calibrated to the actual strength of the research behind it. Thanks, Jason. Nakubiana, how about you? Thanks, Patrick. Hi, everybody. It's a pleasure to be sitting on this panel and having these conversations. My name is Nakubiana Mongomba. I am at IDinsight, in Zambia in particular, and I work on the Dignity Initiative here at IDinsight. We are exploring how development can be done better in a way that respects the dignity of the people we wish to serve, but also how organizations can uphold the dignity of their participants in their research work, and within their organizations, so their staff and the way that they do things. I think that's it for now, and I'm looking forward to contributing. Thanks so much, Nakubiana. How about you, Tara? Great. Hi, everybody. Thank you so much to Patrick for organizing, and it's an honor to be here with the other panelists. My name is Tara Slough. I'm an assistant professor of politics at New York University. My research spans a number of streams that are relevant to this discussion, hopefully. I do applied research, including some experimental work in Latin America on political institutions, but closer to the meta science community's work, I led a prospectively harmonized six-site experiment, one of the EGAP Metaketas, a couple of years ago.
I'll talk a little more about that in the Q&A, and since then I've been doing some more theoretical work on what external validity is and how we can evaluate it, in ways that were inspired by that project. So I look forward to hearing what all the other panelists say, and I look forward to your questions. Thanks. Yeah, thanks, Tara. And finally, how about you, Joel? Hi, everyone. My name is Joel. I'm really happy to be here today to discuss these issues of meta science and development. I'm a research specialist at the Busara Center. I work with Patrick within our meta research unit, and specifically I lead an agenda on ethical research. What we're trying to do is improve the experiences of our participants within the research ecosystem. That means we're trying to generate evidence and data on participants' perceptions with regard to ethics, and on how researchers can learn from that to improve their projects and their designs. I'm happy to be here, and I'm looking forward to a very fruitful discussion today. Yeah, thanks, Joel. So I hope the attendees are noticing, one, that we come from a wide variety of disciplines and backgrounds; that's a theme that I hope to draw out. Two, that we come from a variety of institutions, not just universities but also nonprofit research centers; that's another theme that I think will come out of this. And three, the specific aspects of research that we're trying to change or improve are also a bit different, and perhaps a bit different from the traditional focus areas of meta science or meta research; that's a last theme that I think might come out of this discussion. Okay, so I'm going to get to the questions. They vary in focus: some are more relevant to meta science as it is, and some are more relevant to development and what's happening in development. And there are some questions related to the global south as well.
But I just want to emphasize again: if you, the attendees, have questions that you want to pose to the panel, please do put them in the Q&A, and I'll try to work those in as we're going through the various topics. So, first question. Meta science has had a lot of success in shining a light on issues, especially related to open science and reproducibility. What would you, the panelists, say are some blind spots of meta science, and how might interfacing with global development help address some of those blind spots? Anybody, if you want to unmute yourself or raise your hand, go ahead. Yeah, Jason. Oh yeah, I thought maybe I would start here just because I very strongly identify as a meta researcher, so I have a bit of skin in the game. People have studied research practices for a long time, this is not new, but there's been a real spike in interest over the past 10 years or so. So there's been a lot of work lately quantitatively studying research practices, and folks like the people who are organizing this, the Center for Open Science, have been really good about working with research communities and trying to see if reforms to the research ecosystem are working. There have also been some efforts to look at the generalizability of research findings, like the Many Labs studies. And now there are many of these, ManyBabies, lots of projects studying samples in different communities, but the majority of those samples have come from communities in the global north. And I think there's limited understanding of, and maybe Tara will speak to this, the external validity or generalizability of a lot of this research, despite this trend towards meta research over the past decade. Great. Yeah, go ahead, Tara. Yeah, so thanks.
I think that in development economics, and in political science to a lesser extent, there have been some more isolated efforts to do meta research, or to try to replicate field experiments across contexts. I think of the multifaceted poverty alleviation project that Banerjee et al did. And that in some sense inspired these Metaketas, which are run through Evidence in Governance and Politics (EGAP). As I said, these are prospectively harmonized sets of experiments on a given intervention. To date they've done them on pre-electoral information and political accountability, community policing, community monitoring of common pool resources, and tax formalization and compliance. So the idea there is: if we implement a common treatment across contexts, and these are all in the global south, do we see similar effects or not, with an eye to formal meta-analysis? That's where I got started in this community, working on two of the Metaketas, community policing and community monitoring of common pool resources. So in that sense, those efforts have been really concentrated in the global south, at least within the econ and political science disciplines, and that's good. But I think that in these efforts we perhaps jumped at the chance to just get more data, to do the study over and over again, without perhaps thinking about what structure was underpinning the exercise. So some of my work since the Metaketas has been to think about how we can formally characterize what external validity is and when we can evaluate it. When are we assuming external validity to estimate a quantity, and when is it something that we could actually test with our methods?
And so I think that effort, external validity as a theoretical concept, is much more abstract and sits outside the context of global development per se, because it should apply in the Many Labs studies, even if they're all in the global north, and in any other study. But it takes inspiration from things that I and others saw while trying to do these projects across contexts. So in that sense there are some links between global development and meta science, but they're links that we need to study and problematize in new ways. Yeah, thanks, Tara. I wonder if you could elaborate on that a little more. Are you aware of researchers in the global south leading some of these efforts to test external validity? Are we taking ideas from, say, researchers at African universities and testing whether they generalize outside of Africa? I wonder if you could say more about that problematizing comment that you made. So I would say that one of the weaknesses of the projects to date is that, with some exceptions, most of the PIs have come from the global north. A lot of the people who had the context-specific knowledge about what the intervention would look like are based in the global south, but those are not necessarily people who have been credited as researchers on the ultimate publications. And I think that's an important thing that we need to be thinking about. So for example, I ran one of the constituent interventions in the community monitoring of common pool resources Metaketa, in the Peruvian Amazon. And there was a lot of conceptualization involved in thinking about what community monitoring looks like there.
How does it need to be structured such that people are interested in doing it, and such that the monitors are not subject to violence? A lot of that came from leaders who came out of these Indigenous federations, for example. They contributed a lot to thinking about what the intervention should look like and how it should be structured, and then we served as a liaison with the broader group that was doing the project. Ultimately, where the role of researcher versus implementer falls is, I think, an important question for a lot of development research, and I think other people on this panel will have good ideas about it. Many of us are motivated to study interventions that may help the communities we work in, or hopefully could help communities like those we work in. But it's certainly not our voices alone that are relevant in thinking about what it is that we're measuring, what treatments we're doing, and how we're doing that. So hopefully we can have some broader discussion about those types of issues in this panel as well. Yeah, that's great, Tara. And to tie this back to your comment, Jason, I wonder if you could describe a little more, for the sake of the attendees, what these Many Labs studies are and what they typically look like. And again, is it commonly the case that these are led by someone from the global south? Could you just unpack your statement about the Many Labs studies? Right, yeah, I was realizing that I gave almost no context for that, so thank you, that's good moderating. And I think there are more of these now than I'm keeping track of.
But the original ones were, I think, not really geared towards testing generalizability or external validity, but just a way to get more participants, because there was a perception, and it's true, that lots of psychology studies are underpowered. So there would be some sort of agreement, or a call for collaborators, to test some practically or theoretically important finding in psychology, usually I think theoretically important, and the call would go out, and you would say, oh, my lab can collect 80 people, and you'd throw that in, and everyone would get some sort of contributorship or authorship. I think only Many Labs 2 expressly set out to see whether one reason some studies failed to replicate was variation in sampling site, and possibly culture, geography, things like that, but it wasn't really well designed to do that, and that was one of the major criticisms of it. And to your other point, these are, again, I'm not keeping up with a lot of them now, but the original six were, I think, all led by folks in North America. So that might have something to do with what Tara was talking about: it brings in the PIs' values and approaches, and what they think is important to study, I suppose. Yeah, thanks, Jason. And building on that point, whether in these crowdsourced replication efforts, or in meta-analyses, or just in formal meta science generally, are there any topics that you think should be focused on but have been ignored? What would those blind-spot topics be? So some of my recent research is trying to think much more concretely, or precisely, about what the theoretical relationship between studies is when we try to accumulate evidence.
That means when we take a study and do some type of direct or conceptual replication, or when we do these big trials in which we're ultimately looking to meta-analyze treatment effects across sites. I think that we have often conflated our statistical assumptions, or our estimators, with the theoretical properties that link the studies. There are certainly statistical issues across this literature, and sometimes pooling studies together can help with those: if we have underpowered studies, like Jason says, pooling across studies can help us improve precision. But that doesn't necessarily address whether studies are measuring common quantities, or whether the quantities across experiments relate to each other. When is the average treatment effect in my experiment in Uganda at all comparable, as a quantity, to an average treatment effect from Brazil? If those quantities are just totally different objects, it probably makes little sense to either compare them formally or to combine them in a meta-analysis. So I think working out that theoretical relationship before we get to our estimates can help us understand when this type of meta-scientific endeavor can teach us something substantive about the policies we're studying, or the problems we're seeking to address, and when it provides less value added. Yeah, thanks, Tara. So, moving a little towards the development side of things now. As we were just discussing, meta science has focused a lot on replicability and on evaluating the quality of evidence. What do the panelists think about the potential for those topics to benefit development efforts? Should development efforts be focusing on replicability and quality of evidence, and what might the benefits be? Maybe I could jump in here.
First, to react to and add on to what Tara and Jason said around, I'm sorry, I lost my train of thought. Sorry, yes, what they were saying about replicating studies in different contexts and what some blind spots might be there. I think one thing which meta science has started doing well is the whole idea of open access: being able to share more openly the methods that have been used and the data that has been generated, which is something that isn't being done as much in the development space. That's definitely something we can learn from meta science and start to do more of. And then Tara mentioned something about co-authorship, and how these studies are often led by PIs, potentially from the global north, working in contexts outside their native space, I guess. One thing that we have thought about quite a bit in the Dignity Initiative at IDinsight is that there is such great value in understanding the context in which you are working. Granted, a lot of the research that goes on does emanate from the global north, and it is people there who have the resources or the ability to conduct these research activities in these contexts. But what we are trying to emphasize is that these activities should be done in a way that makes an effort to understand the local context and the local situation. That might be by bringing on local co-authors to work with the PI on these studies; they can add the value of the local context, because it can be quite a nuanced exercise to understand exactly how things differ in Uganda compared to Brazil.
So while the fundamentals might seem the same and you might think you're comparing apples and apples, you might actually be comparing apples and oranges without realizing it, because you do not fully understand the context of the place you're working in. I know some journals have developed requirements for anyone doing research to have a co-author from the place where they are going to be doing that research. I think this is a move in the right direction, and it adds such value to the results you get from those contexts. And beyond co-authors, it might also be a matter of understanding what the people you're working with actually want and need before you come in and conduct an initiative, or research, or whatever it might be, without fully understanding whether it is useful for the people you're going to be working with. So, understanding that these people should not just be there to provide information, that you're not just getting data out of them, but also thinking about the value that you are adding and whether this is going to be useful for them. And now I have forgotten the question that you just asked; could you repeat it? No worries, and I love all of the threads that you drew out; I want to pick up on a couple of them. Sorry, Jason, did you have something you wanted to say first? Maybe it was what you were going to say, and I don't want to put him on the spot, but I have to say that Joel gave a keynote talk at the last conference of AIMOS, which is one of the sponsors of this event. And I still tell people about this talk to this day, because it was exactly what you were talking about.
It was kind of a shocking thing. In one of the examples you used, Joel, some researchers came from the global north and were running a study, I think maybe in Kenya, and they didn't understand the context at all. I don't want to tell your example, but they gave money to the local community with all these weird stipulations, where the people couldn't tell their friends where they got the money from, and it actually caused more problems than it solved. I'll let you talk about this, Joel, sorry. If you want. Yeah, sure. So, to link it back to the question Patrick asked about what development research could learn from meta research: for me, I think it's about open sharing, sharing of knowledge about whatever goes on during a study. Going back to the example Jason was talking about, we have a lot of cash transfer studies happening in the global south. And in almost all cases we see problems associated with cash transfers, because while it might seem that giving people money would make them really happy, if you look at the global south and how people exist within society, people do not exist as individuals; they live within a community. If you are seen to get money, from wherever you get it, without being able to properly explain what's going on, it's usually problematic. People can start associating bad things that happen in the community with you getting money from strangers who came to do a study in the area. These things happen more often than you can imagine, but they still keep happening. And my theory is that they do happen but are rarely reported: when you're publishing findings and talking about the study, all these other things that happened during the study are rarely talked about.
The trend has been to focus just on the positive aspects of a study, so you don't have a lot of information about the conditions under which an intervention was a success or a failure. And because of the lack of openness about these issues, people keep making the same mistakes over and over again and creating more problems for development. So development can borrow a lot from meta research by being a little more open about whatever happens within an intervention, so that other people can know about and avoid some of the issues that exist when doing research. Thanks so much, Joel. And I wanted to follow up with a few more of these examples of a lack of contextualization, because I think it can be hard to get your mind around what's happening unless you have a couple more examples. So, Nakubiana, you brought up this topic in the first place. Do you have an example of a project that was hurt by a lack of contextualization? It can be a story that you've heard. Would you mind fleshing that out a little bit more? Yeah, sure. So this is a story where a piece of research, or an intervention, was carried out in, I believe it was Zambia. Some people had found that when you provide a community with, I think it was something like tomato seeds, and they plant the tomatoes and grow them and sell them, it can improve the livelihoods of the people involved in the intervention.
So they came in and provided these seeds to the people for free: plant these, and once they bear fruit you will be able to sell them at market, and you'll have more cash to spend on whatever you need. The people were a bit like, okay, and of course they took the seeds and planted them. Now, this community was based near a river, and for whatever reason they were not planting any crops; they used to have crops brought in all the time. So the thinking behind the intervention was that if we provide them with seeds, this should help them start growing things. What the researchers soon realized is that the reason those people were not farmers, as it were, was that hippos would come in from the river and trample the crops, or eat the crops, or whatever it is, and it was just not a productive venture to try to farm, because if you try to fight a hippo, you will lose. It might sound like a bit of a silly story, but it is an example of someone literally just swooping in and thinking: oh, here's a community which doesn't currently farm; maybe their issue is that they don't have access to seed; we will provide the seed and that will help them start up, without understanding that the reason they do not farm is that there are other things outside their control. So that's a situation in which even a little bit of asking, before you plan the full study and spend all this money and identify the site and go in and start your intervention, just asking the people why they don't do what you think they need to do, and understanding what might be more useful, would have helped. That's just a little example I can provide. Thanks, Nakubiana. And the lack of contextualization can also show up in measurement. Joel, do you have an example of a measure, or even a manipulation, like in a lab experiment, that didn't work or that was improved by contextualization?
I don't have a specific example in mind. Maybe the Trier Social Stress Test, from that stress project that happened at Busara? Do you know that one? So, for some of the measures that have been borrowed from the global north and applied in the global south, what usually happens is that a measure is taken from a study that was done in the global north and validated in the global north, and the only change made in the global south is translating it into the local language. The problem is that different communities interpret concepts totally differently. What could be interpreted as fair in America could be totally different from what would be interpreted as fair in a place like, I don't know, where I am right now. So when you take a scale that measures, say, stress, like what Patrick mentioned, in an African context, for something to qualify as stress it must be very extreme, where you're really unable to function. Because some of these scales are validated in the global north, they could give you a misleading measurement when you're trying to quantify what stress looks like within a community in the global south, because of the different context and how people interpret some of these concepts in the social sense. So we have that problem: you might think that you should see something, but you don't see it, and not because you've done anything wrong; it's just that you haven't taken the time to understand how people define the concept you're trying to measure. If you just borrow a measure from the global north and translate it for whichever community in the global south you're working in, it doesn't always work well. So we have cases of researchers not finding significant results; they expected to see something but they didn't, and mostly it's because of this different understanding of concepts and how they're defined in different societies. That's a great answer. For the specific case that I was thinking of, maybe this is
one that I just heard about from the Busara lab, which is attached to where we both work. The story goes that this was a task designed to stress participants out; this is basically a psychology research study, and the way you stress the participants out is that you have them give a speech in front of a panel of judges dressed in white lab coats. This has been used hundreds of times, mostly in the United States and Europe, and it's really well validated. But when we tried to use it in Kenya, none of the Kenyan participants were stressed out, because, if you haven't been to Kenya, Kenyans fucking love speeches; people are just giving speeches all the time, so it's not a stressful thing. And the second problem was that the most common person or role people had seen wearing a white lab coat was a butcher; butchers wear these white coats. So one of the participants in the Busara lab said, why am I giving a job talk, or a speech, to a bunch of butchers? What is this? I think this is a nice example of how those assumptions can creep in, in this case in kind of a funny way, but they creep in in more serious ways as well. I wanted to transition to a question that we got in the Q&A. There's a strand of open science that is focused on trying to increase equity, and some examples of this haven't necessarily worked all that well. There are things like open access fees, and efforts to share research data, which maybe add to the total cost of doing research because they require extra effort. And so the question is: a lot of these movements have started in high income countries, so is there a role for meta research, especially maybe by learning from development, in modifying these kinds of open science movements that try to increase equity, and what might that role be? Anybody have a thought on that? I can say something if you want. Yeah, go for it. I think it's a really good point, because at least in some ways that's got to be right. I suppose one of the things we're learning from meta research is that we need
larger sample sizes, and open access fees are expensive in a lot of situations. So it's true that if we're evaluating research based on sample size, it puts some people at a bit of a disadvantage. I suppose one thing about the actual practice of meta research, if we're evaluating meta research itself, is that the grants I've put in for meta research often ask for less funding, because it's a little less expensive; it's often just desk research, where research assistants code papers and go through publications, and that's less expensive than collecting data in the field. But then, now I'm thinking of the broader challenges that we've been talking about throughout, so I'm kind of going in circles here; those are just some thoughts I had. This is a little bit above my pay grade, and I haven't been involved in these discussions, but I guess one thing that strikes me is these seemingly well intentioned reforms having perverse consequences for researchers from the global south. I wonder to what extent we need to increase the representation of researchers in these positions, whether researchers in the global south or researchers with access to fewer resources even in the global north; there's a lot of inequality in these discussions when we make these reforms. And so I wonder to what extent, when we say open access is good, that move is based on the experience and opinions of people in a relatively high resource position. Increasing equity, or the number of voices in those discussions, could help to the extent that these are in-principle good practices that may also have benefits for researchers with fewer resources, in the global south or otherwise: having access to journal articles that don't cost $30 each, or being able to download data. I wonder whether graduated fees, based on where a researcher is and what resources they have, would be a better way to avoid throwing
the baby out with the bathwater, so to speak. But of course these are above my pay grade at this point. Yeah, I love that idea of graduated fees. This next question carries forward these themes and maybe gives you a chance to say more. Meta-science to date has been pretty concentrated in the global north, and I think the previous question reflects that and picks on a specific aspect of it. Do you think the meta-science movement, or the field, can learn any lessons from the development community about the dangers of being so concentrated in the global north? And are there any strategies the meta-science movement could be thinking about to mitigate some of those dangers? I'll jump in here to start. I think the lessons that could potentially be learned by meta-science from the development community are, firstly, something we've already discussed quite a bit: the aspect of context, and how it can change your findings depending on where you're based or where you're conducting your research. I think development has internalized a little more that context really matters when it comes to validating your results and saying that they are comparable. We are very hesitant to say that because we conducted, say, a cash transfer program in one situation and it was wildly successful, we would necessarily get the same results even if we moved to another region in the same country, or to a neighboring country. So I think that is the first point that can be learned. And following from that: if most of the research is concentrated in the global north, you cannot generalize the findings as a global representation. So while something might hold true more broadly in the global north, you cannot assume that those findings can be
transplanted to other contexts, particularly situations in the global south. Yeah, I'll pause there for now and let others jump in. Picking up on what Nakubiana said, I think she's absolutely right about context being really important, both to understanding why we might see effects or not, why something might be effective, and to how we interpret the effects that we see. One thing that's interesting, and perhaps something the development community needs to continue to think about, but that is also relevant to meta-science research more broadly, is that a lot of the interventions people study are some type of policy intervention: a conditional cash transfer, some type of community development or community monitoring program, whatever it is. A lot of researchers, or at least the partners funding these things, have a goal of using this to inform policy more broadly, to get it into the hands of policymakers, so that it's not just 40 treatment communities but something the government can scale up. And I think we often don't think enough about those issues of context: how communities might be different from each other, how individuals might be different from each other, but also how the scale or identity of the implementer might matter for the effects we see, which has implications for scale-up. So if my little NGO, and I don't actually have an NGO, runs this program, and we have close relationships with the communities and we do all this, but then we want to go to the national government and tell them to do it, would we expect similar effects? I think that's beginning to be problematized in development, and we have some good examples of it. That type of thinking about how evidence is used can be useful for the meta-science community, and is ultimately a meta-
scientific question more broadly, even for efforts conducted in the global north on much more specific populations. So that's one place where I think both fields have a ways to go, but could be in dialogue. Yeah, I love that you brought up this idea of how evidence is used, so I wonder if we could unpack that a little more. What are the intersections between meta-science and development on this question of evidence use, and what sorts of things do we know so far about it? I think this is a field where we're still learning. For example, one way people might propose using evidence is, on one hand, using a small intervention to do advocacy for broader reform. I did a project in Haiti, six or seven years ago, where we were studying free legal aid for people who were illegally detained. This was a context where there was no public defender at the time, and the goal was for that evidence to be used in advocacy to the Haitian government to actually create a public defender's office. And it was successful insofar as legislation went, though there have been many other issues since, and at the moment. So that's one way that organizations, like aid organizations, often use evidence. We could also think about targeting. It may be the case that an intervention is effective only in some populations, only for women, say, or only in communities below some income level. If that is a feature of the intervention that would persist in a different context, maybe we want to allocate the intervention only to women, or only to communities below a certain income level. So there are different ways we can think about using evidence from a single
study to change or address perceived social problems. A lot of those uses, though, are conditioned on assumptions about how the context of the intervention would be similar, to Nakubiana's point. If, when my NGO does it, it has good effects for women, but when the government does it there's no differential effect for women and men, then telling them to target it that way is a harder question. So we have some ideas about using evidence in those ways, ideas that often stem from discussions in development that are closely integrated with policy stakeholders. But I think there are a lot of questions we still can't answer about governments' capacity to actually use or understand data. Even in the United States we have the Evidence Act, which is trying to get federal agencies to run a lot more impact evaluations of their projects, and just the capacity building to do that is a huge problem here, which is presumably the case elsewhere too. So there are a lot of open issues that we haven't studied very much, but that are relevant even if we just want to use a small intervention from psychology to inform policy, say, to increase response rates or some behavior on the margins. That could be learned from this type of policy collaboration.
Yeah, one thing I wanted to jump in on, the last thing that Tara mentioned, is something I've noticed in a lot of contexts, but I'm thinking most right now of Australia, where I've been researching most recently. The policymakers here, for instance one body I've looked at, the Australian Law Reform Commission, which makes recommendations to the government about how laws might change, have so much discretion about what research they pick, which studies they focus on, and what they rely on. For any given question there are sometimes dozens of studies, and they can pick any one of them to be the one that guides their decision. The same thing happens in courts, where expert witnesses can cherry-pick evidence as much as they want, really, unless there's another expert there who can call them on it, and that doesn't happen very often. So there's a real challenge of how to make sure these decision makers use the research that's most reliable, or most applicable to the context. I haven't been able to think of a good way to do that within the mechanisms we have available here and that my field uses, so I'd be very curious whether there are other fields, represented on this call or elsewhere, that have found better, fairer ways of doing it, because it's a real challenge. And Jason, I wonder if you could talk a little bit about the quality of evidence and how that factors into policy and law. Do you have any opinions on that? Yeah, as I kind of hand-waved at: I look at research in forensic science, which is a field that produces evidence to solve legal problems, and also criminology and psychology and law, and I think it's of very uneven quality. A lot of the indicators I consider are indicators of quality that not everyone does: sample size, and
whether the study was registered, and whether there was any outcome switching. Those aren't really obvious things for a lot of people, so it's sometimes challenging to encourage these bodies to take a look at them. But then there are much more serious issues that folks here deal with, like there just being no applicable research on a topic. So I guess it runs from too much bad research to just not enough research. Again, I'd be curious how people here deal with this problem. Go ahead, Nakubiana. Yeah, sure. In a past life I worked at a think tank that did policy analysis and research directly with the Zambian government, and I think evidence-based policymaking has in some ways become a bit of a buzz phrase. A lot of the time we're doing research in order to try to influence policy because, as Tara alluded to, that is the way, or the perceived way, that we can have greater impact: going from our little experiment or piece of research and saying that perhaps we can apply it to the general population. In terms of how we can actually use evidence to influence policymaking or rulemaking, I think that is a tenuous link, because, as Jason mentioned, policymakers will often cherry-pick what is relevant to them. What we as researchers need to understand is that they also have other factors to consider. The evidence might be solid and concrete, but there is political economy, there are politics that need to be understood, there is the question of whether this will affect my popularity with the people. As researchers, as people focused on evidence, we often think, well, this is cut and dried, why are you dragging your feet? But I have had experiences where you have generated the evidence, you have
discussed it and presented it, and then that piece of research, that booklet or whatever it is, just goes and sits on a shelf and collects dust, because at that time the political will to implement it is not there. What has sometimes proved more useful is building strong relationships with the policymakers, relationships of trust, to slowly convince them of the evidence but also to get their more personal buy-in. If they have a greater conviction that this evidence actually needs to be acted upon, then it becomes something they will push for despite the politics and the other factors that need to be considered. So unfortunately, I feel it's not a straight line between evidence being produced and policy being influenced or changed. But one place to start is building those relationships of trust, because, as Jason mentioned, there is a lot of cherry-picking. And if we work with central institutions, and obviously this might differ from country to country, more often than not the government will have a think tank of some sort that it trusts, so working with those types of institutions to help build those relationships is something that could be useful. Go ahead, Joel. Yeah, I agree with Nakubiana about building relationships of trust with the government, because from my experience policymakers don't just cherry-pick, they also cherry-pick where to cherry-pick. Most of the evidence that has turned into any policy has been evidence generated with these policymakers as stakeholders in the generation process itself. Anything they weren't involved in just becomes some other published work. For any of the findings to be translated into policy, we
need to find a way to include these actors in the process. That is from my experience: in all the research projects I've worked on that were translated into policy, people from these policy think tanks were involved in one way or another. That's the only way I've seen evidence turn into policy. Anything they are not involved in, even beyond interpreting or using the evidence, even generating the evidence itself, becomes really difficult to do well and peacefully. Because nothing happens outside the political, social, and cultural dimensions, and everything should be cognizant of that fact. Once you ignore those dimensions, things become difficult, whether it's the use of evidence or the collection of the evidence itself; all these social and cultural issues should be taken into account. Yeah, that's great. And maybe a follow-up question for anybody on the panel: do you think there's a role for research on research, for meta-research, in helping us understand the factors that create a strong evidence-to-policy pipeline, or how to strengthen that pipeline? Do you think that could be fruitful, or do you have other ideas for how to understand this process better? I think so. Right now we don't have a framework for turning evidence into policy, so I think meta-research could have some role to play. We know there are things that need to be in place for evidence to be turned into policy, but right now we don't have a systematic way of doing it. If you have to collaborate with government actors or political kingpins, how do we do it in a way that is useful for both researchers and policymakers, making sure they don't influence how you interpret your findings while they still find your findings useful, having that symbiotic
relationship. So I think meta-research could generate some evidence around how to create a systematic framework for translating research findings and evidence into actual policy that can be used. Transitioning to a slightly different topic: because this panel is about the intersection between meta-science and global development, I think that intersection can't be fully understood without bringing in issues around the global south, which has already been a topic of discussion for the panel. So I wonder if someone could do a little stage setting and describe some of the existing efforts within the global south to improve research culture. That's a big focus in meta-science, but as we've discussed, meta-science is focused primarily in the global north. Are there existing efforts that are more focused on global south issues, and what are they? And maybe, to lead you a little, you could talk about ethics issues, or dignity; any of that would be within scope. I don't want to monopolize this topic as someone working in the global north, but I think one thing we need to address better, and that has already been alluded to, is the various issues of power, and these come in different forms. The first, in the context of the research collaborations we talked about earlier: what type of credit or compensation is appropriate when we collaborate with researchers, activists, or implementing partners working in the global south? What is the right way to structure those relationships to ensure those people have maximal agency and are also compensated for the invaluable knowledge they provide? The second piece of the power issue is questions about ethics. There's an emerging literature, at least in political science, trying to problematize these things, thinking about how we should weigh harms and benefits for the communities and individuals we study. So I think that's an emerging topic that we need to talk
more about. And then a third thing: ideally, when we do research in the global south, it is not just a fully extractive activity, and we have plenty of examples, I'm sure better ones than I can provide, of that type of extraction happening, where someone just takes the data and runs. We can also think about ways to do capacity building, whether that's workshops, courses, or some type of more formal mentorship for researchers and people working in the field in the global south. My experience with this is limited; for example, I worked with Evidence in Governance and Politics (EGAP) on its Learning Days, which are week-long workshops on experimental methods for PIs in Latin America and Sub-Saharan Africa, which I think are valuable, with some follow-on grants for independent research. So that's one model, but when I think about this question, as someone from the global north with probably many blind spots, those sorts of approaches to this power imbalance are the things I think are worth talking about. Go ahead. Yeah, sure, I'll springboard off some of the things Tara mentioned. When it comes to understanding the people we're working with, the people we're trying to serve, and making sure that research is not extractive, that is something we at the Dignity Initiative at IDinsight are particularly interested in. We want to find a way to do development work, development research, and development interventions in a way that respects the people we are trying to serve: not treating them purely as subjects or numbers or quantities from which data can be extracted before we proceed to find our claim to fame, but treating them as humans whose full humanity and agency we see, and finding ways to acknowledge and uphold their dignity. One thing that comes to mind is the issue of consent. I think in the global north it's something that people
are a lot more cognizant of: when a consent form says you can say no to this interview, or say no to providing this data, people are much more aware of their right to do that. Whereas in the global south, or maybe I'll speak more specifically for Africa, or even Zambia, there's a sense in which people do not fully understand or appreciate the agency they have to guard their privacy, to guard their data, and to say no if they do feel uncomfortable. So a lot of times we find situations where a researcher comes into a community and there is already a power imbalance, because if it is an underprivileged community, a remote community, the locals themselves might feel that someone coming in from abroad or from the city, or wherever it might be, must be catered to, provided for, and treated hospitably, because it is in our culture to be hospitable. So I think that is something we need to be more cognizant of, in the sense that we need to check ourselves as we do research. We should not approach these types of interventions in a way that says, because they are welcoming us, or because we feel this intervention is going to be super impactful, or whatever it may be, we are justified in what we are doing because we think it is for the greater good. Instead we should see the people we are working with as full humans who should be given the agency to decide whether they participate or not, and whether this will be a valuable exercise or not. I will stop here, because I could talk for the rest of the day, and let other people jump in. I can jump in here with what we are currently doing, which is similar to what you are doing with the community project. The issue of ethics and how to conduct research has been a really hot topic in the last couple of years in the development sector, so
what has been happening is that, for example, right now we are a panel of researchers talking about these issues and about ways we can make these processes better, but one thing that has consistently been missing from this conversation is the participants themselves and how they define whatever it is we want to improve. For example, if we want to improve fairness, how do they define fairness? What is fair to them? At the moment, I think what is happening is that, for researchers and even for ethical review boards, what guides the interpretation of these concepts is a framework developed in the 1970s, the Belmont principles: what it means to be fair, what it means to be respectful, what it means to be just. And it was developed for a different context, clinical trials, not the social sciences, which are totally different, and since then we have not had a framework that addresses the current issues and the context of social science. What we are trying to do at Busara is bring participants into these conversations, just to understand: when you say your research is fair, does it tick the boxes that participants care about when it comes to fairness? What does fairness mean to them? If you say your research is just, do the results reflect what participants perceive as just? I think one of the issues facing these new initiatives to ensure that research processes are respectful and ethical is that we are not involving the participants enough, not trying to understand from their own perspective what they expect from research and what all these things mean to them. So adding participant voices to these conversations around ethics and improving research is really critical. As we've discussed extensively here, context is very important, so you could think you are doing something really good, that your research is really ethical, respectful, and fair, but from the participants'
perspective, they don't understand what you are doing, it's not valuable to them in any way, and whatever you are doing does not resonate with what they expect from research. So researchers and all the other stakeholders should include the voices of participants more in debates about improving research in general. It should be done more, so that all these stakeholders have a shared meaning of concepts and are on the same page at the end of the day. Otherwise we will keep having the same problem, the same cycle where researchers and experts sit down, come up with solutions, and then, when they try to implement them, the solutions don't resonate. That's fantastic, Joel and Nakubiana. To unpack this issue even more: what are those alternative procedures? What would it look like to involve the participant voice more? And, since this is a meta-science or meta-research conference, how did you figure out those procedures? What was the research process, or the evidence-gathering process, for developing them? I can jump in here. I've been working with participants for the last eight to ten years, implementing studies and collecting data, and the same issues crop up every time we go back to the field and conduct research, even though all of these studies have been vetted and approved by ethical review boards and everything checks out on paper. If you check again what the researchers did versus what they promised they would do, everything checks out. But at the end of the day, participants are not satisfied with how research is conducted. There are always researchers doing this extractive sort of activity, where you just come, take data, disappear, then come back later with more questions and a promise that findings will turn into evidence, and that never comes to life. So watching that process happen again and again, research protocols being approved, checking the ethics
boxes, everything seeming fine on paper, and still having the same issues crop up over and over again, made us realize that perhaps the approach is wrong. We need to involve the end recipients of this process more, and try to understand their perceptions and the ways in which they think researchers can do better. We believe they are experts in their own lives; they know what is fair and what is just. If you have tried the existing frameworks for defining what is ethical and respectful and the issues still keep cropping up, probably you should turn your approach around and start by talking to the participants first, trying to understand what they want, how they perceive this ethical process, and what an ethical process would look like to them. By doing that, you not only get buy-in from participants and more engagement in your research, you could also help the evidence generation process. Under the current system, researchers get data, go interpret it, publish findings, and never come back to the participants to share the findings and see whether they misinterpreted anything in the process. That is something that could benefit from this approach, where, for every decision that a researcher, or anyone in the research ecosystem, makes that eventually affects the participants, the solutions and the information on how to improve these processes should come from the participants. Otherwise we will keep having the same issues, as I've seen over the last ten years of doing ethically approved research and still hearing complaints from participants that they don't see the value of participating, that it's not clear to them why they do it, because it never resonates with them. That's fantastic. So, shifting gears a little: I mentioned at the beginning of the panel that we're from different kinds of institutions. Joel, Nakubiana, and I are all working at nonprofits,
so more generally, for the benefit of the viewers here: what do you see as the role of non-academic researchers, people at nonprofit institutions like us, in global south research, especially in improving research cultures? And how can non-academic researchers be better supported in their work? Maybe I'll start. I think one difference between academic and non-academic researchers, particularly in the development space, is that non-academic researchers are perhaps more often based in the context where they're working, and we've already talked quite a bit about how context is important. Also, because we are often in those settings, we have a greater opportunity to look beyond just the results of the research we're doing, or just making sure that the research is rigorous and producing replicable results, or whatever it might be, to acknowledging the participants we are working with, as Joel has just talked about quite a bit. Because we are in these contexts, sometimes we have the potential to better acknowledge those participants, and to move beyond the research and the rigor and the evidence to finding ways to action the research we have been conducting. In some ways we can also play the advocacy role we spoke about earlier when it comes to policymaking. We straddle the line between being the producers of the evidence and being the people who can talk to the policymakers more directly, in a bit of a dumbed-down way: okay, this is the p-value, but to you, this is what it actually means. We can use more layman's language to make it a lot more accessible to the decision makers or the policymakers. Yeah, those are some initial thoughts about how non-academic researchers might help the situation. That's great. So I have one final question before I try to
give a wrap on the session; I think this is a nice one to end on. In what ways can or should the tools and methodologies of meta-science be adapted to better suit the needs of researchers, either in nonprofit settings like ours or more generally, who are doing work in the global south? I can go. Yeah, what has been happening in the development space, concentrating first on the global north and only later coming to the global south, has its own costs, and I think for meta-science methodologies to be well adapted to the global south, there need to be more efforts to contextualize them and test them there. More effort should be put into doing whatever is happening in the global north in the global south simultaneously, testing these methodologies across all regions. You can see what is happening in the development space right now: things are developed in the global north, and then it takes more effort to try to implement them in the global south. So evidence generation from meta-science research could happen simultaneously across all regions, to make sure the methodologies are applicable in many areas and spaces. Thanks so much, Joel. So, to try to bring this to a wrap: we've covered a lot of ground. In terms of where meta-science is, it has obviously focused a lot on open science and reproducibility, and those topics are relevant to development research, which could potentially benefit from those kinds of focuses. Nakubiana, for example, mentioned how access to articles alone can be a great hindrance to knowing what the evidence is in order to build on it, and I think things like open data and pre-registration, although we didn't discuss them in detail on this panel, could be applicable as well. However, one of the big blind spots of meta-science is the issue of generalizability and everything that lies behind it, and that's a topic that has been a focus,
although maybe with more to go, in development research. Tara mentioned the Metaketa initiative; that's one example. In general, there seems to be a greater appreciation both for how findings might not generalize and for the role of context and the many parts it plays in the evidence generation process; it can even affect whether the measures you're using are meaningful at all. We also covered the evidence-to-policy pipeline and the many factors involved in whether evidence is used at all, including trust. I think you all had a lot of insightful comments about the role of trust in relationships with policymakers in particular, and perhaps meta-science could play a role in understanding that pipeline a little better. And finally, we talked a lot about issues around global north and global south power differences, and some of the ethical problems that can arise from research that is not thoughtful. I think all of these could be fruitful areas of intersection, places where meta-science and development could benefit from each other. So I just want to thank the panelists so much for your patience in tolerating my little pet interest, and I hope it can stop being a pet interest and become something more people find interesting too, something where there really is this fruitful intersection between the two areas. Thanks so much. And I think, Jason, you had one small pitch to give? Yes: I am a member of the Association for Interdisciplinary Meta-Research and Open Science, formerly on the board and president of it, and we have our own conference coming up in November in Brisbane, covering similar material to this conference, so keep an eye out for that. Thanks, Jason. And actually, Joel and I referenced the keynotes we gave at that conference, which are online and on similar topics, so do look those up. So thanks, everyone, thanks for your patience and your attention over this hour and a
half. That's a long period of time. I hope you found this interesting, and thanks for coming.