Why don't I start with Farida Benna, who's joined us from the International Rescue Committee. She's been Director of Humanitarian Policy and Advocacy since 2017, and she brings 20 years of experience in programming and policy, with a particular focus on accountability and aid effectiveness. Prior to working at the IRC, she worked with various other organizations and NGOs: the OECD, CARE, the European Commission, Oxfam, etc. Right here next to me is Maria Luz Biazafini-Kamiya, who's with GMDAC, our Global Migration Data Analysis Centre in Berlin. She's one of the impact evaluation officers we have in IOM, and we don't have that many, so it's a pleasure to have her with us. Maria Luz joined IOM just over a year ago, and she's working primarily on experimental and quasi-experimental impact evaluation methods. Prior to joining IOM she worked in program management and as a research assistant, focusing primarily on data collection and analysis in various countries across Asia and Africa. Then we have Kilian Norden, who's joining us from the Abdul Latif Jameel Poverty Action Lab, J-PAL, as associate director in the European office in Paris. He helps manage the J-PAL network's rigorous impact evaluations on violence and conflict reduction. Prior to joining J-PAL he worked with the International Crisis Group as a conflict policy analyst. And last but not least we have Kristin McCullen, who's joining us from WFP as an evaluation analyst. She coordinates a portfolio of impact evaluations on cash-based transfers as well as gender. Prior to joining WFP, Kristin also worked with 3ie, the International Initiative for Impact Evaluation, as well as E-5 and Oxfam Great Britain. So once again, welcome to the four of you; it's nice to have you all with us today. Before I start asking questions of the panelists, I have two questions for the audience, and I hope people can be candid.
We knew that we would have a mixed group of people in this audience, and it would be helpful for us, in terms of how we engage in this discussion, to get a quick poll. If you can just raise your hand (I won't call on you, I promise). First: have you already envisioned how, within the context of your own work, you could use impact evaluation? If you can just... great, okay, pretty good. And for those on the webinar, maybe you can type "1" if that applies to you. And the last question, and again no judgment, but it's important for us to know: are you still a bit unsure as to whether impact evaluation is realistic within the context of your work? Okay, very few people, that's great, I think we can all go home then. Perfect. So that gives the panel a gauge of the interest and knowledge in the room, and of how people are feeling about impact evaluation, and hopefully it gives you a sense as well of where our audience is today.

Because we do want to try to get through our questions, what I'd like to do is start with our first question, which looks at the benefits of impact evaluation and how the approach has contributed to advances in your work. And, because of time, if you could talk a little bit about what your work is while answering that question. Maybe we can start off with this.

Thank you. Thank you for inviting me, and good morning to everybody. Good afternoon. Yes, I think it's important, before I reply to your question, to explain a little bit what IRC, the International Rescue Committee, does. Our mission is basically to help people affected by crisis to do three things: first of all, most important, to survive; to recover; and hopefully to regain control over their lives. We like to call the people we serve our clients. It's a provocative term that we use because we reject the notion of beneficiaries.
These people do not necessarily just passively benefit from our interventions. We want our clients to be active in choosing how to respond to our interventions and in voicing their concerns. So client voice and client choice are paramount for us. We operate across the full arc of a crisis, from emergency response to recovery to development. And for us, impact evaluations are key to understanding the effectiveness of what we do. Impact evaluation responds to the primary questions: are we doing a good job? Is this actually working? And, more importantly, why? We find that a lot of the monitoring activities that we carry out actually focus on the what: what are we doing? Where are we at? What's happening? But rarely do we, and I can say this also of the humanitarian community, take the time to actually reflect on the why. Why are we doing what we do? Why do we continue doing what we do? Invariably, in our opinion, this why question is ignored.

IRC has, we believe, made an important contribution to impact evaluations: out of what we have counted as 171 high-quality impact evaluations carried out since 2006, we have conducted 39, and we have 18 more in the pipeline. So among the humanitarian implementing agencies, IRC has made the largest contribution to impact evaluations, and we're happy about that. But we also think, to respond to some critics, that impact evaluations do not really answer all of the why question. You need a specific context, in a way an ideal context, to carry out rigorous impact evaluations. And sometimes there are factors that you simply cannot measure. I don't know who said this, but: not everything that counts can be counted, and not everything that is counted actually counts. And it is so true. We work in some of the most complex contexts around the world, crisis-affected countries, with people that are in protracted crisis for 17 years on average.
So there are bound to be factors that really escape rigorous impact evaluations. And I think one of the greatest learnings we've made is to recognize that there's only so much we can measure, and there's so much we simply don't know. So in a way the challenge now is: how do we try to measure, or at least understand, what we cannot measure through impact evaluations alone? Thank you.

Maybe jumping on that, and going to Kristin next: if you could also talk about the benefits of impact evaluation, but also, picking up what Farida mentioned, about where it may not actually be the best way of measuring and learning. And what has the experience been at WFP when developing impact evaluations?

So WFP has a lot of parallels with IRC, in that we are also a humanitarian organization working in many tricky contexts like that. Our interventions are almost always household-based, and that actually lends itself really well to impact evaluation. I think where we find it useful is in testing assumptions. This rejection of the term "beneficiary" highlights it really well: that itself is an assumption. Are they actually benefiting? So we use impact evaluation to test that assumption, but then also to contribute to these questions about why. Adding to that, when we speak about impact evaluation we always say: and for whom? When you have the sample size, or when you have the time to design this kind of study, it's also really interesting to be able to tease out which ages, or which genders, or who exactly is benefiting and who needs extra support. And I think that's an important question in all contexts, but particularly in the context of humanitarian interventions. My job is entirely impact evaluation, so I'm probably not the best placed to speak about when it's not useful.
But there are conversations that we've had with country offices when they have important questions about institutional capacity: when they're talking about resilience building and trying to bridge the gap after a disaster, how do we make sure that we can prevent it from being so grave in the future? That kind of institutional-level capacity building is something that makes a lot of sense for other types of evaluations. The same goes for getting a sense of the entirety of a country's activities, where an impact evaluation would focus on a particular intervention and the mechanics of that intervention. If you want to learn broader lessons about how a country program does what it does, then that requires different types of evaluation. So, as was mentioned this morning, this is something that complements other approaches rather than being at odds with them, I think.

Okay, thank you. Maybe we can jump over to you, Kilian. Could you speak a bit about what J-PAL does and the use of all the impact evaluations you've researched? And how you're learning from that, or where you find that you might have to get that information from different sources of evaluation or research?

Sure. So, as you mentioned, J-PAL is the Abdul Latif Jameel Poverty Action Lab. It's a network of about 180 affiliated professors at different universities around the world, and a staff that works with those professors, primarily on randomized controlled trials. So like you, Kristin, my focus is on rigorous impact evaluation, but I can talk about how other methods complement this research. I think some of the most exciting things are the ways in which it's grown: the network was founded in 2003, and we have now either completed or are working on over a thousand randomized evaluations across ten different sectors.
Some of the sectors that were maybe best known in the beginning were agriculture, looking at the adoption of new technologies, and health, with questions like preventative health products and pricing: is it important to charge a price for something, or is it better to hand it out for free, if you're looking at uptake, and uptake is what matters, or at reducing the incidence of disease? And then education, I guess, would be another classic field, looking at the best ways of ensuring that students don't just end up in school but actually learn things. And then we've expanded into new areas. So, as you mentioned, one of the areas that I've been involved in has been experimental research on crime and conflict: issues like policing, but also dispute resolution, trying to find ways to reduce conflict in different ways. We've now set up a new sector on firms, which focuses on the role that firms play, and even management practices within firms, and how that affects individuals' welfare, but also their take-home earnings. And then there's our new sector on gender, looking particularly at this question that was mentioned earlier, heterogeneous effects: as you were saying, how different programs might have different effects for different people, and looking particularly at issues of women's empowerment.

Across all of those, I think this question of measurement is a really interesting one. You don't need to do a randomized controlled trial to invest in interesting measurement practices, but I think it pushes you a little bit in that way. As Jasper and Felipe were saying, there are challenges in collecting this information and thinking about how we measure it, and in making cost-effectiveness choices about the best ways to measure it, given that this can be quite an expensive method of evaluation. How do we make sure that we learn the most that we can with limited resources?
And I think we're measuring some things that we thought maybe weren't measurable before. I think about a lot of the work that's done in the sector I work on around issues such as social capital, or the bonds that keep communities together: some of the questions that people are researching right now about whether increasing social capital is an effective bulwark against conflict, particularly the resumption of conflict in areas where it's happened before. Thinking about how we measure that, it seems like kind of a fuzzy concept. We might not be able to put our finger on it. We might be able to talk about survey responses: do you think there's a lot of social capital here? That's not a question that's going to land for many people. So one study that comes to mind was in Sierra Leone, where they actually asked questions about how willing you would be to lend money to people in the community, and to whom exactly, measuring that at baseline and then later at endline, along with the reverse question: on some level, whom would you ask for money? So they were measuring these elements of trust, and then also seeing the extent to which people participate in civic life: do they attend parent-teacher association meetings, what kinds of activities are they engaging in, things that also seem to reflect some sense of the ways in which people are bound together. I think that's actually some of the work that I find the most exciting. And again, it's not explicitly tied to rigorous impact evaluation, but I think these things sometimes go hand in hand, and it's one contribution that different researchers are making. What we see in a lot of rigorous impact evaluation is thinking about how we're actually going to measure this. And it pushes us back to the theory of change that the program started with, and to thinking about how we think the change works.
And so I think the other thing I would pick up, which we're hearing across the panel, is this question of what Jasper and Felipe earlier called the mechanism: how does it work? Impact evaluation is an important investment because it can take us beyond "did this program work?", "did we get done what we wanted to do?", which is a very important question in and of itself, to "how did that change happen?", and how does that advance our understanding. And I guess one thing I'd say is perhaps a little bit special about the way that J-PAL works is that it's trying to tie together the academic conversation about these issues, what's getting published in academic journals, with the policy conversation as well: some of the conversation that we're having here today, and the decision makers who need to decide what to implement, by saying we're learning more about how human behavior works. In this case it was about the theory of change, identifying the assumption that a story has a personal resonance when it's told by people whom the audience can relate to. So we try to pull that out and then think: how can we apply that to other types of programming and test it in new ways? A few things to come back to. Thank you.

Thank you. Carrying on in that vein, but also moving on to our next question, in terms of why we would do this and what investment it brings: maybe, Maria Luz, you could talk to us a little bit about how you advocate within your field for better evidence, and how you try to use impact evaluation for that purpose.

Well, my field is our field, so I won't say much about it, because I'm the IOMer here, and most of you know what we're doing because you're also IOMers. To advocate for impact evaluation, we always go back to its benefits. I'm pretty new to the field of migration; before that,
I was in other fields: development economics and the environment. So for me, migration is a discovery. I'm here for my technical skills in impact evaluation, statistics and experimental methods for assessing the impact of programs, but I'm very new to the field of migration. What I found highly interesting in using impact evaluation within the field of migration is that migration is a field that desperately needs evidence, because it's very politically sensitive, and because there are a lot of opinions and a lot of preconceived ideas. We cannot afford to design programs and projects in migration on the basis of preconceived ideas and opinions; we need evidence. That's why in the particular field of migration nowadays, and irregular migration all the more, we really need robust impact evaluation.

I would also add that what is also new for me is working for a big international organization. Before, I was working for J-PAL, which is much smaller, and for DIME, which is at the World Bank but is a smaller unit. What I see myself doing when I'm doing impact evaluation for IOM is bridging the world of HQ and the world of the missions and the field work. The highly empirical nature of impact evaluation is itself a benefit, because it gives those of us who are somewhat remote from the field a very good sense of what is happening. When you're implementing an impact evaluation, you have no other choice but to get to the field, to get to understand the reality of the work in the field, and also the reality of the people you are going to work with. So I think this is an invaluable benefit for designing future projects and for advising policy makers on them. Thank you.
When we discussed a few weeks ago, one of the elements we were talking about was how to ensure, at the program level, that when we're developing, designing and implementing programs, we're thinking about how impact evaluation can be used, and what the common ground is. I'd like to go back to our panelists, maybe starting with you, Kristin, in terms of WFP: how can we use, or how does WFP use, impact evaluation to either complement or strengthen the existing M&E systems you might have within your programs?

So something to note is that WFP is right now in the middle of implementing a new impact evaluation strategy. It has quite a long history of doing impact evaluation within the organization, but it was largely decentralized until this point, so right now we're witnessing the evolution of that into something a bit more centralized, so that we can harness a lot of the demand that already exists and direct it into priority learning areas. Part of this strategy is acknowledging that we need a lot more capacity building and infrastructure building in house, and recognizing that a lot of that already exists in the country offices; this is where our monitoring data comes in. The vision for the impact evaluation strategy moving forward is to really complement the routine monitoring activities that are already happening in the field as part of the reporting that must be done to headquarters or to donors. So when we roll out, let's say, a baseline survey for an impact evaluation, the best-case scenario is that we're just hooking on to a survey that would have gone out anyway, using that survey to make sure that we're asking the right questions; and we wouldn't tinker with it much, other than building the randomization into that survey.
There's already a wealth of information that exists in our country offices that can be used for a variety of things inherent in the impact evaluation process. One that I'm thinking about right now is using it for a sampling frame, before you go in, to really start understanding the context that exists and knowing what questions you should be asking, based on the data that already exists. And there's another point that was brought up earlier: the cost of an impact evaluation is large, and data collection is a huge contributing factor to that. If we want to do this well, if we want a lot of country office buy-in, and if we want to prove the worth of the cost of the impact evaluation, then it's beneficial to us to find ways to lower that cost. Monitoring data and monitoring activities are already ongoing, so if we can hook on to those without adding more costs, good monitoring data is one of the most logical ways of making impact evaluations cheaper.

Thank you. Farida, I'll ask you a similar question, but I'd like you to add a little bit more about the financial implications, which I think came up in the earlier discussions as well, around cost and benefit; as this is also something that you are involved in, it would be interesting for you to add that.

We do cost analysis all the time, especially because donors now increasingly want value for money, and already that definition is open to interpretation: what do we actually mean?
I think one thing we learned through impact evaluations is the importance of the time dimension. Once you factor in time, especially when you deal with displaced populations, people that, as I said earlier, are displaced for 17 years on average, you have to reassess the overall cost of an intervention. At IRC, and for the humanitarian sector in general, we have developed a tool some of you may be familiar with, called SCAN, the Systematic Cost Analysis tool, which is available on the internet for free for everybody to use. It basically looks at the cost of an intervention per unit. For example, in the case of a health intervention, we look not just at the basic service delivery, but also at important factors like the quality and duration of the training, how we train health workers. By comparing different interventions, we found that once you factor in the quality of the training, the cost per patient actually drops. So if you evaluate interventions across the whole arc of the response, you get varied and sometimes surprising findings: the most cost-effective intervention may be the longest one.

That also brings me to the time dimension when it comes to research and evaluation. We now focus research on multi-year endeavors in four different areas: education in emergencies, cash in emergencies, reducing under-five mortality, and reducing family violence. Through impact evaluations we've come to learn that those are the four areas where research can have the highest impact, the highest potential benefit to our clients. Thank you very much, thank you.

One of the elements that you mentioned, Kilian, was some of the innovative approaches that you have within J-PAL. I'd like to know if you could speak a little bit more about what has already been put into place to address some of the challenges we might have in impact evaluation.

Sure, it's
a big question. Maybe something I didn't mention is that every evaluation the J-PAL network is involved in starts with a partner organization: in some cases that's governments, be it national governments or state governments; in other cases it's non-profit organizations. And I think it also starts with a question, an evaluation question, which is the basis for that partnership: we think this works, we think this is an effective approach, but we'd like to know more; or, we're interested in understanding whether one of two different approaches is more effective. Some of the conversations we've been having earlier touch on Kristin's question of "for whom": whom do we provide the intervention to, and how? So I think that's the beginning.

When we talk about the complementarity of different research approaches, a lot of that initial analysis comes from monitoring data, or from administrative data in the case of governments: looking a little bit at the data they already have and trying to learn more from it, and thinking about how we frame the question, partly going back to what the theory of change might look like, examining whether we need to unpack it a little bit further, and whether there are some missing links that we haven't thought about and that we want to test in an evaluation. I think that's one very powerful tool. Another one is various kinds of qualitative data: looking at people's understanding of the processes behind how some of these programs run, and where they think they're most effective; understanding better who the population involved in the program actually is; and things like we were talking about earlier, how many people are going to show up at the screening, trying to understand what people's behaviors are, so that you can structure the evaluation accordingly. And then at the other end, I think there's
actually a lot of work to be done, but it's really exciting work: once you have the results of an impact evaluation, what do we do with them, and how do we think about how they generalize? Earlier we mentioned external validity. I think that means thinking through carefully what assumptions were in place about why this program ran the way it did, in the place that it did, and how other contexts resemble or differ from that. One thing that comes to mind from our conversation earlier was the relatively high baseline assumptions of what risk looks like, which is interesting. It might be different to think about how this program would work in an area where people had a very low baseline understanding of the risk, and thought there was zero risk at all: how would the program need to be adapted, and what would that look like? I think that's part of it.

And then impact evaluations can often be quite dense; we learned about the annex to this report being 60 or 70 pages. How do you actually digest that material and think about how to feed it back into program design, particularly if you're not someone who just wants to zoom into the tables? That's where I think J-PAL and IPA, Innovations for Poverty Action, and other organizations like them are investing, and many governments and international organizations themselves are investing: in sitting down and thinking about the many ways in which this makes us rethink some of the assumptions we had going into other kinds of programs. It's a lot of work, and we shouldn't assume that it just gets done once an impact evaluation is published. In some sense that's the beginning of another stage: thinking about what types of adaptation we now need to build into the rest of our programs, and often, what's the new study that we run? And how do we then weigh this against effectiveness, coming back to these
questions of cost analysis: how do we run this against other programs and think about where money should best be invested?

Okay, thank you. That's an excellent point about usage, and again an element that comes up not just in impact evaluation but in any type of research or evaluation: what are we doing with all of this rich data and these reports, and how do we actually turn them into good implementation and learning for all of us? I'm going to stop with my questions now and give a chance for those who are online or in the room to ask a few questions as well. Yes, please go ahead.

Following on from that point, I'm thinking about the burden that's placed on people accessing services and support when they're asked to partake in these types of surveys and feedback groups. My question to the panel: what can we do as a sector to try and alleviate some of that burden, around the way that we share our findings, our research and our data, and thinking about how these communities are revisited by numerous organizations asking similar questions?

Okay, would someone like to start tackling that, from their own experience?

I think this goes right to the core of what Diana was talking about: we seem to be learning the same thing over and over again. So at some point you have to wonder why the learning doesn't translate into policy. I'm not sure that the challenge is to find out what our clients think; often we already know what they're going to tell us. The issue is why this learning isn't translating into, isn't informing, policies. And that's where I think advocacy plays a key role, because we have to make sure that policy makers and decision makers not only listen but actually act on what they hear, and that's where we usually face more political barriers. So yes, thank you for raising that point. We have an internal team that we call the client responsiveness team, and they come across this issue all the time. The populations, our groups, are tired of being interviewed, and
even if you give them food or some other kind of incentive, they won't show up at these community meetings, because for a community meeting you have the leader opening up the ceremony and all that. I mean, these people have to work, and if they don't work they've got to be at home looking after their kids. How can we expect them to tell us over and over again what they need? I'm not saying that we already know what they need, because that's a major assumption. All I'm saying is that at some point all this information needs to be acted upon, and why isn't this happening? Again, the why question. We may know what is happening or not; so why isn't this translating into an effective response, effective policy?

Maybe I could just say briefly: I think that question of how we use this information and report back results is important. It's often a challenge to think about the best way to frame this information, particularly those 70-page annexes. But where you can build in structures to actually adapt programs relatively quickly, and I guess this is a lesson particularly relevant for international organizations, and say: we're introducing a change, this is why we've made this change, we believe it's going to be more effective, I think that's part of the answer. Better programming comes from having had good results, whether they're positive or negative; maybe you learn that you need to scale back a program, since it's not always going to involve building it up. But I think some of the most powerful examples we have from within the network are when something started as a pilot project, got early promising results, was adapted, was run again at scale in a different way, and then was perhaps adapted and replicated in other contexts; then you have this snowball approach of something where you're trying to understand how it works. And I think the answer is, yes, the
results, you know, this is why the study worked the way it did, but it's even more so in the actual implementation, in saying the programs now run differently because it's more effective this way, and then continuing to evaluate.

Just very quickly on this: we now have a project of pooling together the results of several impact evaluations, to avoid response fatigue, and also because we are sitting on all this primary data which is costly to collect and which is super interesting. We use it for one report and then it goes somewhere on a hard drive, and IOM is full of that kind of data. So we want to pool it together, so that it becomes possible for us to exploit much more of the data we already have. That's what we are now trying to do: out of all the studies we are going to do in the impact evaluation unit, to use them as secondary data. For one, it reduces response fatigue; it's also a way to do secondary analysis without investing again in data collection; and it could also help to sort out the external validity question, which is crucial to impact evaluation, because a result is true for now and for there, but if we are running fairly similar studies in different countries, and we are able somehow to pool all those data sets into one, then maybe we can generalize a little bit more what we find for Senegal or what we find for Guinea. And if we can collaborate between different international institutions and put all our data together, I'm sure there is a lot there that we can dig into without having to harass people in villages somewhere.

Exactly, or bribe them with food. Maybe before, can I jump to the webinar to see if there is anyone who has a question, and then I'll go back to the audience? Sure, there's a question from Sophie: thank you for the presentation, and to the panel, for providing the opportunity to join remotely. Building on a point raised about how evidence is
used and applied, I would like to ask the panelists whether they can provide some examples of how impact evaluations have informed their programming, please. Great. Do we want to take a few more questions and then come back to the panel? That's fine, yes, go ahead. Thank you very much, it's really an interesting discussion. One of the challenges is always to create awareness within international organizations and within governments of using evaluations to support their work. Now we have some panelists here who have actually created such units in international organizations, and I was wondering what advice you could give on how to create units like that and how to create that awareness. And similarly, Kylian, I don't know if you've worked with governments to create similar units. We know the British government has the Behavioural Insights Team, but there are many governments that don't have anything like that, so I don't know if you at J-PAL have experience there. Then I'll just take one more question and go back. It's actually more a comment than a question. Just a little while ago we had a very big meeting down at the Graduate Institute where the International Network for Education in Emergencies (INEE) and NORRAG got together to talk about data management and data sharing. A lot of the focus, and that's an INEE initiative as well, was on trying to find ways in which we can harmonize our indicators, because when we run higher education programs in the field and want to triangulate the data we've collected, in the spirit of really very low data-harvesting thresholds, we don't find quality data in the field from M&E outputs that would allow us to triangulate, so we have no choice but to go back in. So my question to you is: how would you approach that in terms of research ethics in fragile contexts, the kind of guidelines we should be abiding by, but also in terms of data sharing, since obviously the data regulations prevent us in many ways from sharing? Thank
you. Kristin, did you want to provide just one example? Obviously, since we're the World Food Programme, we work a lot in the area of nutrition, and we're very lucky to have a nutrition department at WFP that is already quite interested in evidence. There's a very clear incentive there: nobody wants to be bad at their job, so it makes sense that you would want to use as much evidence as you can to improve as much as you can. Before I arrived at WFP there was a series of impact evaluations on moderate acute malnutrition, confusingly abbreviated as MAM, done across a few countries in the Sahel to get at common lessons that would inform not only the country programming but would give the department at headquarters some broader lessons to build on. And it actually parallels a lot of what we're seeing in this MAM series, which is that there were some results that were almost immediately actionable. The nutrition department has responded by saying it's clear from this research that the peer-to-peer approach to behavior change seems to be largely effective for encouraging better nutrition for families and children, so they're taking that approach forward in more countries. They're also building on the digital aspect of it, the mobile phone aspects: they are building better monitoring systems, also to your point, to make sure that when they go back to these countries in the future they have much better data. These cell phones basically collect visit data for health care checkups and things, information that the person themselves is volunteering as data, but it's a good way to track how much people are relying on our services and how much people actually benefit from WFP's interventions, and to use that data to inform future programming in other countries. So that series certainly taught us a lot about how we want to do impact evaluations, but it also still has effects that you can see in the organization.

Thanks for that. On the first question, in terms of working with governments and how to set up an evidence partnership at the very outset of the relationship, I don't think we have a lot of insight into how it starts, because if a government doesn't care about the effectiveness of its programs, it's going to be difficult to set up an impact evaluation at all. A lot of it happens partly through single change agents, people who say we should be asking more questions, or it comes from the dissemination of perhaps negative results: a sense that we're doing something a bit like this project which appears not to work at all, and some pressure. So at the outset of the process we probably don't have a lot of insight into how things happen. In terms of moving forward, it often starts a little like what Maria Luz was talking about: thinking about the data you already have and ways of using it more, and engaging different professors or maybe even students to think about what data we have and what story it already tells us. One thing that J-PAL and IPA have often done is embed someone whose focus is on thinking about what that data is, how it can be used, and what small changes could be made in the way the data is currently being collected that would allow us to use it much more powerfully. Is there some way we could be cleaning it or standardizing the way we bring in this information that would really improve quality with only a small marginal change? And then from there, setting up labs with
governments that say: we have a question or an area that we're particularly interested in. I think of J-PAL and IPA's partnership with the Education Ministry in Peru, and in India there's a strong partnership with the government of Tamil Nadu at state level, to say: we're going to evaluate a huge range of programming in this area, and we're also going to think a lot about what monitoring and evaluation actually means in this government unit, how we can learn to do it better, and when impact evaluation is and is not going to be the best way forward. And then there's more energy to drive that into a cycle where, before you start a program, you draw on the existing evidence base to say what's clear that I shouldn't be doing and what are some ideas about what I might be doing, and then you innovate and pilot, running this cycle of innovation and evaluation. So at the very outset we're not exactly sure, but once you have that start that says okay, we'd like to learn more about the way in which we're programming, that's one way to begin.

We are close to running out of time, because I know there's still another presentation, but can someone tackle the question about data sharing? I think it's a very valuable question. I'd like to make a broader comment about data sharing and, more generally, data management and knowledge management. To go back to what I was saying earlier, we are living in an age where there's just too much information out there, and even if we do a great job on impact evaluations, the reality is that they're going to sit on a shelf or on a hard drive very soon and we're not going to go back to them. So at IRC, and I think in the humanitarian sector generally, it is indispensable to hire for and design a new role: that of the knowledge broker. The impact evaluation, the book, the toolbox, the toolkit is not going to do the trick; it's not a
database that's going to find the magic solution to this problem. It's people curating content and digesting all this information into something you can communicate to the busy politician. You think the elevator pitch is a parody? It isn't. I have been in elevators with ministers who literally had 20 seconds to hear my key message, and at that moment I have to be quick, with killer facts and the key message, and I have to look and sound confident. For that, I don't have time to do the research work myself; I need someone in my organization who will talk to me and give me the rationale for the argument I'm going to make in that elevator. The knowledge brokering is absolutely key. Then we can move to the technical side, the legalistic aspect of things, and discuss how we're going to share information, but to me that's already an idealistic scenario. Before even getting to that challenge, which exists, I would like to know how we're going to manage all of this, how we are going to manage impact evaluations.

I'm going to turn that around, because you've taken the words right out of my mouth. I want to ask one final question, with just a short answer from each of you. For those who might still be unconvinced or unclear, is there one piece of advice or one pitch you can give just to keep them engaged and interested in taking this forward? I'll start with you.

I think evidence-based policy and advocacy really works. Once you're able to sit down with the busy politician or the skeptical, cynical bureaucrat and actually show them that what you're saying is documented and builds on reliable, rigorous evidence, they're going to be ready to listen. And if you can summarize it in 10 or fewer points, so much the better. Thank you.

I would say: don't be afraid of the cost. As an economist, I'd say you always have to look at the opportunity cost you're saving. Think of implementing a program or project that doesn't work: it's going to cost way more, and I don't need to do a survey to tell you that it's going to cost way more than the evaluation you are investing in. And this evaluation has many positive spillovers anyway, so you can only win by doing a rigorous evaluation of your project.

I would just say that, with the experience we have so far, one of the encouraging lessons we've learned is that incentive structures for impact evaluation already exist: people are not only asking for it for reasons of accountability but also to learn what it is they next want to fund, and that demand for evidence already exists among programmers, people who are really in the field. There's a wealth of information and a wealth of experience in all of our countries, and people want to be able to prove that they are thought leaders in what they do. So it takes so little to be the conduit for that demand and for those incentives, and to be the advocate bridging those two things. Okay, thank you.

I'll make one practical argument and then one broader one. The practical one is that impact evaluations, once you have strong evidence, are a huge way to unlock resources. Over 400 million people have been directly touched by programs that were evaluated by J-PAL researchers, and those didn't start at scale: they started small, showed promise of effectiveness, and then were scaled up over time. And then the broader point, slightly more idealistic perhaps, or maybe not cynical but practical: there's just a wealth of information you can learn. With a well-designed impact evaluation you can uncover quite a wealth of insight, and that's useful both for the program itself and for broader thinking about how development, or any program, works. Thank you very much. Thank you for taking the time to join us, and thank you