Good morning, everybody. Welcome. Thanks very much for joining us. My name is Lisa Cardi and I'm the Deputy Director of the Global Health Policy Center here at CSIS. Before we get started, just a quick administrative announcement: I don't know if any of you were planning on coming to an event we were having here this afternoon at 1:30 as part of our Fault Lines debate series. Unfortunately we've had to unexpectedly cancel that due to a conflict that emerged with one of our debaters, but it's going to be rescheduled for January, so if you were planning on coming today, we hope you can come back and join us in January for that discussion. I'll start this morning with a bit of a confession, which is that when we decided to organize a discussion on this subject, on data and using data to get better global health impacts, we weren't entirely sure what kind of audience we'd get first thing on a Monday morning, and we can see from all of you here that we've actually gotten a great audience. In addition, we have several dozen people online on a webcast, and we want to say welcome to them as well. For some of us the field is quite fascinating, and I'm certainly one who really enjoys thinking about this set of issues, but for other folks I think it can sometimes seem a little bit dry and a little bit technical, so it's certainly good to see that that is not the case here. The use of data, how to collect it, how to analyze it, has been a fundamental part of public health training for many decades, and it's certainly the language of epidemiologists. One thing we've seen over the last five or ten years is a much wider and deeper understanding, and a higher prioritization, of the critical importance of getting good data and then using it effectively. We've seen it in the work of the Gates Foundation, we've seen it in the area of HIV/AIDS with the focus on knowing your epidemic, and we've certainly seen it now in the new U.S. Global Health Initiative, where one of the key principles is improving metrics, monitoring, and evaluation. So it's great this morning that we have a first-hand opportunity to learn more from three individuals who've been doing some really critical work in this area. Let me say a little bit about each of them. Gina Dallabetta is a senior program officer at the Bill and Melinda Gates Foundation, where her work has largely focused on the Foundation's HIV/AIDS initiative in India, known as Avahan. Before that she spent many years with FHI. She's been working in the HIV/AIDS field since the 1980s and has lived and worked in both Malawi and India, and probably a number of other places that I don't know about. Gina is a graduate of MIT and of the Johns Hopkins School of Medicine. Paul Bouey is the Deputy Global AIDS Coordinator, responsible for strategic information, budget, and management. He was a senior advisor in PEPFAR from 2006 to 2009 prior to assuming his current position. He's worked extensively with WHO, UNAIDS, and the Global Fund on issues related to strategic information. He was director of evaluation at the Pangaea Global AIDS Foundation from 2001 to 2006 and was resident in China during part of that period. He's also worked extensively in Rwanda, South Africa, and Ethiopia. Paul has degrees from the University of California and from the Johns Hopkins School of Public Health. And then finally, Ruth Levine will be joining us shortly.
Ruth is the Deputy Assistant Administrator at USAID in the Bureau of Policy, Planning, and Learning. She leads the agency's efforts in evaluation. Before joining the agency last year, Ruth spent almost ten years at the Center for Global Development as a senior fellow and vice president, where she did a lot of really critical research on issues related to evaluation, both in global health programs and in development more broadly. Prior to her time with CGD, she was with both the World Bank and the Inter-American Development Bank. So what we'll do is start off with Gina, and Gina's going to tell us a bit about the work that she's pursued in the Avahan program, and particularly lessons learned from data collection and use in that HIV/AIDS prevention program that are broadly applicable to other types of global health efforts. Then, if Ruth's here in time, we'll move to Ruth, and she's going to give us an overview of how evaluation is being approached under the new, broad Global Health Initiative, and particularly the GHI learning agenda. And then we'll turn to Paul, who's going to tell us about strategic information in the context of PEPFAR. Then we'll have a little discussion here among the panelists and then we'll open the floor for questions. So thanks very much for joining us. I think we're going to have a really great discussion this morning. Thanks, Lisa. My topic here today is really to talk mainly about monitoring data, not all the other data that one uses in terms of evaluation, but to focus on some very basic data use principles that we were able to employ in Avahan to achieve scale and coverage. And so, let's see, page down. So today I'm going to talk a little bit about definitions and principles that I think we can all agree on about monitoring data; talk about some of the barriers that I've seen globally in terms of monitoring data, and this is probably more applicable to prevention programs than care programs, because clinic-based programs have a built-in monitoring system as part of the health system; talk a tiny bit about Avahan and our experience; and then offer some recommendations based on that. So I think we can all agree, or I think the literature basically says, that good monitoring data has four major roles in program delivery. One, it can really inform and guide scale-up: district managers can use the data to drill down and look at districts that aren't performing well, and it allows frontline workers to manage and know what they're doing. Two, data from monitoring systems can give early signals to funders that the program is moving in the right direction, and this is particularly important given short donor timeframes and competing priorities for money. Three, it's a critical component of outcome and impact evaluation: data along the program pathway showing that you've achieved intermediate outcomes gives you plausibility for your evaluation design, and if you have data showing that you haven't really implemented the program as designed, maybe you don't need to do the more expensive evaluation surveys, because you haven't done what you set out to do. And then, four, in some cases good monitoring data can be all you need for an adequacy evaluation of a program. So, for example, vaccination coverage from a good system, along with evidence that you have a good cold chain, is probably enough to say that you're doing what you need to do in a vaccination program.
So if monitoring data is so important, why is it not done very well in the field? There are several barriers. One is that monitoring is lumped with evaluation, when it probably should be lumped with management; and because it's lumped with evaluation, it's sort of a poor stepchild to something sexier in terms of surveys and analytics. Monitoring is just day-to-day doing the routine, and for people interested in data and results it's not that interesting. Second, I think enough attention hasn't been paid to developing good indicator systems that are really tuned to the program pathway, meaning we will train health care workers, provide them with vaccines, and they will deliver services, etc., etc. You really need indicators along that program pathway to be able to talk about how well you're doing in terms of implementation. There are also problems in trying to capture transactional data and in having indicators that measure exactly what you intend to monitor, and there are complicated forms, cross-postings, errors, and double-counting issues that can really be taken care of with careful initial design. Third, there's a lack of a data use culture. Most people use monitoring as compliance: they report up because they're required to, they report up to show that they're doing something, but there's not really a culture of using the data. This was particularly true in India, where everyone was a data reporter and no one used data except in the five-year designs. And then finally, there are problems getting an overall view in a country, because there's no standardization of implementation, no standardization of indicators, and some difficulty getting all the data into a central repository, in part because of the lack of standardization and in part because there are separate reporting requirements that somehow never get centralized. Just for definitional purposes, I'm going to be talking mainly about routine monitoring, which is the actionable, routine measurement of finances, outputs, and outcomes that one gets from individual transactions at the beneficiary level. Your primary data forms for that are really financial and administrative records. This is in contrast to process evaluation, which has more data in it but also includes routine monitoring data. So, a couple of words about Avahan. Avahan's phase one ran from December 2003 to December 2009. It was a $250 million investment by the Bill and Melinda Gates Foundation targeting high-risk groups in India, namely sex workers, men who have sex with men and transgenders, injecting drug users, and clients of sex workers: about 5 million people at risk. The focus was on what were at the time the six highest-prevalence states in India, with an overall population of about 300 million, and you can see here the actual numbers that were covered under the program. The idea was to complement government services, and so the blue boxes show the districts and sites where the Avahan-supported partners worked, as well as the national highway and rail client interventions in the yellow and red on your right-hand side.
The first phase of the program had a well-defined prevention package that included, as you can see, STI treatment, condom promotion, community mobilization, referral for case management, and advocacy, as well as some communication for social norm change; a large investment in evaluation, knowledge building, and dissemination; and then some investment in the government for transfer of whatever learnings were appropriate. The package was codified into something that's internally called the Common Minimum Program, and I think this was one of the first things the partners did well: there was a standardized project implementation plan across all partners. I have this slide here to indicate that Avahan wasn't small. We had at peak about 136 NGOs implementing the program across six states, and if anyone knows India, these six states are quite diverse culturally, linguistically, and socio-demographically. The money flowed through nine lead implementing partners, who subcontracted 129 NGOs, who in turn hired about 7,000 peer educators as frontline workers. So technical assistance and oversight went down, and data flowed up from these peer educators to the NGOs to the state lead partners to the central repository. All of it was guided by coverage goals of greater than 80% and this Common Minimum Program, which specified explicitly what at a minimum the program needed to have, though partners were allowed to innovate around that minimum to achieve coverage. The data management information system in Avahan was really focused at the transactional, beneficiary level: outreach data where these low-literacy sex workers recorded individual transactions, whom they met, whether it was the person they were supposed to reach, condoms distributed, and referrals to clinics. That was combined with clinic data, since clinics were also available at the sites. These were then used on a weekly or bi-weekly basis, where the outreach workers, the peer educators, and the clinic professionals looked at whom they had reached and who was missing, made plans to try to reach them, and did remapping to see whether there were new entrants into the communities that we should be looking at. So this local level was where decisions were made about coverage, targeting the Common Minimum Program coverage norms that were specified. This data was then aggregated on paper, reported up to the NGO, and entered into computers. We had initially tried to do a web-based reporting system and that just didn't work, for lots of reasons; even India had trouble with connectivity from the rural areas and it was very difficult, so we ended up mainly using Excel spreadsheets that then got forwarded up to the state lead partners. Dashboard indicators were accumulated and then reported centrally. I think over time the partners got very good at this, and we would say that what really made this MIS useful is that there were high-quality frontline capture tools; that there was a low error rate in aggregation, because the tools were used once and not copied and tallied and recopied; and that we really created a data use culture, such that these frontline workers were responsible for 50 to 60 of their community members, they knew what they were supposed to do, and when they made decisions around that, the NGOs made decisions to support the peer educators, et cetera.
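To make that flow concrete, here is a minimal sketch in Python of the kind of single-entry roll-up just described: individual contact records aggregated once into peer-level coverage indicators, including the list of who is missing that peer educators used to plan the next round of outreach. The field names, norms, and data are invented for illustration; this is a sketch of the idea, not Avahan's actual MIS.

```python
from collections import defaultdict

# One row per contact recorded by a peer educator on a frontline tool.
# Hypothetical records, entered once to avoid copy-and-tally errors.
transactions = [
    {"peer_id": "P01", "member_id": "M101", "condoms": 10, "referred": True},
    {"peer_id": "P01", "member_id": "M102", "condoms": 4,  "referred": False},
    {"peer_id": "P02", "member_id": "M201", "condoms": 6,  "referred": True},
]

# Registered community members per peer educator (from site mapping).
registry = {"P01": ["M101", "M102", "M103"], "P02": ["M201", "M202"]}

def dashboard(transactions, registry):
    """Aggregate transaction records into peer-level dashboard indicators."""
    contacted = defaultdict(set)
    condoms = defaultdict(int)
    referrals = defaultdict(int)
    for t in transactions:
        contacted[t["peer_id"]].add(t["member_id"])
        condoms[t["peer_id"]] += t["condoms"]
        referrals[t["peer_id"]] += t["referred"]
    rows = []
    for peer, members in registry.items():
        seen = contacted[peer]
        rows.append({
            "peer": peer,
            "coverage_pct": round(100 * len(seen) / len(members)),
            "missing": sorted(set(members) - seen),  # who to find next round
            "condoms": condoms[peer],
            "referrals": referrals[peer],
        })
    return rows

for row in dashboard(transactions, registry):
    print(row)
```

The "missing" list is the actionable piece: it is what lets a frontline worker, not just a district manager, plan the next week's outreach from the same data that rolls up into the state and central dashboards.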
I mean, none of this is really rocket science; it's what we all think should happen, but I would say it probably doesn't, very often. This monitoring system was embedded in an ecosystem where the data was really used: there was intense field engagement from supervisors, whereby the MIS was the point of the field supervision, so the MIS data was looked at on a monthly basis. In addition, there were formal reviews, probably every six months to a year, where foundation staff and other partners came in to look at data and decisions were made about what's going wrong, how can we fix it, why aren't we reaching what we need to reach, all of this based, again, on the standards defined in the Common Minimum Program. I would say the Common Minimum Program evolved as we went forward, but at least there were initial targets that everyone was trying to reach. I just wanted to give an example of how this worked. As background, Avahan in theory was a peer-led program. If you know India, NGOs basically hire high school and college graduates to do outreach, and we really wanted to shift over to having peers be the frontline workers, aligning ourselves with the theory that peers are best able to reach marginalized and high-risk groups. So as one of the Common Minimum Program norms, NGOs were asked to hire one peer outreach worker for every 50 community members, and this was based on some estimate of the hours that were needed. Over time this peer-to-community ratio improved until we were at about one peer per 50 community members. But then someone looked at the data and said, okay, who's delivering the services? And what was happening is that we had indeed reached the peer-to-community ratio, but the services were still being delivered by these high school and college graduate outreach workers. At which point we investigated further, and the state lead partners investigated further, and it was clear that there was a lack of trust in the sex workers: they didn't think the sex workers and MSM could do this. And so there was a whole effort to develop tools for the frontline workers, including low-literacy monitoring tools, some training and support around how to work with these community members, and then the creation of new positions, because part of the issue was really job loss. People didn't want to lose their jobs; there was a great fear of losing your job if you relinquished your work. So these NGO outreach workers became supervisors. What happened then is there was skill building for peers, coaching for NGO staff, and a monitoring focus shift: the phase-specific indicators shifted, and everyone started to look at who's delivering the services now, not whether services are increasing but who's actually delivering them. And there was a big shift in who delivered the services, such that about 80 to 90% of the services were ultimately delivered by the peer outreach workers. I just wanted to show you, based on the first two years of implementation data (we don't have the full four years), that monitoring, not monitoring and evaluation but monitoring for management, took about 8% of the cost from the lead implementing partner and NGO level down. So this wasn't a trivial cost. Some people say it's part of normal program implementation and you shouldn't cost it separately, but based on costing, this is what we think it cost.
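A hedged illustration of the indicator shift just described: the staffing ratio alone can look fine while services still flow through NGO outreach workers rather than peers. The numbers below are made up for the example; only the logic of checking both indicators reflects the account above.

```python
# Hypothetical numbers: the staffing norm is met, yet the who-delivers
# indicator tells a different story.
peers = 140                       # peer outreach workers on the rolls
community_members = 7000          # mapped high-risk community members
norm = 50                         # Common Minimum Program: 1 peer per 50

# Contacts recorded this month, by cadre of the person delivering them.
services = {"peer": 310, "ngo_outreach_worker": 1890}

members_per_peer = community_members / peers
peer_share = 100 * services["peer"] / sum(services.values())

print(f"members per peer: {members_per_peer:.0f} (norm: {norm})")
print(f"share of services delivered by peers: {peer_share:.0f}%")
# Output: the ratio meets the norm (50), but peers deliver only ~14% of
# services, the kind of signal that prompted the skill-building and
# role shift described above.
```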
So based on this very short review of Avahan, I wanted to talk about what we think the partners did well to really get good data use. One, there was obviously an investment of money and effort to get good design. There was a real change in the ecosystem, where instead of people just reporting up, there were supervisors going down, with frequent reviews of the data. For example, when foundation managers worked with state lead partners and we shifted the phase-specific indicator to something else, that shift went through the system, such that the NGOs started to look at it and the communities started to look at it. These frequent reviews allowed us to shift the phase-specific indicator: once we reached scale with footprint, we were able to look at intensity of the intervention, and then we were able to add additional services as we shifted what we looked at. Internally within the project it was always said that these implementation teams had limited bandwidth for multiple things to look at, so it was really phased implementation over time. And then it's really important to equip the data collectors to use the data, both with skills and with the authority to make decisions based on the data they see. So we really saw, at the local level, these peer outreach workers saying, okay, I've only seen 20 of my 50 charges this month. What do I need to do to find them? How do I find them? Let's plan together with my other peer educators and with the outreach workers. And then, again, I think we should probably refocus monitoring toward management instead of toward evaluation. It informs evaluation, but it's much more critical for management. And as I said before, there needs to be careful attention to indicators, really improved tools, and trying as much as possible to have single transaction entries so you can avoid errors along the way. There's also, in terms of integration at the national level, some standardization needed, both of program elements and of indicators, with all systems reporting into the national system. And I would say even building it into the ecosystem at the national level, with regular reviews of dashboards. India does this quite well now as part of NACP III, where the donors meet once a month and look at dashboard indicators that the national level is monitoring, which is really a subset of the state indicators, which is a subset of the NGO indicators. And then I'm also wondering if there's a role for some international standards for monitoring. There are now international standards for HIV prevention indicators, and these are really impact and outcome indicators; about 10 or 15 years after the start of that effort, there's a lot of data collection in countries that report condom use, et cetera, et cetera. I'm wondering if an international system might be geared to require some monitoring standards also. Slow, but it might work. All right, thank you very much for your attention. Gina, thanks very much. I'm sure there are going to be a lot of questions and we'll come back to them, but before I give the floor to Paul: Gina is probably too modest to point out, but there... There she is. Okay, Ruth, come up please. Before I give the floor to Ruth (there are stairs on the side), let me just say that there's actually a great booklet on the foundation's website with the title "Data: Use It or Lose It," and it captures a lot of what Gina's been saying.
It's part of a broader collection of work on the Avahan program that you might find of interest. Gina was too modest to tell you about that herself, but it's there. So, Ruth, welcome. Thanks very much for coming; you've had a pressed morning, so we're grateful you can join us. We can either give you the floor now, or, if you'd like to catch your breath, go from there to Paul. Okay, great. I don't know how to start the slide. Oh, it's right here, probably. Thanks very much, and apologies for being late. I was at a previous event that ended right at 11, and despite our investment in science and technology we are not yet at the teleporting stage, so I had to make my way across town. So thanks very much for the occasion, well, first, to be on a panel with Gina and Paul, and to talk about the work of USAID to really strengthen our evaluation and monitoring work, and also how that applies to the Global Health Initiative, which as you know is a fully interagency activity, so what we do at USAID contributes to a larger whole. How long do I have? As long as I want. No, no, no, no, we'll try to keep it balanced here. So there are two somewhat inelegantly, I will say, combined or joined parts of this presentation. The first part is really about what we're doing at USAID. The second part is the direction the monitoring and evaluation work is going in the Global Health Initiative, which I think will lead quite well into Paul's presentation. These are sort of stylized facts about where USAID currently is, particularly with respect to evaluation, and I'm going to do the opposite, really, of what Gina did: I'm going to say that evaluation has been the poor stepchild of monitoring, or a stepsister or something, and really is in need of some strengthening, to recapture some of the proud history USAID has as a leader in evaluation and to bring it up to fully contemporary standards. So, again, this is a kind of stylized profile of where USAID currently is. The evaluation practice is highly variable, dependent largely on the kind of idiosyncratic individual interests of mission directors and of different technical specialists in the technical bureaus, and I think also reflective of sector-specific norms. So, for example, in the health sector there's a strong culture of monitoring and evaluation compared to other sectors, and that has certainly been maintained, and there have been resources to back it up in health, where there haven't been in, for example, growth or democracy and governance so much. The requirements with respect to evaluation within the agency are quite limited: there's currently a requirement that an evaluation be conducted once during the lifetime, typically five years or so, of each strategic objective. The guidance is currently silent with respect to methodology; in fact, there's a statement in our ADS, the kind of marching orders for USAID staff, that says there are no standard methods for evaluation. This is not a surprise to those who have followed the agency or worked with it closely: there has been over the past decade or so a tremendous amount of attention to identifying, collecting, and reporting to Washington indicators of program and project performance, largely output-level, attributable to U.S. government investments, which has actually really strengthened, I think, the monitoring functions at that level within the agency.
It has at the same time not been paired with equally strong attention to evaluation. And the evaluations that are conducted are many: we looked at the 2009 data and there were hundreds and hundreds of evaluations, listed variously as assessments, impact evaluations, performance evaluations, process evaluations, all sorts of different names. They were almost all contracted out, largely through institutional contractors, and the average price (I'm trying to recall precisely, but it was somewhere in the $40,000 to $60,000 range) gives you a little bit of a sense of how in-depth, or not, such an evaluation might be. The methods used are highly variable; I'm going to go into that in a moment. There are few explicit quality standards or levels of review. Basically, it's the contracting officer who's responsible, often, for the program itself, whatever is being implemented, and also for contracting out the evaluation and assessing whether or not that evaluation meets a quality standard and yields useful findings; and then that person is also responsible for ensuring the dissemination of that evaluation. So you can see it's kind of a small circle there of implementing and evaluating, and there are a number of cases where the implementing partners themselves are responsible for identifying the team to do the evaluation of their own programs. And I would say that evaluations at USAID have largely been under-designed; that is, the scopes of work have been quite general in terms of what the evaluation questions are, and quite vague with respect to what the appropriate methods are to do the evaluation. So that's now, and we're hoping that starting in January 2011 the world will look a little bit different, or start to. One of the things that falls within my responsibilities is working with a terrific team to draft a new evaluation policy, to shepherd it through the internal and external reviews, and then to set things up for, hopefully, successful implementation of that policy. So what I'm going to do is walk through the key elements of the emerging evaluation policy. We're starting right now a round of consultations, and I'm counting this as one of those; there's also a presentation later this afternoon at InterAction that will go into more detail. A few elements of the new evaluation policy. First, we're going to use terminology in consistent ways. When we talk about impact evaluation, that's going to be cause-and-effect evaluation: the kind of evaluation where you can estimate the net impact of a program, where you have a credible counterfactual. That is not the way the term is currently used, by and large, so that's going to be a change. And we're using the term performance evaluation to capture quite diverse types of evaluations that use mixed methods to look at progress toward more proximate types of results. Second, we are proposing to institute quite aggressive, stepped-up requirements: that for programs over a certain dollar threshold, the evaluation will be integrated into program design; that there will be performance evaluations after the expenditure of a certain threshold dollar amount (I'm being intentionally vague about what that is, because it's the subject of some debate, as you can imagine, whether it should be a low bar or a high bar); and that there will be impact evaluations, that is, cause-and-effect evaluations, for any programs that purport to be proof of concept, or pilots, or
are essentially testing fundamental hypotheses about micro-level behavior. The third element I'll highlight is strong and appropriate methods. That does not in any way mean always experimental design, but it does mean that for certain programs we will require the collection of baseline data, and that that data will be held in a repository, because one of the things I've heard more than once is: oh yes, we did a baseline, but it got lost. You laugh; how many people here have been involved with either evaluations or programs where a baseline was done and then in the end it could not be found? There are two; I'm sure there are many others, and you just don't know yet that the baseline will never be found. We are requiring in scopes of work that the evaluation questions be stated very clearly and that they be tied to the decisions that will be informed by the findings from the evaluation; this is a core link to the eventual use of the evaluations. We are requiring in scopes of work an articulation of the social science methods that will be used, quantitative and qualitative, the analysis plan, and so forth. I've looked at a lot of scopes of work, and as I said, they do currently tend to be quite vague on the methods, as in: there will be a desk review, there will be field visits, and there will be interviews with key informants. That is a very, very typical scope of work for an evaluation. And there will be required reviews of those scopes of work and of draft reports for large and high-profile evaluations. Okay, so I've sort of covered this, but it cannot be emphasized enough how different our scopes of work are going to start looking, and this is really an ambitious agenda: as I said, clear statements of the evaluation questions; a written design with data collection instruments, data analysis plans, and dissemination plans; a focus on analytic methods that will generate results that are as reproducible as possible, that is, if you had a different team using the same methods, they would arrive at the same findings and conclusions. If you ask people right now what is the biggest determinant of whether an evaluation is good or not, they will say it was the team. And I think that's true right now, that it depends on the team, but it depends on the team because some teams use better methods than others. To try to even the playing field a little bit, we also include in the scopes of work a description of the team members' skill requirements and qualifications. Right now a typical scope of work gives the team members' affiliations rather than qualifications: it will say one consultant from X institutional contractor, one USAID staff member, or one CDC staff member. The evaluation policy also recognizes that valuable evaluations aren't free and that there will have to be some earmarked, dedicated resources for evaluation, and we're putting in a rule of thumb about what that might be. We're also instituting mechanisms to provide technical support through central mechanisms as well as existing ones at the technical bureau and potentially the regional levels. We have just started work on a new training program for program managers and evaluation specialists; the first training will occur in late January 2011. We have joined the International Initiative for Impact Evaluation and will be active members of that. We are seeking hard to embed the new approach to evaluation at USAID within the presidential initiatives to which we contribute, namely the Global Health Initiative, Feed the Future, and the Global Climate Change Initiative, and
we are recognizing the link to monitoring. Right now, on the monitoring side, one of the major efforts underway, in which I am involved quite heavily, is to try to streamline some of the reporting processes and look carefully at the nature of the indicators that are being collected and reported to Washington, to see how closely they align with what's actually useful either for program management or for learning and accountability. On USAID evaluation policy I could talk all day, but I will not. We focus on unbiasedness, which is a little bit different from, but a close cousin of, independence; it's the word we are using instead of independence. Implementing partners won't be invited to evaluate themselves; they will be asked, and contractually required, to share information that will be useful for evaluation; and evaluation teams, while they may include USAID and other USG staff, will be led by external experts. All of this is proposed, I should say; we are certainly collecting comments and it has not yet been approved in the agency. We are focusing on transparency, with registration of evaluations at the outset, much like a clinical trials registry, and disclosure and dissemination of findings, with limited, principled exceptions in cases where it would jeopardize security or where there is a clear need not to disseminate all information. And last but certainly not least is a focus on the relevance and utility of evaluations. Let me turn quickly to how this is being manifested in health. Health, as I said, has been a leader, a shining star, in monitoring and evaluation. It's had the resources; it's had the disciplinary culture of measurement. There has been a long tradition of investment in key data sources, not the least of which is the Demographic and Health Survey, perhaps the best investment USAID has ever made in health. There are strong, standardized, internationally recognized outcome and impact measures in health, where elsewhere there certainly aren't; I've been working with teams looking at counterinsurgency, and, well, good luck with that. It makes me long for the under-five mortality rate, let me tell you. There's a lot of ongoing engagement by the U.S. government, across the interagency, in processes to harmonize with other donors, with national systems, and with multilateral institutions, UNAIDS and others, and a strong tradition in global health of investing in operations research, which is sometimes the kind of cause-and-effect evaluation I was referring to. Health has also benefited from a huge amount of interagency dialogue about methodological quality and appropriate indicators, really drawing on some core technical strengths at the Centers for Disease Control, NIH, and multilateral institutions. Monitoring and evaluation is a core principle, one of several, in the Global Health Initiative, and what we're doing, and Paul may wish to speak to this more, is really trying to build on, not supersede, existing monitoring and evaluation within the programmatic areas. Of course HIV/AIDS is the largest, but there's also the President's Malaria Initiative, there's the family planning program; all of these separate programs have their own monitoring and evaluation activities. We are not replacing those by any means, but seeking to build on them. We're trying to highlight some performance benchmarks, that is, more process indicators, as well as progress toward achieving the health targets that you may all have committed to memory from the information that's been out about the Global Health Initiative, and we have introduced for
performance reporting some new indicators at the outcome level, so above the level of outputs attributable to USG investments. And through the learning agenda in the eight Global Health Initiative Plus countries, where they're getting an extra dollop of resources (and there's no dollar amount on what that dollop is), we're asking country teams to identify core questions about the effect of the GHI principles; I'm going to go into that in a minute. With respect to indicators, as you probably know, there are basically two ways in which indicators from the field level are reported up to Washington and then eventually to stakeholders in Washington. You really do have to give me a time check. What do you think, five minutes? Okay, thank you. So there are basically two systems: one is the FACTS system that's managed by the F Bureau, and the other is the PEPFAR reporting system, which I suspect Paul will talk about. Every year in December the USAID missions report through FACTS a quite large number of indicators. For this reporting cycle we have added a number of indicators, and there was a big tussle over whether we should add to what's already a large number; it was decided that we would add a small number to a large number, but it is still adding on top of, or in addition to, the program-specific indicators. Most of those new indicators are outcome-level and accessible through DHS and other international sources. So you might ask, well, why wouldn't we just sit in Washington and collect those and put them into our own database if we want them, instead of asking field missions to do so? But actually we're trying to be responsive to requests that missions have made to be able to report from the country perspective at a kind of higher level. I was just in Bangladesh, and one of the technical staff there said, we're playing a game, and we want to stop just looking at the ground; we want to look up. I like that image, and I think the outcome-level indicators permit our teams to look up to where they are going along with the government and other partners. We are working on designing an annual report, seeking to place as minimal a burden as possible on the field (and I will report that there is huge skepticism among folks in the field about whether it is possible for Washington not to place a huge burden on them), an annual report that is responsive to stakeholder interest in Washington about the achievement of process benchmarks and the pace toward targets, while at the same time (and this is the trick) recognizing that part of the intent of the Global Health Initiative is to make investments that are responsive to nationally defined priorities, so that there is not really extreme uniformity across GHI Plus countries, and certainly not across all countries in which the U.S. has health investments. So coming up with a coherent way to report on a very diverse set of programs is something that we are working on, right, Paul? I'm going to go quickly here. I don't know if people got copies of the presentation; we're actually going to have copies on our website, but people don't have them in front of them right now. So, what this is (I'm definitely not going to read all this) is just trying to give a sense of the combination, in this case for the TB subsector, of indicators that relate to U.S.
government-funded activities all the way up to those to which we are contributing, TB prevalence for example. Same here; you can look at this in more detail later. And same here: this is a portion of the child health indicators. As you can see, a number of these, most if not all of the outcome-level indicators, come from the Demographic and Health Survey. Briefly, the learning agenda is, as I said, an effort within GHI Plus countries to develop a coherent, limited (not vast) number of studies, so that's evaluation and research, looking at the value added of the GHI approach. These will be around questions focused on system strengthening, country ownership, sustainability, the focus on girls and women, partnership, and other GHI principles. They are designed to support, and be responsive to, in-country decision making as well as headquarters' interest in understanding how the new way of doing business under GHI either accelerates or decelerates (we have an open mind) progress toward achieving health impacts, and then the broader systemic changes that are sought. The components of the learning agenda include questions that are specific to individual countries and are probably only of interest to those countries; and then we hope and expect that there will be a significant share of learning agenda questions that are of interest across more than one of the GHI Plus countries, where we can do cross-country work that uses comparable methods so we can have more externally valid findings. The learning agenda also will require country teams to think through the plan for sharing findings, both in country and with other GHI country teams. So let me just give a couple of examples of questions. I'll take the second one: in a country where one of the GHI efforts is around introducing or strengthening results-based financing for the provision of healthcare, what changes are observed in the delivery and utilization of services and/or in health outcomes? And on the process side, what management strategies are used when an incentive program is introduced? This applies, we believe, to at least one country and maybe more. Let me give another example, again the second bullet: how is the emphasis that the U.S.
government country team is placing on women- and girl-centered programming influencing the host government's policies and programs? This would largely be a qualitative study trying to understand, in essence, the interaction between the principle of women- and girl-focused healthcare and the principle of country ownership. So those are a couple of questions that might be within the learning agenda. I won't belabor the point that there are a variety of anticipated study methods, depending, again, on whether we're looking at a proof of concept of how to affect micro-level behavior change or some broad policy question. And then this is my ask of you, a few things. One is that I think we all recognize that we do not yet have anything close to best, or even good, indicators of some of these multi-dimensional principles. Whether we ever will, I just don't know; I mean, I don't know if it's a completely quixotic quest or not. Certainly there's a lot of work being done around the different dimensions of health system strengthening, including the health services readiness index, which I think we're very optimistic about. Capacity building and country ownership are really very problematic from an indicator perspective; sustainability, maybe a little more hope on that if we break it down into financial and institutional sustainability. We certainly need expertise to conduct studies with national and regional counterparts. We need opportunities for disseminating findings from the studies under the learning agenda; the earliest we'd see those would be in the latter part of 2011. And here's the big ask of you, and of all of the advocacy community, all of those who influence the global health space: give us all a chance to make mistakes in program design and implementation. What that means is not giving us a pass on idiot mistakes, things that any schoolchild would know not to do, but giving us a chance to report on findings that are disappointing without jumping on top of us and saying you've failed. That is a huge ask, particularly in a tight budget environment, but without that kind of space to honestly report when things aren't going well, we really will not have any chance of achieving our vision with respect to transparent, honest, and informative evaluation. So with that I will close, after taking more time than I should. Thanks very much, Ruth. Thanks very much, and I think everyone found it very interesting, so no need to apologize. Now we'll turn to Paul. And Paul, you thought you were coming to talk about PEPFAR's approach to strategic information; actually, you are here now as the tie-breaker over the question of whether it's monitoring that's the orphan child or evaluation that's the orphan child. Okay, great. So let me start by saying that although we hadn't gotten together beforehand to discuss the presentations that we've all brought to the table, in fact they fit very well together, so I'm actually going to try to tie the pieces together, at least representationally. As we'll see, monitoring plays clear roles and evaluation plays clear roles, and if anything I would say that strategic information, both monitoring and evaluation, is really the stepchild of all programs: it's marginalized, it's an afterthought. And although, as Ruth described, the planned M&E policy coming out of AID is to integrate it into program design, we've spoken that mantra for decades and it really doesn't happen all that often. Case in point: I'll try to actually describe to you what's going on in PEPFAR, a little
bit of the historical vantage, but also the complexity of trying to get data use institutionalized in countries as well as here at USG; that's part of the dream, the data dream part. So, as we have seen here, and as I think we all know from the discussions everyone has around data use as an issue: we all recognize the role of information. It's key for any program that's out in the field, as well as for the presidential initiatives or otherwise; it's fundamental to all the work that we do, and intuitively rational. So the question that comes up is, why do we have to talk about this? It makes so much sense, and yet we are constantly (conflict is too strong a word) trying to convince our program colleagues that we need to do this, we need to do it well, and we need to do it thoroughly. Even after decades of doing this, nobody has really bought into it, so it's this unending struggle. As we move forward, or as PEPFAR moves forward, there are two key things. One is the linkage with GHI, which is very closely part and parcel of that same package; but it's also getting to countries, building these capacities in countries, which is key, and that's where the monumental task comes from. What I want to do is approach this in very simplistic terms that I think everyone has heard already, to the point where you're probably very tired of it; I know I am. We've described phase one as the emergency phase and phase two as the sustainability phase, and in fact it's been much more of a transition from one to two; it isn't as though there's been a dramatic change. It's been the initial rush to get programs in place, and then the slow transition toward sustainability that started in phase one but is accelerating in phase two. So, if we can, just for discussion purposes, leave it very simple and dichotomous: phase one, no one questions. Phenomenal achievements across the board, really unique in global health, not just HIV, documented by numbers. And this has been one of the other issues around PEPFAR that's been unique: a program that is so large, and we actually have some evidence for this. Simultaneously, in phase one we had targeted evaluations that evolved into what became public health evaluations, and these were really the sets of data that existed at headquarters, and nothing else. This is all we've got: data reported from the field at the highest-level indicators, and then some TEs and PHEs. And in the latter case, the TEs and PHEs, it's unclear, because our ability to actually track them has been somewhat compromised; we're not quite sure how many finished, how many just disappeared, or where the data even went. The other piece of this is that there's still a phenomenal amount of data in the countries, and what we're beginning to hear now is: we have all these data, what do we do with them, how do we analyze them, how do we use them? From the headquarters side there's only so much we can do, so it really is dependent on what is going on in the field; but for all intents and purposes, here at headquarters we don't know what these are, we don't have access to them, and we're not sure what to do with them. So these are some of the elements that come into play as we move into this second phase, which is to revisit our entire approach and try to develop a more comprehensive and integrated strategy toward data. In this second phase, what has been really fundamental to the spirit of PEPFAR at this point, and here it's linked very
closely to the GHI, is country ownership. Without going into the various definitions that have emerged, it's recognizing that we want to push greater responsibilities, stewardship, ownership to countries, particularly for decision making and for oversight, and that's going to require data, a lot of data, as well as the ability to think through it and use it. Sustainability is another element of this; capacity building, great need out there; and also moving more toward national systems versus what have been, in many cases, parallel systems in countries. Part of the emergency phase was not only to deliver services but to report back to headquarters, so the only way to do that quickly was to develop our own systems, which led to inevitable conflicts in countries. But we're moving more toward removing the parallel systems and merging, converging on single national systems, though not without problems. Linked to this is the Three Ones, the single M&E system. M&E writ large (I think of it as strategic information, which is very large): lots of demands and a lot of requirements in countries to actually implement this. And thinking in parallel to GHI, for our purposes but also for countries' own: we're no longer really looking at HIV/AIDS in the Three Ones context; it's looking at health in the Three Ones. How do we actually support the national evolution of M&E systems writ large to support their health systems, and then data use across all elements of the response? We'll talk a little bit more about this. Before jumping into this, there are just some fundamentals that I wanted to present to you that describe some of the complexity we face as we try to really institutionalize data use in countries: what data are generated and how, who the actual audiences for these data are, and then the transition from what has been a U.S.-based system for so long into a national-based system. Here's a chart that may be familiar to you, which came out in 2004 from Deborah Rugg et al.; you see it a lot in UNAIDS documents these days. Basically it's describing the logic model from output to process to outcome to impact evaluations: all projects should have some output monitoring, some should have some process monitoring, and so on down the line. Not every project needs to do impact evaluation; as Ruth described, there are some proof-of-concept pilot studies that would require it, but not every project needs to do it. So there are going to be various levels of data that need to be generated, understood, and appreciated for each type of project that's out there. And, parochially, thinking of PEPFAR, there are thousands upon thousands of projects in place, all of which require some level of monitoring, or should, and it's unclear exactly how much of that's going on or how well those data are actually being used. As we move down this trajectory, our understanding of what's going on diminishes pretty dramatically. Simultaneously, I've actually modified this chart to think in terms of the audiences that need to have these data, to be able to understand them, to analyze them, and to use them. If we look at the beginning, the data that are reported at the site: we need site managers, program managers, and such to actually have access to all the monitoring data, and some evaluation data, that's being generated, such that they can do an effective job of continuing to support, modifying, or terminating programs. The service site needs it.
Some of that gets reported to the district, some to the province, some to the national level. If we get up here, finally, some of those data then go up to USG or other donors; and then, if we flip back to the previous chart, when we're looking at different types of studies, there's also a global community that's interested, particularly in impact evaluations and such. So there are huge audiences; all have requirements, all have to have the ability to use data. And that is no small task when we're looking at each country individually, but also when we look broadly, just in PEPFAR, at 30, what, 34 countries. So if we look at the transition from USG to national: we've described some of it, and some of it has been going on for some time, with mixed success. In some countries what we have, in some program areas, is a virtual elimination of the parallel USG system; we're using country systems to actually abstract USG data. Not across the board for all programs, and certainly not in all countries, but that is one of the steps we need to be taking. Fundamental to this is instilling a culture of data use in these countries. And this means having the service site people, the district, the province, the national level actually understand data and understand the role it plays in their own decision making and their ability to manage programs. This exists in some places, but by far very few of them. It's a culture change in many respects; in our own country it's a culture change to actually get people to use information effectively. We're also required to expand the mechanisms to actually collect these data. I think Gina described this dramatically: we can't depend on technology all the time, so we have to have some merging of by-hand and technological solutions, converging the two in terms of the data that are collected but also the systems in place that are used, such that you don't replicate, you don't multiply, the burden on the people who are actually recording and reporting data. And then key to this, one of the key elements, is the human capacity in countries to do this. It's not just the people to collect data, analyze data, and use data; there are also the policy individuals at the national and provincial levels who will institutionalize the support and understanding of data use. That's the transition; those are some of the goals. And it is overlaid with these issues that also come into play around country ownership: the charts we saw on types of data, what we are actually talking about; the different audiences for the data; the capacity in country to generate, analyze, and use. But in terms of the audience, we also have to be accountable for the USG dollar that's going out there. There are requirements we have that need to be reported from countries, which is a constant struggle; anything more that we ask for is fought. Even what we currently have, people don't like, and they constantly fight it. As Ruth pointed out, headquarters is really one of the least popular groups of people around. And then there are the global interests, and this gets into the higher level; what I'm thinking of here is, again, impact evaluations and such. So, the future for PEPFAR data: we're currently in a moment of transition toward a much more comprehensive strategy toward data.
Strategic information more generally. We're also focusing more of our own efforts at headquarters, but also in support of country programs, to target country priorities: thinking less of ourselves and our own priorities, though not ignoring them entirely, and trying to put more emphasis on country national systems and national interests. Greater support, more comprehensive and also more strategic support, for capacity building. We've always done a lot of capacity building in PEPFAR, but one might ask, after what, seven years, going on eight, what do we have to show for it? Some serious accomplishments, but it could be much more organized, much more coherent, in terms of trying to bring the very disparate pieces together into a strategic approach, such that we actually have a plan in mind, milestones, and an exit strategy of some sort, which we don't currently. Moving away from USG systems toward national systems, and then the global information priorities: how do we integrate all these pieces into this new strategy? And in this last case I cite here the Scientific Advisory Board, which is newly formed, forming under the Federal Advisory Committee Act at the request of the Ambassador, to bring not only USG but external expertise to the table to help PEPFAR, and to link with GHI thinking: what are the global priorities, the global questions we need to be thinking about? Bringing those communities, that expertise, to bear, which we've not actually been very successful at doing before. So our comprehensive approach at this point will be under the umbrella of what we are describing and defining as implementation science. We're seeing this much more commonly in the literature, across USG and otherwise, and this represents PEPFAR's take on it: it's basically the study of methods to improve the uptake, implementation, and translation of research findings into routine and common practices, improving program effectiveness and optimizing efficiency. What we're hoping is that this framework is going to provide the structure (methodological rigor and diversity, as well as knowledge generation) to meet the needs of the program and the global community. This is linked to some of what Ruth was describing for GHI, so we're really looking to converge a lot of these pieces and provide a very coherent approach to what we're doing. And part of this convergence: I think we often jump to the conclusion that implementation science really pertains just to what is on the right side of this continuum, just impact evaluation and operations research, and in some of the literature that's where it leads. What we're trying to say is that this structure applies across the board, recognizing that there is no necessary strict divide between monitoring, evaluation, operations research, and impact evaluation. They should be clouds, or circles, that overlap. It's a continuum of data generation, data analysis, and data use, all of which needs to be linked, all of which needs to be brought under a single framework that supports all aspects of this. So rather than treat them as isolates, which we tend to do at this point, we try to bring all these pieces together. This is just another model that Deborah Rugg et al. have published in that same article, and we've seen it in various other UNAIDS reports as well since 2004; it's looking at all these levels and trying to link them all together into a single framework of implementation science.
Simultaneously, as we do that across the board, there are the capacity building efforts, which we're trying to bring into a coherent strategy covering all elements of this continuum. From basic monitoring up through impact evaluation, how do we most effectively help develop this capacity in country? We're not so much worried about USG staff or headquarters staff; this is really about capacity in countries — to learn this, to understand it, and to be able to support those activities in the countries where we work. This has particular implications for institutional structures. We can't have a lot of individuals at the district level and the site level with the talent and the ability to put data into play without some support at the highest level of government — a recognition that we really need data, we need high quality data, and we need the people to support all elements from simple recording through all levels of reporting, and, simultaneously, the systems to support data collection up through final reporting and use. One of our targets is really going to be the universities and public health schools, as a long term investment. So it's not going in and doing our own workshops and our own trainings, but working with university colleagues so that the capacity is built in country. We see examples in Ethiopia and Uganda, where this has been taking place for some years, and we're beginning to see cohorts developing in those countries that can actually support a lot of this work. And really what we want is data use at every level of the system. So, the challenges: it's monumental, and we are only small PEPFAR — which is a little bit of a contradiction — but we can only do so much. It's trying to think how best to do this: linking with GHI, linking with other donors, but also, I think, improving our linkages in country and getting countries, through the spirit of ownership, to take on some of the responsibilities — without overburdening them, but thinking about how we do this most effectively. It's not easy; it's a huge job. They are simultaneously trying to support their programs while bringing information and data into the management of those programs. Then there are USG versus national priorities. This is an ongoing struggle, and in some countries some of our indicators are slightly different — they're not compatible with national indicators. So this is a negotiation on our part: what do we need, what can we give up, and what can we modify for our own purposes? This also applies to research. There are interests that we may have that may not be shared in these national settings. How do we work through that? We have to give priority to the countries, but how do we actually meet our own needs as well? And then headquarters versus the field: unending tension. It'll never go away. Anything additional we ask for, including thinking through some of these strategies, is always met with a level of resistance — and understandably so, because these are huge programs, people are overworked, and it's one more request. Other things don't go away; it is not zero sum, it's always additive. So we have to think more closely about how we do this. But we also have opportunities. Strengthening our country partners is what we really need to be doing.
Not only in PEPFAR — we're doing it in GHI, and we're doing it in development across the board. We need to strengthen our country partners and we need to focus on this. It may come at a loss to us as the USG to a certain extent, but we have to be willing to balance some of that. We can also take this opportunity to be comprehensive in our approach, and implementation science will give us an avenue to that. The comprehensive capacity building strategy is linked to it, but we've got to think of it in the grand scheme, not in the small monitoring of a small program somewhere — that's important, and we have to be aware of it, but we've got to think of all of it at large scale. And then we have to take our country experiences — the USG teams but also our national partners — to inform what we do and how we do it. And we need to link this to the scientific advisory board, in this case, but really to the global community. There are lots of interests, lots of stakeholders; it's a negotiation across the board. How do we do that best? How do we keep our partners as the highest priority through all of this?

Great, thank you. I think those were really three great presentations, and thank you all very much for joining us. We have about 35 or 40 minutes for questions. Ruth, do you have to leave a little bit early? We were going to go until one. If so, we'll try to front-load the questions. Okay. So I might just start us off with one question as folks are getting their thoughts organized. First of all, Paul, you did a great job of finessing the step-child question, so congratulations on that. But secondly, I think you ended on a very true and sobering comment, which is that this is a huge challenge. I'm thinking back to some of the facts Gina shared in her presentation about the length of time it took the foundation in India to start to get this right — how hard it was to change the culture so that data was viewed in a different kind of way, and how the systems needed to be readjusted. And I'm thinking that we have two to two and a half years of GHI implementation left in this current phase. So I'm wondering if you could say a little bit about what your expectations are of what can be achieved over the next, say, two years: first, in terms of building the internal USG structures that you've described; second, in terms of building the types of partnerships and capacities in countries, where we know many of the countries that matter most don't even have functioning vital registration systems; and third, how we start to build a more coordinated approach with some of the other big donor partners out there — the Global Fund, the monies and attention being aligned around maternal and child health. How is the dialogue and the thought process you all are generating here going to be externalized to those other partners, and how are negotiations going to unfold toward a more common view? I know that's a big question. You can take whatever part of it you would like, but certainly some reflections would be quite helpful. And then Paul can make faces as well as speak, and then correct me.

Well, those were an awful lot of questions. I do think that when you spell out the grand vision, it is very daunting because of the time constraints.
It's a big lift no matter what, and we're under a lot of time pressure, but we are also under a lot of budget pressure. I think anybody who has their eyes open and looks at the FY11, 12, and 13 budgets is not doing so with a smile on his or her face; it doesn't look pretty. So we're definitely aware of those constraints. Let me highlight a few areas where I think we can look back in two and a half years and say that we've really achieved something. These are examples — I hope they come true, but I think there will be some wins — and then I'm going to come back to one of the challenges after I list them. One is that I think we can make real progress on health system strengthening indicators that have some meaning and some connection to the kinds of investments the U.S. government makes, and I think that would be a huge contribution GHI could make. GHI is so intentionally focused on strengthening health systems, but when you read what we're saying about it, it still seems a little vague, a little ethereal, and there's certainly a huge amount of skepticism, among those who are focused on particular kinds of health impacts, about the value added of making systemic investments. And on any given day, there is a short-term versus medium-term tradeoff. So I think we could make progress there, and I highlighted briefly the Health Services Readiness Index, which I think is one possible approach to measuring some aspects of the strength of health services in ways that are tied to the provision of U.S. support — for example, pharmaceutical management, human resources, capacity development, infrastructure, and commodities provision, among other elements. And that's something being done within an interagency context, but even more importantly with multilateral partners, particularly WHO. So that's one thing: better health systems strengthening indicators that are validated, that are useful and used, and that are accepted internationally. Second, within the learning agendas for the GHI plus countries, I'm hoping and expecting that some really important studies will be done that yield insights — for example, around which strategies are successful at fostering sustainability, both institutional and financial. And the third is perhaps more in the realm of communications than either monitoring or evaluation: I am hopeful that we can use the outcome data for the different GHI target areas in creative ways, visualizing them in ways that really communicate to policy makers — the correlation, for example, between our putting money into particular subnational areas and the indicators in those areas, hopefully, improving. So again, that's more of a communications than a technical focus, but I'm hoping we can do it. And then, just returning to the challenge before handing it over to Paul to correct whatever I've said wrong: one of the key challenges, and you flagged it, is that GHI does not actually have a defined structure for strategic information. Right now we're a bit of a ragtag bunch — an interagency working group. We meet very regularly and we have a good working relationship, but for none of us is this our day job. So clearly, if we're going to make information a core component of GHI in a meaningful way, we need things like staffing, and we need an identifiable home base in one or another of the agencies. And I see the thought bubbles over your heads.
This is hard — it's hard because of the money, and it's hard because of the dynamics among the agencies. So that's something we're currently working on together, trying to solve it. Paul?

I would never think to try to correct you. I'll take a slightly different bent on it, because I concur wholeheartedly with what Ruth has described. I'm thinking more about the longer term — what can we hope for, given the current budget issues we're facing? But even were those not in play, the USG has a history of relatively short-term interests, and its interest would fade anyway. And a lot of the issues we're describing require decades to achieve successfully. So how do we keep the interest going? How do we keep the resources going? And how do we actually be most supportive to our country partners? At this stage, one of those pieces would be working with our partners to come up with strategies in countries that are broadly based and very thoughtful about how they achieve the levels of capacity they need and the steps that need to be taken — a plan, a strategy of some sort, so they have a roadmap or scorecard to figure out what they need to do, whether it's the U.S. that's involved or another donor. That would at least allow a longer-term vision than is typically the case with a lot of our programs, which look out a year or two for short-term benefit. Linked to this, of course, are discussions with Congress, because what we generate on this kind of trajectory is different from the numbers we've generated and reported in the past. Our accomplishments measured by outputs will not look the same; measured by outcomes they will not look the same. So how do we educate our own government and our own people that this is not a bad thing? In fact it's a good thing, because what we're looking at are these other accomplishments — which brings us back to what Ruth is talking about. There are other sets of indicators and measures that we need to create and institute to track the accomplishments in these countries that aren't linked to program outputs. You asked about other donors — how do we bring them to play? Any way we can; we invite them to play. Ambassador Goosby has made it known that he sees the Global Fund as the future of a multilateral response to HIV globally. There's a lot of work that needs to be done in the Global Fund itself, and in the relationship with the Global Fund and the like, but I think that is true. It can't be just the USG, and it can't be just the Global Fund; we all have to think of it as a multilateral, global response to the epidemic.

No, I'm not going to comment; we'll come back to you. So let's take some questions from the floor. I'd just ask you please to give your name and association and to make your question as brief as you can, so that we can get in lots of folks. The lady right here in the second row, please — and we'll take questions three at a time.

I'm overwhelmed — there is sort of a wow factor in what you've described. And in many ways, in this last answer, you really answered several of the questions I was going to ask. You alluded to congressional relations, and you alluded to budget issues. I'd like to know whether you have thought out whether, in fact, once you get moving, you are going to shift your definition of what sustainability is or could be. Can you introduce yourself, please?
I'm Jesse Hackes, and I have a small consulting firm called World Reach Communications.

Thanks — and a second question over here, please.

Hi, I'm Steve Harvey with University Research. We're an implementing partner — I think that's the term people were using. So, thanks to all three panelists for some very compelling presentations. The focus of my question is Gina, although I want to ask for comments from Paul and Ruth as well. Gina, you described in the Avahan project an apparently quite successful approach to getting a health management information system to work. From the perspective of someone who spends a lot of time in the field, this is a tremendous challenge. Beyond the tools that you talked about, one of the principal challenges I've seen is motivating the people who need to collect the information to actually collect it, given a context of scarce resources, difficult logistics, transportation, and so on — and getting the supervisors out to the field to collect the data. And secondly, when you talk about instituting a culture of data use, of data for decision making: often what I see in the field is that we're pretty good at teaching people how to collect data, and pretty good at teaching people how to report data, but we're not very good at teaching people how to analyze data and use it for their own purposes, for their own programming. So I wonder if you would talk about how Avahan did this. A couple of particular questions. One: did you pay supervisors and your peer volunteers to participate in data collection? If so, how do you sustain that over time when the project ends? If not, what did you do that worked? What lessons would you suggest for others, beyond the tools that you developed, to make this work? And you talked about developing a package through Avahan — is that a package that would be available for others to think about implementing in other situations? I'll stop there, because this is getting quite long. Thank you.

Let's take a third question — the gentleman right here on the end of the row, please. We'll get you a mic.

Hi — I'm from the University of Maryland. I have very specific questions. I'd like to bring up two examples and see how we look at them from an evaluation standpoint, and how we can improve the situation in the future. One is social marketing projects. There has always been criticism that we don't have rigorous monitoring or evaluation in social marketing. Are these purely technical issues, and how can we improve the monitoring and evaluation tools and techniques to bring the social marketing effort into the mainstream of evaluating health and population impacts? The second issue — Ruth, I know you visited Bangladesh recently, and as you know, for the last 12 or 14 years Bangladesh's fertility decline has stalled. Some people will say it has reached a plateau. Whatever you call it, it is very unfortunate in some ways, because the U.S. government and other donors have spent millions, even billions, of dollars on this. Is the problem with the design of the programs, or the monitoring, or the policy, or host government initiatives? I'm sure there were some indications we could have gotten from the monitoring, before we reached this stage, that the impact would not be strong. How can we overcome this kind of situation in the future?

Thanks very much. Why don't we start with the first question about sustainability, which was: is the definition of sustainability going to change?
I'm glad I don't have to answer that question — but do we have a volunteer here who might want to? I'm paraphrasing, so you can correct me.

My response is twofold. One is that we really haven't defined it very well now, so we can change it any way we want — no one is going to know. But recognizing the intent of your question, I don't think the definition would really change. There is still a goal in our programmatic efforts to support countries to reach a level of sustainability, and that should always hold. There is an outcome that we think is valuable, and if the current political — or really budget — constraints constrain us a little more, that may shape how we go about doing it, but I still think the goal of sustainability should remain the same. I'm not sure if that answers your question, or if I understood your question.

Maybe just, hopefully, to complement that: one of the things we've been thinking a lot about is how work with countries on economic governance issues can be in service of health sector management and financing. When I think about sustainability, it does tend to focus in my head more on financial sustainability, and that's a problem that can only be solved by getting out of the health sector — that is, getting into issues of macroeconomic stabilization, trade policy, the factors that lead to either fast or slow economic growth, and taxation. So I don't know that the definition would change, but I have the hope and expectation that our strategies for achieving sustainability of health service delivery will rise above sector-specific actions, which I think are manifestly quite limited in their impact.

I have a lot to say about this, but I'll be brief, and we can talk afterward. I would say that India also had a data reporting culture, but as I said, as far as I could tell the data was essentially used once every five years in a national planning strategy. So the motivation was really using data in the ecosystem that I described, where there were frequent reviews, data was actually used, and indicators based on a phased scale-up were the focus. We didn't require people to have tremendous analytical ability with the data; they just looked at the data relevant to their goal, at their level, at the time, and skills increased as you ascended the ladder. Yes, supervisors were paid, and peer outreach workers were paid — to do the job, not just to collect the data, so that was part of it. But I would say, at least for the peer educators, it was really the empowerment of being able to make decisions based on the data: they were responsible, and they had the authority to make decisions to get to coverage or whatever the goal was. And yes, the package is available — I can tell you about it; it's been published, and it should be on the website shortly.
And our last question, about social marketing approaches to M&E and fertility issues in Bangladesh — would anybody like to take that on?

I'll tackle a little piece of that, although it's clearly a big issue. When we talk about evaluation of anything — in this case evaluation of a social marketing project — you have to define more narrowly what the evaluation question is and what kinds of decisions you'd want the evaluation to inform. Is the question what the most efficient way to do social marketing is? In that case you'd structure an evaluation to, perhaps, look across different social marketing strategies and compare unit costs or some other efficiency measures. A contrasting question might be whether social marketing is more or less effective than achieving the same amount of couple-years of protection, or the same contraceptive prevalence rate, through other means of distributing commodities — and that would suggest a completely different kind of evaluation. So I'm not punting; I'm forcing a little more precision around what the question is. And the only way to figure out what the question is is to think ahead, because an evaluation is going to take two years, minimum, from beginning to final conclusions. So you have to think ahead: what are the decisions for which I need this information? If there is an absolute commitment on the part of the government and the donors that social marketing programs will continue, then clearly the question needs to be around operational and efficiency issues. If the question is whether we should drop this thing or keep it going, then the question has to be a comparison between what happens if you have it in place versus if you don't. I'm belaboring the point, but believe me, this is a point that needs to be belabored — it's the only way we're going to get evaluation reports that anybody wants to read. And then, on the issue of the stall, I have to admit I did not understand your question; if you could ask it again, maybe I'll understand it.

The situation we are facing today — could we have seen it five years before, ten years before? Was it something that we didn't foresee through our monitoring and evaluation? Is it a problem with the science, or is it a problem with the mechanisms, or the politics?

I'll just endorse the core idea behind what you're saying, which is choosing means of monitoring — the kinds of indicators that are always proxies for something we care about — in ways that do provide a kind of early warning. But the specific case of Bangladesh, I suspect you know far better than I do.

So let's take a second set of three questions, please — over in the front row here on the left.

So, it's almost 2011, and we've heard about a set of measures for the Global Health Initiative that are not fully defined yet, which is a little worrisome. What I'm wondering is: what could be done in the next six months or so to better align incentives and move all these agencies toward these common monitoring and evaluation questions? Is there something we could think about doing in the short term, within our existing limitations, that would help with this?

Thanks — and right behind Amanda, please, Shannon.
Hi, I'm Shannon Hader from Futures Group. First of all, it's been so lovely and exciting to hear "data use, data use, data demand, data use" across the panel — so thank you all, everything from the core example of Avahan to the philosophical underpinnings of the other presentations. My question comes from the data-use-junkie route. What's really nice, compared to even maybe ten years ago, is that I think a lot of our front-line staff and implementing partners no longer see basic M&E as dessert; it's been hit home that it's part of core accountability and it's tied to your funding. But data demand and data use — I think we're still very far away from that. I think Avahan, and experiences through the health policy program and MEASURE Evaluation, show that creating a culture of data use isn't magic; it's hard work. There are strategic, programmatic, evidence-based ways to accomplish it, but it takes those tools, and bodies on the ground, to make it happen. Oftentimes the transformative nature of data use is felt the most at the front line — at the implementer and site level, and the district level — and then in the quality of the decisions being made at higher levels: factors that don't come out in the routine output indicators reported up to our headquarters levels. What we've been seeing a lot is that if front-line decision makers — maybe the contracting officer or the COTR — have been involved in a data-use experience themselves, seeing is believing: they've decided, wow, this is transformative and a good investment. But otherwise we see a lot of disconnect between headquarters and the field at the purse-string decision-making levels: "oh, that's really nice, but we can't afford it — cut it out of the budget, cut that element; we'll buy into everything else, but not that." And we've been brainstorming how to mobilize core funds to add on to country activities — see one, do one, teach one — to show that it works. So my question is: what have you been thinking about, where the rubber hits the road, from concept to mechanism? How do we institutionalize this in our funding decisions such that it does hit the road in countries, more routinely, as part of the projects we're doing?

Thanks, Shannon — and let's take one other question, please, in this row right here; the gentleman, please.

Hello, my name is Bob Wirebuck, and I'm from the mHealth Alliance. We have a take on all of this that's very much influenced by mobile health, and I think we're at an inflection point right now, with the number of cell phones out there and with cloud computing, where we can really get to not having a dichotomy between monitoring and evaluation but having a continuum: you're collecting patient-centric data, and then that data is turned into data that can be used for decision making. Because right now you have this monitoring and evaluation dichotomy — you have people who have to fill out forms at the end of the day after their real work is done, and the quality of the data has been found lacking in many instances. So is there any plan to try to move forward on leveraging some of these new technologies to help solve this problem?

Great, thanks. So let's come to Amanda's question first, if someone would like to take it.

Yeah, maybe — I don't know if Paul wants to, but I can start. So, first of all, it's just not that dire.
Amanda has my old job, so she has a much easier job than I do now. First of all, all of the programmatic monitoring and evaluation where there are established indicators — that's all in place, chugging along, with the new generation indicators for PEPFAR. What's still in formation is this question of what the value added of the GHI approach is. That's not trivial — it's important, and it's not fully baked — so there's reason to pay attention to it, but it's not as if we won't know anything or won't be able to report anything. That's point one. Point two is that part of what's been going on is that the GHI plus countries in particular — which are, believe it or not, ahead of the others in defining what this new GHI experience is at the country level — have just submitted their draft strategies. So there's an iterative process in figuring out the right measures, because you have to know what people are actually going to be doing and where the investment is going to be made. Your question seemed to imply that this has to do with aligning incentives across the agencies; I actually don't feel that's the issue. I feel the issue is that we've had a lot of uncertainty about the resources available, and a lot of new processes being built around developing strategies and defining what GHI is at the country level. And, no secret to anybody, we haven't had the kind of resources or leadership structure that PEPFAR benefited from at the beginning. But I do not actually think the agencies are all fighting like cats and dogs about monitoring and evaluation. Did you want to add? Would you like to agree with me? Great.

And then Shannon's question, which was really about how you institutionalize the value of data collection, analysis, and use — I don't know if someone would like to take that. It's quite a short summary, but I think that's the gist.

An excellent question from someone who is so familiar with it. From my perspective, at least, it's a requirement — it's sort of mandated — but whether it's actually done is a very different issue. I think some of that is related to the disconnect with headquarters: we don't really know what's going on in many respects, but country teams also really don't have the time to go out and see what's going on at the site level, or even at some aggregate of sites. I can't think of the appropriate superlative, but as you know very well, it's a huge, huge burden to do this. That's one piece. The other piece is that disconnect — the marginalization of strategic information across the board. The program managers, even in the USG at headquarters, don't think of information as really so fundamental, as something that needs to be built in at the beginning. It's an afterthought — it's "what happened, and how do we figure it out?" And I know we've all been in these conversations time and time again, so I would ask you: what's the secret? How do we actually go about institutionalizing this across the board? Some of it's policy, which would be almost easy to do; it's the ability to enforce it, I think, where we run into issues.
I think the last question was on mHealth and the power of mobile technology to bridge the monitoring and evaluation divide and make it more of a continuum of interpretation and data use.

Let me take this, and others can add. We had a meeting a couple of months ago for USG folks who are engaged in the Global Health Initiative, including the GHI plus country team representatives, to learn about and discuss the applicability of several different kinds of innovative approaches to evaluation. That included evaluation platforms presented by folks from Johns Hopkins, mobile telephony, geographic information systems and geospatial analysis, and a couple of other topics, and I think it spurred some thinking on the part of the GHI country teams. So there's enthusiasm, and I think a lot of potential. That said, I personally do not think that the mass collection of data — and I'm caricaturing what you were saying — that just having access to a huge amount of data, particularly patient-centered data, that could potentially be used to answer some hypothetical evaluation questions, is the way to go. I think you really need to start with a question and construct a study design that corresponds to what you want to learn. The analogy in my head is the enthusiasm there was in the U.S. about outcomes research, where you had masses of information from, say, the Medicaid and Medicare databases, and from that you were supposed to be able to learn enormous amounts about risk factors and behavior and all sorts of other things. There are plenty of people who do that work, but in the end I think it has proven far less satisfactory than intentionally designed evaluation and research studies that start with a question and collect, often, fresh data to answer it. So I don't want to throw my own personal cold water on this, but in my head it's definitely not going to be enough to use even vastly improved monitoring data to answer evaluation questions.

I'd like to take this from the perspective of monitoring, not evaluation, and I agree with Ruth. I think the advantage of mobile phone technology is that it bypasses paper formats and paper being lost, but you still don't address the issue of data use. If people don't use data on paper, I'm not sure they're going to use it electronically, where they don't even have it in front of them anymore. So I really think it's not so much the data collection, although that's an issue, as the data use — and this doesn't necessarily help with that.

Well, I think there are a number of folks here who are data junkies, and I think we could probably stay quite a bit longer and talk, but unfortunately I don't think we're able to do that right now, although maybe we'll try to have a second session at some point in the future. I do want to ask our panelists if they have any closing thoughts they want to share — and maybe they don't, so actually I will put one closing question to you, Ruth. You mentioned when you started that you're going to be doing some consultations on the presentation — something later on today, I think you said, at InterAction. Do you want to say any more about that process and how people could engage with it?
Sure. There will be a number of presentations around town about the new evaluation policy, in somewhat more depth than I went into today, to get feedback — to see how horrified people look — and to otherwise gather information that will inform the final product. The next available opportunity is at 3:30 today at InterAction, and we'll have others, as I said, around town. If you invite us, we will come talk to you and whomever you can haul into your building.

So, we will have the PowerPoints, with the presenters' permission, on our website later today. Please join me in thanking them for really just a terrific discussion.