My name is Aleksa Brang. I am a scientist in the social sciences, working in particular with IPMS, the Improving Productivity and Market Success of Ethiopian Farmers Project. The training was about results-based monitoring and evaluation. Basically, the capacity for monitoring and evaluation in Ethiopia is really not there; it is very, very weak and very low. The government has now started implementing a grand project called the Agricultural Growth Program (AGP) in four of the major highland regions of Ethiopia. Since IPMS itself had an in-built monitoring and evaluation process, out of which a manual on results-based monitoring and evaluation was developed, we were happy to volunteer to help build that capacity. This training program is a result of that. The trainees came from the four regions: Amhara, Tigray, Oromia and the South. They are all AGP personnel and also members of the AGP Technical Committee.

First of all, we tried to introduce what results-based monitoring and evaluation is: why it is needed in the first place, and how it differs from traditional implementation-based monitoring and evaluation. We then went on to discuss the foundations of conducting results-based monitoring and evaluation, which means basically the preparation of the logic model, or performance chain, of the project, which essentially means mapping out the relationship between resources, activities, outcomes and impact. Then, building on the logic model, we discussed how to develop the performance measurement framework, which essentially serves as the basis for data collection to do the monitoring and evaluation proper.
We then discussed participatory monitoring and evaluation, because these days the need for stakeholder involvement in monitoring and evaluation has become very important and significant. We discussed why participatory monitoring and evaluation is needed and, of course, what the different techniques for doing it could be. From there, we went on to discuss how indicators can be selected and then specified so that they can be better measured in the field. We then moved on to how data collection can be organized and implemented and, of course, how data analysis can be done.

Based on this, we discussed the different approaches to developing a monitoring and evaluation report and how it can be disseminated. There are different methods of disseminating a monitoring and evaluation report, so we discussed these, and during this session we emphasized the critical need for reporting results on time, because monitoring and evaluation results that are not reported on time are more or less useless. From there, we critically discussed the need to institutionalize and sustain a monitoring and evaluation unit, or activity, within an organization. In this respect, we discussed why the institutionalization of monitoring and evaluation is so far not that strong in many developing countries, including Ethiopia, and what can be done to better institutionalize and sustain monitoring and evaluation units within an organization and in the different bureaus of the regions.

On my overall assessment: one important aspect of this training was that the lecture component was more or less 30% of the time. The rest was used for practical sessions, practical exercises, and reporting on the practical results.
That way, I think, as we can also see from the evaluation the trainees made, the training really helped the participants to better internalize and appreciate the practical application of results-based monitoring and evaluation. So lecture time was about 30% of the whole training period, 40% of the time was for the trainees to do their practical exercises, and the remaining 30% was for the groups to come back and report to the whole group.

As for what is expected from the trainees: first of all, this was a TOT (training of trainers), and therefore after the training they are supposed to go back to their respective regions and train the rest of the staff there who are involved in monitoring and evaluation, not only at the regional level but also at the zone level, from there down to the woreda level, and from there to the development agents. That's one. The second thing is that they are supposed to lead the overall monitoring and evaluation activity of the AGP at the regional level, which means they may, for instance, be involved in developing indicators; they could be involved in leading and guiding data collection activities; and they will definitely be responsible for generating reports to be submitted to the regional officials, to the Ministry of Agriculture, or to donors. So you can classify their responsibilities into basically two: one, in the spirit of the TOT, they have to go down and train the people who are necessary for the work; and two, they should now be able to lead, guide, and be responsible for implementing the monitoring and evaluation, and also for writing and submitting reports on time.