All right, I think we will get started here so that we can stay on time with this session. Hello and welcome everyone to this joint skill-sharing session on the guidebook for monitoring and evaluating ecosystem-based adaptation. My name is Luise Richter and I'm working for the BMU IKI-funded global project on mainstreaming EBA, which is implemented by GIZ. I will be your facilitator for this session, but as I just mentioned, this is a joint session by GIZ, the UN Environment Programme World Conservation Monitoring Centre (UNEP-WCMC) and the Friends of EBA (FEBA) network. It's great to have you here, and we look forward to this upcoming hour and to having interesting inputs and lively discussions with you.

Before we dive into the topic I would like to make some technical remarks. First of all, the session will be recorded and published on the CBA event web page afterwards. So for those of you who would not like to be recorded, I would kindly ask you to leave the session now; it will be available for you to watch afterwards. I would also kindly ask all of you to mute yourselves and turn your videos off during the inputs, except of course if you're a speaker or a presenter. That does not mean that you cannot actively contribute to the session: we have a chat box and you can always post your questions and comments in there, and we will get back to these after the inputs. And since this is a skill-sharing session, we would also really like to hear from you and learn from your experiences. For this we have prepared two little surveys, and we will make room for discussion during this session, and we invite you to participate in both. We will post the links to the surveys, and we look forward to your contributions. If you have any technical problems, please just write in the chat and we will do our best to help you out.

So let's return to what the session is actually about, which is the new guidebook for monitoring and evaluating ecosystem-based adaptation. Perfect, the link has just been posted, just in case you haven't had a chance to look at the guidebook yet. The guidebook has been published by the GIZ-implemented global project on mainstreaming EBA, but it has been developed in collaboration with UNEP-WCMC as part of a FEBA working group on monitoring and evaluation. And today we have Emily Goodwin with us, who is a programme officer at IUCN and central to the FEBA secretariat. I would kindly ask Emily to briefly tell us what FEBA is and which role working groups play within FEBA.

Yes, thank you for the introduction, Luise. I'll stay very quick, being conscious of time, but for those of you who are not familiar: FEBA is a global collaborative network of more than 80 different agencies and organizations involved in ecosystem-based adaptation, working jointly to share experiences and knowledge to improve the implementation of EBA activities on the ground, and through this collaboration to have a stronger and more strategic learning and policy influence on EBA. The term EBA was introduced almost 12 years ago, and EBA has really paved the way for the uptake of this idea of working with nature as a cornerstone of adaptation strategies to address climate risks, the biodiversity crisis and human well-being all together. FEBA works to collaboratively synthesize all of this different stakeholder knowledge on EBA and disseminate it by convening the global EBA community around different events, workshops and expert working groups.
And the M&E guidebook was produced in collaboration with an expert working group on monitoring and evaluation, including many different FEBA members. We're proud that such a diversity of FEBA members have worked together with GIZ and UNEP-WCMC on contributing to this guidebook. And we look forward to continuing this work with the FEBA network to mainstream the guidance and the guidebook, to support the development of effective monitoring and evaluation across our members' EBA projects around the world. At this big global level, this approach of consistently evaluating EBA measures allows us to better scale up, with concrete and tangible evidence, the value addition of working with nature for climate adaptation around the world.

So now, to get us started, we have a couple of quick questions via Mentimeter, which you may all be familiar with. Let me just share the link quickly in the chat. If everyone could use that link to get to Mentimeter; there's also a code, if you go on your phone to www.menti.com and type in that code, so there are two different options to get there. I'm just going to share my screen so we can see the results come in live. If anyone is having trouble accessing Menti, just let us know and we can help you out. I'll leave it open for just a couple of seconds; we just wanted to get the chance to know who is joining us today on this call. Looks like some people are still up very late over in Asia. The Asia conference had a lot of early risers and people staying late as well.

I'm just going to click through to the next question, just so we can get a feel of the room: how would you rank your experience in designing monitoring and evaluation? Have you done this in projects that you've worked on, do you know much about this, as we dive into the work of the guidebook? There are no wrong answers. All right, so it looks like a lot of people in the room rank themselves as having a little bit of experience doing this. So it will be great to take those experiences that you've had, apply them to the guidebook and see where the guidebook can help the work that is ongoing.

And then, as a last question, I'm just curious. Well, now it looks like this question has not worked out perfectly in Mentimeter, so maybe we'll leave it for now. We were just curious whether people have been engaged in EBA projects specifically before, and whether monitoring and evaluation measures were applied in those different EBA projects. But I will skip this question for now, because it looks like something has not gone through all the way. I'll hand it back over to Luise.

You can mention that option one is yes, option two is no, and we can just click after you mention it. Yeah, that is true, that is a good way to do it. So let's say one is yes and two is no, and three is nothing. Can you repeat the options, Emily, and maybe share your screen again? I don't know if you're familiar with this, but we'll say option one is yes, you're familiar with EBA, you've worked on EBA projects before, and two is no, this is totally new to you. Can you share the screen again, so we can see the results? You know, I was watching the interesting results and keeping them from you guys. So it looks like we have a pretty interesting distribution actually; it seems like a lot of you have worked on EBA, but it's also new to some of us in the room. So I will hand it back over to Luise and we will dive into the guidebook from here. Yes, perfect.
Thank you very much, Emily. So let's now move on and have a look at the agenda, because we have some inputs prepared for you. In a first step, Sylvia Wicander from UNEP-WCMC will give us an overview of the guidebook and share how it's structured and what you can find in there. Afterwards I will dive into the development of indicators in an EBA project in Vietnam, where we also used a theory of change. After that we have another survey; this is a skill-sharing session, so we really want to hear from you as well, and we hope that the survey will lead into an interesting discussion with you. And of course we also want to answer your questions. With this, I would like to hand over to Sylvia; the floor is yours.

Thanks, Luise, let's just get the presentation back. Right, I hope you can all see that now. Yep. Great. Yeah, so as Luise has already introduced, this skill share is about M&E for EBA, and my part is to give you an overview of the guidebook. These were the partners involved in collaborating on, producing and publishing the guidebook, as Luise has mentioned. The goal for right now is for me to just give you a brief overview of the guidebook to get you acquainted with its contents, and hopefully to entice you to take a closer look. I think we've posted the link in the chat, so feel free to browse.

Before I get into the guidebook, I just wanted to give you a little bit of the rationale as to why we even bothered developing a guidebook for monitoring and evaluating EBA. As I'm sure this audience is very well aware, we're at a point where we absolutely have no option but to adapt to negative climate change impacts, and EBA is an important approach for doing so. But unfortunately there are quite a few uncertainties involved in EBA due to its social-ecological complexities and interlinkages. Fear not, because there are ways of dealing with uncertainty, including using M&E: monitoring and evaluation basically provides the foundation for adaptive management, which means it's key to managing uncertainties, including those involved in EBA. And so we need M&E for understanding whether or not, as well as why, an intervention is achieving its objectives. M&E for adaptation more broadly speaking is quite tricky, and that's even more the case for EBA, which, as I already mentioned, operates in a social-ecological system. And meaningful M&E, as in something that will actually tell you about results and outcomes of EBA, therefore often gets neglected because it's not entirely straightforward. So that's, in short, why we developed the guidebook, which is here to help address some of those problems; and we had received a lot of requests via different channels for support in this area, because it is both critical to implementing safe and effective EBA and challenging.

So what does the guidebook do? Well, it provides an overview of the process for designing and implementing effective M&E for EBA interventions on the ground. It goes through the intricacies and challenges associated with monitoring and evaluating EBA. And it really places emphasis on evaluating outcomes and ideally impacts, rather than what often happens in projects, which is looking only at inputs and processes in a monitoring system. The guidebook is primarily aimed at practitioners and planners who design and implement EBA on the ground.
It can obviously be used by others, including in research, and generally it will help those who want to assess and understand results of EBA interventions. We recommend using this guidebook ideally in the early stages of designing an EBA intervention, but it can certainly also be of use once a project or an intervention is already underway, for example to improve the original logical framework that was developed or any ongoing M&E processes; the guidebook can also help design a midterm review as well as a terminal evaluation of a project, for example.

The guidebook is structured as follows. It starts with a background section that introduces key terms and concepts for understanding both ecosystem-based adaptation and monitoring and evaluation, so that it can be accessible to people who have varying degrees of experience in both of these areas. It goes into a bit of detail on the complexities and challenges associated with M&E for EBA, as well as adaptation more broadly, because understanding these complexities is indeed key to being able to address them, or at least manage them. And then it is basically structured around four steps, which I'll go into in more detail in a minute. Each section also includes a little summary at the start so that you can quickly see what the section is about and navigate through the guidebook; we're trying to make this user-friendly. Every section also has an additional useful resources section, which often includes complementary information sources, such as detailed guidance on certain methodologies, basically more detailed resources than what is outlined in the M&E guidebook itself. We've also included a lot of case study boxes in the guidebook to give you an idea of some of these best practice components in practice from around the world. And there are also annexes with a variety of practical examples and other information to help you implement the content of the guidebook.

So, taking a slightly closer look at the four steps, and while I go into them, please keep in mind that there is no one-size-fits-all approach for monitoring and evaluating EBA. It is, as you'll probably all be aware, very context specific, but these four steps are broad ones that any project team of an EBA intervention can follow. They will not go into the detail of what you need to do when you are operating in a mangrove ecosystem, for example, but the steps are written in a way that they can be picked up by someone working in coastal zones or drylands or wherever you're operating. So project teams can use these as a basis for designing and implementing robust M&E systems anywhere.

Step one is all about developing a results framework. It discusses the need for setting clear objectives and mapping the pathway for achieving these objectives, and it explains how results frameworks can assist you in doing so. It tells you a little bit about the different types of results frameworks, but it does recommend using a theory of change approach, which is one type of results framework that is more and more recognized in the wider community, and which is the most suitable results framework for adaptation projects, including EBA, because they're long term and complex.
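Purely as an illustration of what such a results framework captures, here is a minimal sketch of our own (not from the guidebook, and all example content is hypothetical): a theory-of-change results chain can be thought of as a small directed structure in which activities lead to outputs, outputs to outcomes, outcomes to impacts, and each link rests on explicit assumptions.

```python
# A minimal sketch of a theory-of-change results chain, assuming a simple
# directed structure: activities lead to outputs, outputs to outcomes,
# outcomes to impacts, and each link rests on explicit assumptions.
# All example content is hypothetical, not taken from the guidebook.
from dataclasses import dataclass, field

@dataclass
class Node:
    level: str                 # "activity", "output", "outcome" or "impact"
    description: str
    leads_to: list = field(default_factory=list)     # downstream nodes
    assumptions: list = field(default_factory=list)  # conditions for the link to hold

impact = Node("impact", "Coastal communities are more resilient to storm surges")
outcome = Node("outcome", "Restored mangrove belt buffers wave energy",
               leads_to=[impact],
               assumptions=["Ecosystem services translate into reduced damage"])
output = Node("output", "50 ha of mangroves replanted and maintained",
              leads_to=[outcome],
              assumptions=["Seedling survival is high enough for canopy closure"])
activity = Node("activity", "Community replanting campaigns",
                leads_to=[output],
                assumptions=["Communities stay engaged beyond the first season"])

def trace(node, depth=0):
    """Walk the chain from activity to impact, printing each link's assumptions."""
    print("  " * depth + f"[{node.level}] {node.description}")
    for a in node.assumptions:
        print("  " * depth + f"  assumes: {a}")
    for nxt in node.leads_to:
        trace(nxt, depth + 1)

trace(activity)
```

The point of a structure like this is simply that every arrow in the diagram is testable: if an assumption fails, the chain tells you where adaptive management needs to step in.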
Step one then expands on when and broadly how to use this theory of change approach, some potential limitations to keep in mind, and also what it can look like. Step two goes into defining indicators, baselines and targets. It introduces the different types of indicators available for M&E and highlights the importance of focusing on outcomes and impacts, in the form of outcome and impact indicators, in addition to your typical process and input indicators, because the former will really allow you to understand effectiveness. It provides some general guidance for selecting indicators. Indicators will be highly context specific, so there's no go-to list of indicators, but the guidebook does tell you how you can go about selecting, identifying or developing them yourself. And it highlights the importance of setting a baseline and identifying targets, so that you can actually measure the indicators against something.

Step three really goes into operationalizing the M&E system: how can you put everything that you've been planning in steps one and two into practice? Some of those elements are choosing the right evaluation design, important considerations about data types, and elements of effective and efficient data collection, entry, analysis and interpretation, because those are all key to actually understanding what you are collecting in your M&E system. And then step four goes into using and communicating the results. We don't want a situation where you've collected a lot of data and you don't do anything with it. So it discusses the need to use M&E results both for, let's call it, an internal process, using them to inform adaptive management (how can you improve the intervention that's being implemented?), as well as for communicating to external audiences, and why it's important to do so for different audiences like donors, communities, policymakers, researchers or people in the wider adaptation community.

So that's just a whistle-stop tour of what you'll find in the guidebook, which, like I said, is in the link, or you can also go to adaptationcommunity.net under resources and find it, as well as lots of other useful tools and methods. That's all for me for now, but thanks for your attention. I will stop sharing and hand over to Luise, who will tell you a little bit more about an example from Vietnam.

Thanks a lot, Sylvia. All right, let me share my screen. Okay, can you give me a quick sign if this is working? Yes. Perfect. Thanks. Yes. So I will talk a little bit about the development of indicators for a monitoring and evaluation system in a project in Vietnam that I worked in. Just to give you a little bit of context: this was a project called Strategic Mainstreaming of Ecosystem-based Adaptation in Vietnam, and you can find more information about the project overall on the PANORAMA platform; we've included a link in this presentation as well, where multiple examples from this project are included. It was a project implemented between 2014 and 2019 on behalf of the Ministry of Natural Resources and Environment (MONRE) in Vietnam and the German BMU with its International Climate Initiative (IKI). On the implementation level it was carried out by the Institute of Strategy and Policy on Natural Resources and Environment (ISPONRE) and GIZ; so it was commissioned by MONRE and BMU and then implemented by ISPONRE and GIZ.
The project supported the development of effective EBA approaches in Vietnam through, for instance, the integration of EBA into national policy frameworks and through different awareness-raising activities for stakeholders at national and provincial levels. But we also did capacity development activities for local communities, because we had two project sites, which you can see here in the little map that I included. In these pilot areas we implemented concrete EBA measures, from forest enrichment, including livelihood activities, to capacity building. M&E was really a main focus of this project, and in particular developing an M&E system with indicators for the project.

When we got started on developing indicators, we based a lot of our approach on recommendations given in a concept note that had been written for the project, and in this concept note there was a strong connection to the GIZ guidebook Adaptation Made to Measure. This guidebook suggests developing an M&E system based on five steps, as you can see here on the right side. What we noticed after looking at this for a while, and doing all sorts of research into how we'd actually go about it, was that if you have good baseline data in place from the beginning of your project, which is core to developing a good M&E system, it's sufficient to focus on steps three to five: developing a results framework, defining indicators and setting a baseline, and then operationalizing the monitoring system that you have developed. So this is what we focused on, because we had our baseline in place. Instead of developing a results framework as described there, we decided to go for a theory of change model, because of the research that we'd been doing and some of the experiences that some of my colleagues had. And we also decided to use another five-step mechanism for actually identifying indicators, which I think is really helpful for breaking the task down and developing indicators step by step.

What we did, and this is just a rough overview of a draft theory of change that we had, was to make a rather simple theory of change, developed in close collaboration with my colleagues who worked in the provinces and with the communities in the pilot sites, and through a lot of talks and discussions during field visits, which stretched out over roughly two weeks, to discuss how and what we should measure. We had set up an overall objective for our theory of change, which you can see in orange at the top. This is just the draft for one of the pilot sites; we did this for both pilot sites. So we had this overall objective for the theory of change, based on the objectives of the project. What we did then was go down to the activity level and actually look at what had been done and what we were doing in the project at the moment. From this we worked our way up: from identifying activities we developed outputs, and then outcomes, results and impacts. And as you can see, we always tried to see where all these different steps were interlinked.
What was really important for us, and also extremely helpful for me as a big learning experience, was to include assumptions in this process, because it really showed us how and where things might go differently from what we were expecting, and how and why we might need to change. I think it really is the heart of a theory of change, showing you how and why things might go differently and that you will need to stay flexible with your theory of change, which I think is a great advantage of this model. Overall, the theory of change was very helpful for me to understand where we were coming from and where we wanted to go in this process, and, as I said, to see how and why we might need to do things differently.

But it was particularly helpful for identifying indicators, because what we did with it was essentially to develop indicators for all the different levels, so for the output, outcome and result levels, and we already had our themes for these ready. I will actually get back to this slide in a moment. For the identification of indicators, we then applied the five steps that I already mentioned, the first of which is defining a subject. Just going back to the slide for this one: defining a subject essentially meant going into the output level, looking at one of these boxes and saying, okay, this is our subject, and now that we have a subject, we can move on and develop an indicator for this specific subject. We did that for all the different boxes on all the different levels, and I always did that development and checking-in together with my colleagues in Vietnam in the provinces, while also being in close contact with the communities.

So, once you have your subjects defined for all these different levels, and the time frames connected to them, we then moved on to specifying the quantity of change and the quality of change. We then defined a time horizon. And then, if there is any specific disaggregation that you want to use (it might be gender, it could be something else as well), you can add that too, and then you essentially combine all these five steps into one. This is just an example of an outcome indicator, as you can see up here, with a rather short time frame. So, for example, what you can get out of this is: over two years, 50% of the households in the pilot village and 30% of the population in the additionally selected communities in Quang Binh province, particularly women, Youth Union and Farmer Union members, have gained knowledge and experience on climate change, have seen its implications in practice and are sharing their knowledge with others. This is of course very extensive, but at the same time it includes a lot of the different factors that you do want to have in an indicator. So breaking it down into different steps and then combining them into one was really helpful for us to actually understand what we needed in an indicator. And as mentioned, we repeated this procedure for all the different themes, which were linked to awareness-raising activities, training, income-generating activities like plantation activities and so on, and for the different time frames that we had developed here, so the output, outcome and impact levels.
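To make the five-step composition concrete, here is a small sketch of our own (not a tool from the project; the example values are illustrative, loosely modelled on the indicator read out above) showing how the five components combine into a single indicator statement:

```python
# A sketch of the five-step indicator composition: (1) define a subject,
# (2) specify the quantity of change, (3) the quality of change,
# (4) a time horizon, (5) an optional disaggregation, then combine into one.
# The example values are illustrative, not the project's actual wording.
def compose_indicator(subject, quantity, quality, time_horizon, disaggregation=""):
    """Assemble the five components into a single indicator sentence."""
    disagg = f", {disaggregation}," if disaggregation else ""
    return f"{time_horizon}, {quantity} of {subject}{disagg} {quality}."

print(compose_indicator(
    subject="households in the pilot communities",
    quantity="50%",
    quality=("have gained knowledge and experience on climate change "
             "and are sharing it with others"),
    time_horizon="Over two years",
    disaggregation="particularly women and youth union members",
))
# -> Over two years, 50% of households in the pilot communities, particularly
#    women and youth union members, have gained knowledge and experience on
#    climate change and are sharing it with others.
```

Breaking the indicator into named parts like this also makes the later steps easier: the baseline and target attach naturally to the quantity, and the time horizon tells you when to measure.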
So, some things about this worked out well and others were definitely more challenging. I'm going to start with what worked well from our perspective. We developed this in a very participatory way, which I think was really core for setting the indicators right: actually going to the provinces and being in the communities. We did all the talks in Vietnamese, of course, and then my colleagues translated, which wasn't always easy, but at the same time it was very nice to have this opportunity to actually hear what people thought would be necessary to measure over a longer period of time. So this was core. Then, the partner staff and the communities received training on how to operationalize the M&E system. And as many of you will probably know, when talking about EBA we need to look at environmental, economic and social aspects at the same time. So including all these different categories in our development of indicators, and actually coming up with indicators for these different aspects, was also really important, and I think it is core for making a good indicator system when you want to do M&E for EBA. Then, as mentioned before, using a theory of change was very helpful because it was very hands-on, and I think for all of us in the developing team it actually made us understand the processes much better ourselves. And using this five-step model for developing indicators, so breaking it down into five smaller steps and then putting it back together, also worked very well.

And then of course some things were more challenging. One was, for instance, actually measuring knowledge and capacity. As you saw in the one indicator I read out, EBA is often linked to changes in people's awareness and understanding of a situation, of climate change overall, and measuring this is only possible to a limited degree. In the end you will often talk to people and focus more on qualitative indicators, which is good (you should have a combination of qualitative and quantitative indicators), but it still sometimes becomes difficult to actually say how we measure this, and you could also see in the indicator I read out that there was a certain vagueness to it, because in the end what you can do is ask people whether they feel they have a better understanding of something. So this was definitely a challenge that we faced.

Then the classical one: time frames for measuring EBA indicators and EBA activities overall. EBA activities only become effective after many years, and they go beyond project cycles, which is definitely also the case with this project, which, as I said earlier, officially ended in 2019. And of course it becomes difficult to follow up on how the M&E system continues to be used. We tried to make sure that the system is actually being used and understood by the people who live and work in the area. So what we did was pass the task on to the provincial departments and develop a manual for the implementation and usage of the M&E tables, and we did training both on the ground in the communities and for the partners. However, it is hard to follow up on this. I have a little bit of an update on how the system has been used, but it does stay a little vague, and it's hard to make sure that it keeps going. Then language and terminology was also a challenge.
What can happen is that you define indicators, and then the people who go and try to measure them have a different understanding or idea of what should be measured, compared to what the other side in an interview or discussion understands should be measured. So making sure that everyone understands and is on the same page ("hey, we want to look at this, and how has this evolved?") is difficult, and it was a problem in this case too, when people went back later on to make sure that everyone really still understood what exactly we were trying to look at. So being very clear in your terminology, and having everyone involved able to actually explain it well, is important.

Just a brief insight in terms of what has happened since. As I already mentioned, it's not easy to get a follow-up on this, but since project implementation started in 2016, there have been some results on certain process and outcome indicators. However, the long-term monitoring is not in place yet; that also lies in the nature of the thing, since the monitoring simply hasn't been going on for long enough. I know from talking to my colleagues in Vietnam that forest enrichment measures have started to contribute to higher forest density and to a richer composition of forests, and that the forests are better maintained, that they are healthier and can provide better goods and services to people. They have also shared with me that capacity building measures have led to people being able to share their knowledge with other farmers in the area. And I know that, for instance, the coastal forest enrichment that we've been focusing on in one of the provinces has actually led to an increased forest density, which in turn has resulted in better services for the reduction of drifting sand. These are just some of the insights that I could get from my colleagues, but as you see, I don't really have numbers on this. So making M&E long term, and keeping on following up on how it's being implemented, is a challenge.

That is the end of my presentation, so I'm going to stop sharing the screen. And with this, we would actually like to move on to another little survey that we have prepared, because, as I already mentioned a few times, we would like all of you to actively participate and share your examples and your knowledge on M&E at this point, and we hope to kick off a good discussion with you through these little questions that we have prepared. It would also be great if afterwards you're happy to share something: just unmute yourself and share it with your own voice. If you have questions or comments on the presentations, please post those in the chat and we'll get back to those along the way. So I will hand over to Emily again. Emily, are you talking? I can't hear you.

I was trying to make sure I was unmuted there, but yes, perfect. It is actually the same Mentimeter link that's already in the chat, if everyone can join us there. We have two questions in this second part, and the first one, with these two presentations in mind, is: which of these guidebook components do you find in your work to be the most challenging in designing and implementing monitoring and evaluation?
I think I can comment, and Sylvia and Luise please chime in as the answers come in, but this is something we've seen pretty consistently in presentations of the guidebook and in feedback on it: this question of defining indicators, baselines and targets seems to be a challenge across the board for EBA. It seems to come out on top in the ranking of what is most challenging when speaking with different audiences. Should people put their questions directly in the chat or go ahead and unmute? So if you have questions about the presentations, go ahead and pop them in the chat, and we will get back to them after this little bit; but then for our open discussion, we thought you can just raise your hand and unmute yourself.

Let's go to the final slide that we have here, which is a more open-ended question: do you have examples of implementing monitoring and evaluation that you want to share? Whether that be a success or a failure, anything you put in will be anonymous and show up on the screen here, and we can start a conversation with these examples, whether it was a project you directly worked on or one you know about, success or failure, just to get the conversation going. And then, Sylvia and Luise, maybe we can start answering questions while this is up and people are submitting answers.

Yeah, definitely. So I see here that we have a couple of questions from Mutjaba Ali. One is about whether the EBA M&E guidebook can be used in projects that only have a small environmental focus, or whether it requires an exclusively environmentally focused project. I would say that across all projects there are obviously certain M&E components that stay the same, so you can use this guidebook for non-ecosystem-based adaptation projects too, and those will still basically benefit from the steps that are outlined in it. What this guidebook does, of course, is focus on ecosystem-based adaptation, so it draws attention to issues that you need to consider in an EBA project, linked to the fact that there are a lot of ecological and social components involved that interact, often over even longer time frames than in other adaptation projects, especially those using grey infrastructure. So, in short: yes, you can use this guidebook for projects that have only a small environmental component.

I'm also seeing that you are asking about the difference between the two guidebooks, so the EBA M&E guidebook and the one Luise mentioned, which I believe is the more general adaptation guidebook that was developed by GIZ some years back. Again, the difference is that EBA has its own set of peculiarities, let's say, and characteristics, due to its nature of working with ecosystems and people to help people adapt to climate change; there are a lot of additional considerations you have to take into account, and monitoring and evaluating EBA has its own set of challenges.
So the guidebook that we've introduced here is specifically focused on EBA projects and will give you all of those additional components that you need to be thinking about, which would not be mentioned in the previously developed guidebook that is more generally focused on climate change adaptation; the previous one has a much broader focus than the EBA guidebook.

Maybe quickly adding to that, Sylvia: I think this was also definitely what we experienced. If we'd had an EBA M&E guidebook, it would have been super useful at that point, because we ran into quite a few of the challenges that are mentioned in the guidebook. And I think for us it would have been very useful because, as Sylvia mentioned, Adaptation Made to Measure is more generally about adaptation, and we had to specify it from there for our own purposes.

I've also seen the question: was a post-project monitoring budget built in, and for how long? So yes, it was, but not for too long. The GIZ involvement in the project ended around a year or a year and a half before the project officially ended, so this phase was actually used to hand everything over to the partners, and this was not only for the M&E system but also for the other parts. So there was a lot of handing over and making sure that M&E would continue. We had a little bit of budget planned in, for instance for the training phase, and for still checking in here and there and hearing how things were going. But this was for a very limited period of time, a maximum of a year and a half, so definitely not long enough for what you would need for M&E.

We have a first response to the question. That's great, thanks very much for sharing the link to that, that's super. If anyone else has anything similar, please pop it in on Mentimeter; equally, if you just want to tell us a bit about your experience of working on M&E for EBA, you can also unmute yourself, because we are trying to have a discussion. I realize it's a little bit less organic than when you're together in person, but do feel free to share examples and/or challenges, and of course solutions if you have overcome anything in particular. So don't be shy.

In the meantime, a couple more questions have come in. Could you speak to the challenges and best practices for assessing long-term impact, e.g. with regard to attribution of results to a specific project? Good question. Yes. So, indeed, as is implied in this question, there are many challenges associated with assessing long-term impact, one of them being simply a logistical one, in the sense that projects are often on a short time frame, much shorter than it takes, especially in an EBA project, for results to become visible. If you are reforesting an area, or naturally regenerating a forest, that can take many, many years, and so you will obviously need to factor in that time frame before you can try to understand whether the ecosystem is delivering the adaptation benefits to the beneficiaries in question. So the challenges include the time frame, the costs involved in monitoring over that time frame, and many more, which are all outlined in the background section of the M&E guidebook for EBA.
There is a whole section about that, so I won't go into it too much here. Some of those challenges are specific to EBA and others overlap with adaptation projects in general. It's not always straightforward to overcome those challenges, but throughout the guidebook we have included recommendations and suggestions, as well as examples of how people have addressed them, i.e. what could be best practice for assessing long-term impact. One example included in the guidebook, for instance: to overcome this challenge of the long-term timescale, and also of the capacity needed to carry out reliable M&E, IUCN has worked with university partners and research centres in West African countries, and the M&E system has basically been integrated into their long-term research projects. They were working with different academics, linking up more socioeconomically oriented researchers with ecological researchers, to design a long-term M&E process. So that's just one example; another relates to integrating M&E processes into local institutions, and so on and so forth. But we have included suggestions on that in the guidebook.

I see that someone just made a little post in the Menti saying that it would be useful to be able to view examples of indicators and targets from more projects. Definitely agreed; it would be super nice and useful also for us to hear about more examples, because it is a challenge that we're facing that there don't seem to be too many M&E systems for EBA out there yet. So the more examples we can collect of this, the better we can learn from each other. If you do have something, as Emily said, no matter how well it worked out, I think it's always good to share and learn, and especially in this field we all need this. So please go ahead and share.

I'm also seeing some questions related to indicators. There's a question about examples of ways to standardize indicators so that they can be compared across programs and regions while still preserving the need for context-specific metrics, and then a question about whether the EBA guidebook includes standard indicators to measure the benefits to people, to ensure that EBA interventions are effective and not just business as usual. The guidebook itself does not include a list of indicators, because that could be endless, but it does point to other sources that have indicator lists, both in guidebook format and available online, and that is key when you come to looking for indicators. EBA brings together work from sectors that have been doing this for a long time and have thought about it a lot. So you can look to the biodiversity and ecology community for well-established indicators that will tell you about ecosystem status and health and what kinds of services an ecosystem is providing, and you can look to the development community for established indicators about the adaptation benefits you're hoping to see resulting from the measures.
So there are lots of lists on indicators that already exist. While indicators obviously need to be context specific and work in the community you're implementing something in, you don't always need to start from scratch, and the guidebook will point you in the direction of such lists in the useful additional resources sections.

Coming back to the issue of standardizing, so having some standardized indicators in order to be able to aggregate across this kind of site-level focus: yes, if you are working, for example, on a program that has multiple projects within it, what would be ideal is to coordinate among the various sites. Especially if you're working in a region that will likely have a similar ecosystem type, you can agree that the program you're implementing across each site will have a set of standardized (the same) indicators, while also maybe having some more locally specific indicators. Then at least you will have a subset of indicators, used at, let's say, your five project sites, that will aggregate up to a program level. And you can use the same approach, if you have this kind of influence, at a program or multi-program level as well. Of course, key to this is working closely and communicating with whoever is involved in that locality, in that province, in whatever unit you're operating in, or within the organization that is implementing the programs, to coordinate; that's all about communication among projects, programs and partners. So it is certainly possible to agree on a set of indicators that will be used everywhere, in addition to locally specific ones. And then there are also ways of using proxy indicators, for example, to then feed into one indicator at a higher level, which can help you get around the issue of needing context-specific indicators while also having some idea of what's happening at a more aggregated level. Lots of good questions coming in. Yeah.

There was also someone, Eliza I think, who raised her hand. I don't know if you're still here and want to share something?

I would like to ask: we are working right now with an IDC project in one of the affected areas here in the Philippines. The project is a three-year project, integrating different communities and working together. And we are drafting the indicators; actually, we really don't have the indicators yet, because we are identifying, within the communities, based on the VCA or vulnerability capacity assessment, what the needs of the community are. So what would be the timeline for us to expect, if we are implementing the program already, to review the indicators as well? I mean, is it every year, or at least every project year, to have a certain review and then check the indicators?

We do have to come to an end soon, but I don't know, Sylvia, if you want to give a quick answer to this one. You were breaking up a little bit, Eliza, but did I understand that you were trying to get a sense of the time it could take to develop indicators, using the approach used in Vietnam or just generally speaking? Yeah, it's something like that, because we are still working with the community in terms of drafting the indicators that we will create.
Yeah, I mean, it would be interesting to hear from Luise how long it took using the five-step approach, because there are obviously lots of different approaches to developing indicators. In any case, they should always, always, always link back to the theory of change, if there is a theory of change in place; that's one thing I would say. The approaches would certainly vary in terms of time frame, but, for example, the case Luise presented from Vietnam used a very specific participatory process for indicator development using that five-step approach. So maybe you can tell us a little bit about how long that took.

Yeah, sure. We needed something around four to five months to develop the system, counting from when we started going out and actually talking to the communities. It was a long review process between me and my colleagues, always sending things back and forth and discussing how exactly we should go about it. So you do need a little bit of time, I think, to let the ideas sink in and then rework them a little, take them back with you and develop them further. Plus, not everyone in your team might know equally well how to work on such a system and develop it, so you also need to make sure that everyone is on board and understands the process well; planning some time for that is important. I also understood from your question that you're currently in the vulnerability assessment phase and wondering when you should start developing indicators. If your vulnerability assessment is essentially your baseline, I think it's good to get that in place first. In our project we had a baseline drawn at the very, very beginning and then a more extensive vulnerability assessment later on, which definitely influenced our M&E system but wasn't the only source, because we had the baseline data from before. But if this is your core baseline data, I think it's worth waiting for this phase to be done before starting to develop the indicators, because you do want to have a good baseline to relate things to.

I think we will come to an end now, because we're definitely over time. We've been asked a few times if we can share our presentations. The recording will definitely be shared. I don't know exactly how it works with sharing the presentations, but I think we'd definitely be happy to share them. This is maybe a question to the technical team; I don't know if you can quickly come in and give us some information on this, how presentations will be shared or if they will be shared?

They will be uploaded, possibly within three days, by the end of the festival, hopefully sooner, but I'm sure you will get an email on how to access everything.

Perfect. Thanks a lot. Okay, then I think we will close at this point so as not to take up more time. Thank you very much for these really interesting questions and discussions. It was really great to be here with you, and we wish you a nice afternoon or evening, depending on where you are.

I just wanted to add: feel free to reach out to us at any point in relation to this work. The guidebook exists now, but we hope to use it as a starting point for conversations and for getting some good work going in this area. So if you have questions, get in touch; we're especially always looking for examples and good practice cases.
So, yeah, feel free to be in touch and to form a good community of practice around this. I will quickly post my email address in the chat so that you have mine, and then maybe Emily and Sylvia you can do the same. Yeah, I think my email should be in the presentation and on the website, but we've just posted them in the chat box in case anyone is interested. All right. Thanks everyone. Bye bye. Bye.