The first question is about laying the foundations for democratic and accountable governance, or deepening the degree of democratic and accountable governance; the second is the extent to which a donor-funded programme like Making All Voices Count actually incentivises and rewards the kind of careful, critical, evaluative approach that lots of us might be committed to. I want to just make the caveat, before we go into detail, that this isn't in any way an intention to apportion any blame.
Seven or eight years ago, I worked with a colleague on a review of the impact and effectiveness of transparency and accountability initiatives, and at that stage hardly any of the initiatives that we reviewed were tech-based. We talked about impact and effectiveness as separate but connected: we defined effectiveness as whether initiatives were implemented as planned, and we defined impact as being about what they achieve, pointing out that transparency and accountability initiatives often seek quite varied objectives in themselves, right through from material, developmental benefits, through democratic objectives, to citizen empowerment. The reason we found it useful to distinguish these was that they're different but connected, and in a sensible theory of change you would have both: you would have the inputs that are meant to have an effect of a certain kind, and then you would
have a series of assumptions holding those together in a theory of change, which, if they all hold good, mean that the inputs and outputs which are effective all add up to the outcome and the impact. Tech approaches might well be effective within tech's own terms and within their own parameters, or they might not be; I'm sure we can all think of tech initiatives that haven't been. But when you look at them from a civic tech perspective, what I want to argue is that the relevant question is not whether they are effective within their own parameters; the relevant question is whether they had any impact on the underlying governance problems that might have given rise to the service delivery deficit, or to the lack of opportunity for citizen voices to shape policy, or to the unclosed feedback loops that might need remedying by technological fixes. So it is important to know whether the tech was effective, and that might be part of the impact story, but it's not the answer to the impact question. As we argued in 2010, in that piece of work I was talking about, in relation to transparency and accountability impacts in general, the best approach to understanding governance impact is a theory-based approach to evaluation: one that identifies and examines all of the assumptions that are assumed to link inputs, outputs, outcomes and impact in a governance programme or project. The interrelationships between those can be quite complex, and the assumptions can be quite buried, and a theory-based approach to evaluation, which has been around for a long time but on which there's been a lot of publication and work in the governance sector over the last decade or so, is in my mind the best way to do it.
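The distinction between effectiveness and impact can be caricatured in a few lines of code. This is a minimal, hypothetical sketch, not anything from MAVC itself: the results chain, the assumptions and their truth values are all invented for illustration. The point it demonstrates is the one above: an impact claim only stands if every assumption linking one level of the chain to the next actually holds, however effective the tech was at its own level.

```python
# Hypothetical results chain for an imagined tech-for-accountability
# project; each level is meant to lead to the next.
chain = [
    ("inputs", "SMS reporting platform built and promoted"),
    ("outputs", "citizens submit service-delivery complaints"),
    ("outcomes", "officials respond to complaints"),
    ("impact", "accountability relationships strengthened"),
]

# Each link between adjacent levels rests on an assumption, which may
# or may not hold in practice (values here are invented).
assumptions = {
    ("inputs", "outputs"): ("citizens trust the platform enough to report", True),
    ("outputs", "outcomes"): ("officials have incentives to act on reports", False),
    ("outcomes", "impact"): ("responses feed back into governance norms", False),
}

def impact_claim_supported(chain, assumptions):
    """The impact claim survives only if every linking assumption holds."""
    for (level_a, _), (level_b, _) in zip(chain, chain[1:]):
        description, holds = assumptions[(level_a, level_b)]
        if not holds:
            print(f"broken link {level_a} -> {level_b}: {description}")
            return False
    return True

# Here the tech is "effective" (the inputs-to-outputs link holds: people
# do report), but the impact claim fails at the first buried assumption.
print(impact_claim_supported(chain, assumptions))
```

A theory-based evaluation is, in essence, the discipline of surfacing and examining each entry in that `assumptions` table, rather than only counting the outputs.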
So that's the theory; now let's talk about Making All Voices Count. In synthesizing all of the lessons from the Making All Voices Count programme into that publication up there, about appropriating technology, we found it useful to differentiate between clusters of messages that relate to different clusters of things that we funded, and things that we saw from the things that we funded. On the one hand, tech solutions to discrete service delivery problems: the programme funded a lot of those, perhaps especially early on with innovation grants, and there's a block of three messages in that synthesis report that refer to how technologies can play decisive roles if the problem is a lack of data or planning information; to common design flaws that tech-for-transparency or tech-for-accountability initiatives often fall into; and to how, obviously, transparency, information and openness are not sufficient in themselves to generate accountability. Then there's a second block of messages, about attempts to apply technologies to some of the broader, more systemic governance challenges; again, a number of projects funded by MAVC were of this type, and here there are some messages which, for me, having come at this programme from quite a skeptical perspective, were new and interesting and refreshing: recognition that technologies really can support social mobilization; they really can support collective action by connecting up citizens; they can open up, and hold open, new spaces for engagement that might not have existed before between government actors, state actors and citizens; and they can help to empower citizens and to strengthen their agency for engaging with actors more powerful than themselves. And then there was a third block of lessons, which are more about the much more transformative, difficult and long-term project of building democratic and accountable governance, really laying those
foundations. Here we had to say we found that technologies in themselves currently contribute very little. That's not to say they're not there; of course they're there, because technologies are in all of our everyday lives, and in the everyday lives of governance advocates and activists in countries all over the world. But the actual contribution of the technologies to that governance impact was pretty limited; not to say it will always be like that, because there's potential for further developments that might change this, but that's what we had to say at the moment. So I think we need to conclude from the body of MAVC evidence overall that solving those discrete information problems in service delivery using technologies might achieve the aims of the tech initiative, but that the aims of a tech initiative, if they are at all realistic (and many of the proposals that were put to us in MAVC stated quite unrealistic objectives to start with), tend to stop a long way short of laying the foundations of democratic, accountable governance systems. Which might be fine; you need things to happen at all levels. But there has been a very strong tendency in the field, as we all know, for over-claiming in terms of the objectives of tech projects. The problem that I think we all really need to focus on is this: it's much easier and much quicker to assess the effectiveness of tech solutions, those tech effects, and to capture those in quantifiable indicators in the short term, than it is to assess the governance impact of initiatives that might use technologies, along with other approaches, as part of a much more complex strategy to make much more complex changes happen in governance systems. And sometimes what gets monitored or evaluated is the easier thing to monitor or evaluate, rather than the most relevant thing. So I'm going to hand over to Duncan to give some illustrations from Making All Voices
Count of how these tensions played out. Okay, brilliant, thanks Rosie. So I just wanted to start with the original theory of change behind the programme. Going back to 2012, four donors sat in a room and developed quite a complex theory of change that guided the development of the programme. This theory of change contained assumptions about governance which were well grounded in the cutting edge of the governance literature that existed at that time, but there were also many assumptions in there that were more around the potential of technologies to play roles in governance. At that time there was a lot less evidence on that aspect of the theory of change, and actually, if you look at it, some of these were more wishful thinking than evidence-based assumptions. So part of Rosie's and my role in leading the research, evidence and learning component had to include testing some of these tech assumptions that were built into that theory of change: seeking more information, more knowledge, to help refine the programme as it went along. In terms of programme design, that sounds great. What I'm going to talk about is where that ambitious, learning-focused programme design hits the reality of a number of pressures which constrain the ability to refine and adapt in light of what's being learned. First off, the evidence-free assumptions about tech in the original programme were not in reality open to serious contestation. Contesting them would have challenged some of the core functions and setup of the programme, and that wasn't something that was really open to question.
These were things that the consortium partners were particularly wedded to, or particular donors were wedded to particular approaches, and actually having a more contested conversation about what those things looked like, and the best approaches to use, wasn't really on the table at a number of points. So at the start of the programme, in the interest of quickly issuing an open call and starting to disburse grants, we were told actively not to work with innovation and scaling grantees to develop project-level theories of change. While doing so would have been costly and would have slowed up the granting process, it was also a missed opportunity to really appraise and help the development of more realistic proposals, and to identify much more appropriate ways of measuring the outcomes and impacts of those initiatives. We also experienced a number of tensions around logframe indicators. Whenever anyone mentions logframes our eyes roll, but at some points they can actually be a useful programme management tool. These indicators were drawn from the programme's theory of change, but didn't really differentiate between the diversity of the kinds of work grantees were doing. So throughout, there were tensions over comparing apples with oranges. There were particular indicators around reach and engagement, and the projects we funded didn't necessarily speak to those higher-level aggregate indicators. For example, if you were trying to reach huge numbers of people, reach figures are a really important, quite useful measure; but if what was actually going to make the difference was very careful political intermediation, what is it that you're measuring that's useful? This led to a number of problems, and in the interest of time I'm going to skip over some of those.
Some of us took the position that it was really important to measure the things that mattered: maximising reach and engagement weren't necessarily the most important things. But our position didn't hold that much sway when there were particular political drivers that meant having those top-level figures, although meaningless in a lot of cases, was needed to make the political case, particularly at donor level. Another issue around indicators and reporting was being able to produce gender-disaggregated data. We're all very keen to be able to demonstrate that our work results in greater gender equity, but a lot of the theories of change behind some of the projects we supported relied on the perceived privacy and anonymity that the technology options offered people in reporting problems. Not having to stand in front of perhaps the very person you need to complain about, to go and put something in a suggestion box, is quite a critical part of whether people might engage with that technology or not. So in some cases we were under a lot of pressure to produce gender-disaggregated data, which was counter to the theory of change of the particular project. Halfway through the programme there was an opportunity to revise some of those logframe indicators. We all agreed that some of these things weren't particularly useful as indicators of what different projects were achieving or not; recognising the apples-and-oranges problem, we said, okay, perhaps these aren't the right things to be monitoring. But again there was a problem whereby some of our donors needed to have a time series across the entire programme, with consistent indicators for the four and a half years so you could see the pattern over time, and it wasn't politically acceptable not to have that.
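The gender-disaggregation tension is, at bottom, a data-model conflict, and a tiny sketch can make it concrete. Everything here is invented for illustration (the platform, fields and reports are hypothetical, not drawn from any MAVC project): if a complaints platform's theory of change depends on anonymity, no identity fields are collected, so the disaggregated indicator the funder asks for simply cannot be produced without changing the design the project relies on.

```python
from collections import Counter

# Hypothetical anonymous complaints platform: by design, submissions
# carry no identity fields at all, because perceived anonymity is what
# makes people willing to report in the first place.
anonymous_reports = [
    {"id": 1, "issue": "water point broken"},
    {"id": 2, "issue": "clinic closed"},
]

def gender_disaggregate(reports):
    """Count reports by gender; reports with no gender field land in 'unknown'."""
    return Counter(r.get("gender", "unknown") for r in reports)

# Every report comes back 'unknown': producing the donor's indicator
# would require collecting identity data the theory of change forbids.
print(gender_disaggregate(anonymous_reports))
```

The choice is then between breaking the project's own theory of change by adding identity fields, or reporting an indicator that is structurally empty, which is the pressure described above.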
And then finally I just wanted to talk about distortions. The programme contained multiple incentives to distort, fudge, grossly simplify or dumb down, all the way up and down the aid chain: from individual grantees proposing what they were going to do and framing it according to the perhaps over-simplistic framing that the programme led with, or reporting on what they said they did versus what they actually did; to consortium members filling out logframe indicators and reporting up or down; through to donor representatives themselves having conversations with the very top level of ministries. So I think we'd all agree that learning is a really important thing in terms of improving our practice, but we need to recognise the quality of evidence on which learning is based. Just throwing it out there as a question: how many in the room would be happy to educate your kids based on facts derived from the content of grant reports? Mull over that question. Three very quick takeaways from this. First, civic tech clearly fulfils that central principle of development theory famously summed up by Andrew Natsios, who I think was the administrator of USAID for four years in the early 2000s: that those development programmes which are most precisely measured are the least transformational, and those which are the most transformational are the least measurable. Secondly, the organisational culture and drivers of donor-funded innovation programmes are very often in multiple tensions with the painstaking evaluative approaches that are needed to assess impact in governance work. Some donors find these tensions easier to resolve and navigate than others, but the tensions are definitely there.
And thirdly and finally: although it's harder, governance impact is not unmeasurable. I really don't want the takeaway from this session to be, oh, you just can't measure governance impact. There's a whole literature, a whole body of experience, on how to assess impact in governance work; it's just located in different fields from most civic tech work, namely the governance field and the development evaluation literature. In MAVC, given how little the highest-prioritised logframe indicators were going to tell us about the extent and mechanisms of governance impact, we did, later on in the programme, start using some of these methods, probably too late for them to be maximally useful. But there are lots and lots of approaches to doing it, and this is a call for the civic tech community to start turning to those methods and using them to evaluate their impact better. Ultimately, the field needs to take some steps to avoid letting the least transformative work be the best measured. Okay, thank you very much, both Rosie and Duncan, very interesting talk.