Well, hello out there. Good afternoon, good evening or good morning, depending on where you're joining us from today. Welcome to Engineering for Change, or E4C for short. Today we're very pleased to bring you the latest in our 2017 webinar series, on the topic of Excuses, Optimism, and Complexity: Navigating Impact Measurement in Social Innovation. My name is Iana Aranda and I'm the Director of Programs here at Engineering for Change. I'll be your moderator for today's webinar. If you're following us on Twitter today, I'd like to invite you to join the conversation with our dedicated hashtag, #E4CWebinars. I'd like to take a moment now to tell you a bit more about today's webinar. Measuring the impact of social innovation projects can be approached in a variety of ways. There are many competing opinions on what can be measured, what can't, and why. As there is no set of regulations for social innovation practice to date, practitioners must decide how to measure impact ethically to ensure that interventions are helping rather than hurting communities. Today, we are joined by Lauren Weinstein and Chris Vanstone of the Australian Centre for Social Innovation, or TACSI, who will help us to understand social innovation measurement techniques that aim to accelerate learning through the design and implementation process. I'd like to welcome both of you and thank you for joining us today. Before we get rolling, I'd also like to thank the E4C webinar series team. If anybody out there has questions about the series or would like to make a recommendation for future topics or speakers, we invite you to contact the team via the email address visible on the slide: webinars@engineeringforchange.org. Today's webinar is part of the E4C professional development offering. Information on upcoming installments in our series, as well as archived videos of past presentations, can be found on the E4C webinars webpage, as well as our YouTube channel. 
Both of those URLs are listed on this slide and can also be found on our platform. Before we move on to our presenters, I'd like to tell you a bit about E4C and who we are. E4C is a knowledge organization and global community of over 1 million engineers, designers, development practitioners, and social scientists leveraging technology to solve quality of life challenges faced by underserved communities. These can include access to clean water and sanitation, sustainable energy, improved agriculture, and more. We invite you to join E4C by becoming a member. Membership is free and provides access to current news, data on hundreds of essential technologies in our solutions libraries, professional development resources, and opportunities such as jobs and fellowships. E4C members enjoy a unique user experience based on their site behavior and engagement. Essentially, the more you interact with the E4C site, the better we will be able to serve you resources aligned to your interests. We invite you to join E4C's passionate global community and contribute to making people's lives better across the world. Check out our website to learn more and sign up. E4C has two webinars coming up in February. Our next webinar will be in collaboration with the Impact Design Hub on Thursday, February 23rd at 11 a.m. Eastern Standard Time. We will be discussing development engineering practice as part of our topic of supporting development actors to practice impact design. A week later, we'll do a technical deep dive on the topic of the role of robotics in global development with Raj Madhavan, who is the founder and CEO of Humanitarian Robotics Technologies. Check out the E4C professional development page for more information and registration details. If you're already an E4C member, we'll be sending you an invitation to both webinars directly. Now, a few housekeeping items before we get started. I'd love to see where everyone is from today. 
So in the chat window, which is located at the bottom right of your screen, please type your location. If the chat is not open on your screen, you can access it by clicking the chat icon in the top right corner of the screen. So I'll get us started with my location, actually. Coming to you all from New York. There we go. I see folks are already answering in the Q&A window, and I have folks from Indiana and Pennsylvania. But again, we'd like to encourage everyone to use the chat window to answer this question so that you can get accustomed to it. I see folks from all over the States: Florida, Pennsylvania, North Carolina, Indiana. Of course, we have folks from Australia. Phoenix, Arizona; Costa Rica. Again, please do use the chat window to answer. Denver, Germany. Lovely, lovely. For those of you who, again, don't see the chat window: you can click on the icons in the top right-hand corner of your screen, and the chat window will pop open. Any technical questions, administrative problems, or just comments should go into the chat window. You can also send us a private chat by finding us in the dropdown. You can also use the chat window to type any remarks. But during the webinar, please use the Q&A window, which is located below the chat, to type in questions that are directly for the presenters. That way we can keep track of them, and your question won't get lost in the mix of chatter. If you are listening to the audio broadcast and you encounter any trouble, try hitting stop and then start. You may also want to try opening WebEx in a different browser. Following the webinar, to request a certificate of completion showing one professional development hour, or PDH, for the session, please follow the instructions at the top of the E4C professional development page. Again, the URL is listed right here. So thank you all for entering your locations. Great to see folks from Australia and all over the United States and beyond. 
We're really glad to have you here. Now with this, I'm going to go ahead and move on to introduce our presenters. We have two wonderful presenters today. Chris Vanstone is the Chief Innovation Officer at TACSI, where he leads innovation. He started his career as a product designer, designing biscuits, cameras and razors, but has spent the last 14 years working with interdisciplinary teams and communities to co-design solutions to social problems. And he is joined by his colleague Lauren Weinstein, who is a senior social innovator at TACSI and leads co-design and systems change. She's a multidisciplinary designer with a background in sociology and social design. Her experience ranges from ICT development in Nigeria to disability service innovation in Australia. We're very excited to have both of these fantastic speakers join us today. And with that, I'm going to turn it over to them to share their insights. Good afternoon, everyone. I'm Lauren Weinstein and I have Chris Vanstone here. Hi. We're really excited for the opportunity to present to all of you today a little bit about our journey in monitoring and evaluation. We work at an organization called TACSI, the Australian Centre for Social Innovation. And I'll be handing over to Chris to introduce you a little bit to what it is that we do at TACSI. But first, I just want to talk about some of the things that inspired us to explore the value of monitoring and evaluation in our work in social innovation, and some of the things that we were running up against as we tried to evaluate the impact of our work at TACSI. So when we started to look at what actual impact we were having across lots of different types of social innovation projects, we ran into a couple of different assumptions that either we were holding, or our partners were holding, or some of the other organizations in the space were holding. 
We found that people often thought monitoring and evaluation was a check on the progress that had happened to date in a project, more of a project management approach. Or the final decision on whether or not a project should be continued, that final assessment: is it good, is it bad, does it work, does it not work? We also heard a lot about how evaluations would be the content you need for a pitch for future funding. So if you've got the evaluation saying the project's good, then surely you'll be able to fund the project moving forward. Another thing that we heard a lot about was a gold standard in monitoring and evaluation. So there would be one type of monitoring and evaluation tool or mechanism or approach, and that would be the one that you would need to use, and you could use it for any kind of project or at any stage in a project. And that would be the bar that you would really want to look to, for example, maybe an RCT, something like that. We also had this assumption ourselves a bit at the beginning that monitoring and evaluation is something that would restrict the innovation process, that it would hold us back from iterating and coming up with creative ideas, that it could constrain the types of more disruptive work that we wanted to do in systems change. And lastly, we've heard this across lots of different practitioners. I think lots of people sit in different camps on this, but we've heard many people say that it's not possible to measure social impact, that you just can't know how people are exactly affected or how people are benefiting from different types of design or social innovation projects, because these projects are so interrelated and context is always changing. We've learned a couple of things about each of these assumptions. 
And I think that this is what really set the stage for TACSI to go on a learning journey, because we felt it was important to at least find a way to think about how we can improve our impact, or understand what impact we were having to begin with. We'll talk a little bit more about what we've learned across all of these things, but first I'll hand over to Chris to tell you a little bit more about TACSI and what we do here in Australia. Good morning and afternoon, everybody. So we're broadcasting to you today from Adelaide, South Australia, and I want to recognise that we're on the traditional lands of the Kaurna people, and pay our acknowledgements to Kaurna Elders past and present and into the future as well. So TACSI is the Australian Centre for Social Innovation, and really we were set up by the South Australian government, although we've always been independent of government, to find new solutions to long-standing social challenges. We help organisations get insight into problems and opportunities using a human-centred design methodology. We design and develop new solutions, so often that's services, but also policy solutions. We help organisations build their innovation capability, and we take on much more ambitious projects which are about catalysing innovation across systems for the greater good. And of course you can read more about our work at tacsi.org.au. Our team, as you would have got a flavour of, is very much multidisciplinary. It's drawn from people with backgrounds in social science and design, but also in frontline practice, in policy development and in business innovation. We focus our work in three areas. We work with families, particularly in the child protection space, where we're really trying to catalyse Australia thinking differently about child protection and what it takes to enable all children to thrive. We work with older people. 
We've got a particular interest there in the baby boomers, and we're currently running a big systems change project that's looking at home and housing for boomers in particular. And Lauren will talk a little bit more about some of the work that we've been doing in disability, where we're really trying to realise the promise of a big recent policy change here in Australia called the National Disability Insurance Scheme, which is promising individualised funding for all people with a disability and a shift in that market setup. So we're really trying to help organisations develop products and services and build capabilities so we can actually realise those ambitions and the ambitions of people with a disability. We work with NGOs. We work with state governments. We work with the federal government. We work with for-purpose business, and increasingly we're working with foundations and philanthropy as well. So our work is quite broad, but it is mostly situated in these three focus areas, and all situated in Australia, with tiny bits in New Zealand - hello to Robert from New Zealand. So I want to say a little bit about our approach to innovation. I think you'll be familiar with the standard four-stage design process, and this is how we think about it. Just to give a bit of context: this is going to be the framework that we use to talk about the different kinds of monitoring, evaluation and learning that we're doing and exploring at each of these four stages. So we think about it as an innovation process. Really you're starting with a great deal of uncertainty. You might not know what the problem is. You might not know who your beneficiaries are, or who your customers are, or what they value. But then over time a good innovation process, whether you're taking a human-centred design approach or another one, is really about reducing that uncertainty. So you've got that triangle of uncertainty going down over time. 
So we talk about it as four stages. The discover stage, which is really about identifying opportunities; this is typically where on-the-ground fieldwork is happening, an ethnographic approach, as well as a review of the systems in which we're working. The design stage, where typically we're prototyping new kinds of solutions, be they policy or services. Then these last two stages, which are more traditionally where the conversation about monitoring and evaluation happens: the trial stage, where you're trying to develop an evidence base for a particular solution, and the spread stage, where you've got something that works but you're trying to spread the impact by spreading to other sites or scaling up, depending on your approach. So it's a really simple model that we use, and we're going to use it to introduce the different kinds of evaluation, monitoring and learning approaches we take across each of these four stages. And really, the purpose of innovation for us is then about testing assumptions. This is just another diagram that we often use. We often think about our work as: in the office, naming and framing the assumptions, the things that we think might be true, and then getting out of the office to test those assumptions in the real world. It's an iterative loop, and I suspect this will be really familiar to lots of you. So this loop that you see here is something that we iterate on throughout the entire duration of a program cycle, from that discover phase all the way to spread. And it really informs the way that we learn across projects. Those could be projects in any of the sectors that Chris talked about before, and we're also working in some new sectors as well, including homelessness and youth employment. We've even worked in health as well. But fundamentally our main priority at TACSI is really to see improved outcomes for people. 
We want to see people living their best lives. And for a long time we knew that the families we were working with were really benefiting from the programs that we were running and the services that we were designing, that they really enjoyed them. They would tell us stories about how they had increased social networks, or they were becoming more confident, or some families had fewer relationships that weren't so positive and were moving on to better work opportunities and schooling opportunities. These are the kinds of things that we wanted to be able to articulate, but traditional monitoring and evaluation approaches didn't quite capture all those stories. A lot of the time we couldn't find the right mechanisms to really celebrate the nuance and the complexity of how these projects were running, in a way that the families and the communities that we work with, and our partners and our funders, could all appreciate, as well as our own staff, so everyone could understand what exactly was the whole point of doing this kind of work. And so while we recognized that monitoring and evaluation was really important, we often found that the evaluations we were using were kind of coming at that later end of the process that Chris was talking about. And some of the recommendations we were hearing about projects were really valuable, and we thought, well, this would have actually been quite useful to integrate into the project design much earlier. We know that it works, but maybe there are some things that we could have tweaked or iterated on much sooner. And so today we want to tell you a story about how and why we've kind of abandoned this idea of monitoring and evaluation, and moved to the idea of monitoring, evaluation and learning. We think there's a real opportunity to think about what monitoring, evaluation and learning looks like as a process in parallel with innovation, and something that can inform our work even at the earliest stages of what we do. 
We are really starting to address this in a meaty way just now, as we're a pretty young organization. So we're on a learning journey too, to understand what exactly it means to do really rigorous monitoring, evaluation and learning in a variety of different projects in a variety of different contexts. So we're not necessarily experts, but we want to share with you a lot of the things that we've found to be really useful, and also the types of things that surprised us along the way, because we believe that measuring impact is really a way that we can improve the types of outcomes that we deliver for people, as well as a way to hold ourselves accountable to doing better work for the people that we serve and the people that we partner with. So back to this diagram, because this is going to be a little bit of the structure or the framework for how we describe some of the different things that we've tried in monitoring and evaluation. We've done a lot of work at TACSI across all of these four stages, but a lot of the time, of course, our projects start in the discover phase, and sometimes funders even ask us to just do the discover work, or just discover to design. And that's not typically the place where people are asking you to do monitoring and evaluation. But we're starting to feel like it would be a bit too risky to wait to evaluate, or understand our impact, or understand just how good the project was, or whether we had really done the things we set out to do, until three years after we started the project, once it had already come to a trial phase. 
So we've been working with an organization called Clear Horizon in Melbourne, which is really a human-centered, focused monitoring and evaluation organization, who's been helping us think about what the flexible ways to do this are: what are the ways to really understand how we hold ourselves accountable to the things that we set out to do in the discover phase, in a way that keeps the door open for us to explore, again, what all the opportunities are. I'll share with you an example from one project that we're working on right now in the discover phase. This is a little bit about what Chris was referencing before. In Australia there's been quite a big shift towards the National Disability Insurance Scheme, which is going to be giving individuals and families choice in how they spend their disability pensions: which services they want, from which agencies they want. This is really putting the control back into the hands of the people who are going to be benefiting from services. And so we're supporting one larger nonprofit organization in Australia to think about how they can develop three new innovative services that really speak to the needs and the wants and the values of the people who would be benefiting from them. 
So people in Australia with disability, across different age ranges, across different types and complexities of disability. And what we're exploring in this piece of work is really: what would radically different disability services look like if we actually designed them with, for, and by people with disability? What I mean by that is that we've brought on people with disability to be part of our research team and also the co-design team. So we're researching what people with disability and their families would really want in services, in supports that help them reach their goals and ambitions, and all of that is being informed by people with lived experience, either caring for someone with disability or as a person with disability. Throughout this work we realized that it's incredibly important to understand what the questions are that we're setting out to answer, even before we get into that design phase. In a lot of projects that I've been in before, we have a set of questions and we go out and do a bunch of qualitative research, and it's quite hard to know how you're tracking on the types of questions you're setting out to understand. We think MEL has a really important role in this space, in the discovery phase, before implementing anything: to really help you structure and track your research progress, and rigorously know which assumptions you're naming, which assumptions you're testing, to know just how much you've learned, how true those learnings are, and what the gaps still are. So we have come up with different sets of questions that we think are really important to answer, in terms of monitoring, evaluation and learning, in these different phases. In this phase what we're really trying to understand is: did we even design the right project? We would have hoped to have named that at the beginning of the project, but we're constantly checking it, because we know context can change and our learnings are going to 
change over time. So we may have had some assumptions about what the right approach would have been at the front end of the project, but as we get into it we want to make sure that that's still right. We want to make sure that we're starting to understand what the barriers are and what the new opportunities are, and we want to be able to hold ourselves accountable to the intent of this project. For us it's really important that we're going to be able to do research with, and design new services with, people with disability, so we need to make sure that we're really holding ourselves accountable to including the people that we say we want to include. We also need to make sure that we are checking back in with the respondents that we've taken with us on this journey. We want to make sure that, ethically, the families that we've engaged with are still feeling okay about the stories they shared with us. We also want to make sure that the way we have understood the things they've told us, and the ideas that we've come up with, are still articulating their stories with integrity. So we go back to the people that we spoke to at the beginning and check in with them about how they feel about these ideas: are these the things that you were explaining to us? Did we get it right? I'll show you quick examples of what some of these frameworks look like. Now, this is just kind of a boring table, but actually what it helps us do is check back in every day after research to understand if we're making progress. We basically have three lines of inquiry. We're trying to understand what users want, in that first row, and we're also trying to understand if any of these services are actually financially viable: will they be sustainable for the long term? And every day when we go out and do research and have interviews and use generative research tools, we want to make sure that we're actually answering the questions we set out to answer, and that if we have new questions we're 
adding them in here, or if we're not answering certain questions, asking what it is about our approach to research that's creating that gap in our knowledge. So that's one small way that we're starting to structure and think about just how much we're learning in a discover phase. As we move more into a design phase, I want to share an example of a project that I've been working on in the child protection space. This project is called Rethinking Restoration, and it's funded by the Sidney Myer Fund. It's a three-year project, and we're partnered with a government Family and Community Services agency in New South Wales, and we're also partnered with an academic research team and a monitoring and evaluation team. In this project what we're really trying to understand is how we could potentially redesign the child protection system so that it helps more children live at home with their families and thrive over the long term. And what we've found, of course, is that this is quite a complex space. There are entrenched practices within the child protection system, and there are lots of socio-economic and cultural factors that come into play here. It's not a real simple answer; we can't just break off one part of the system, say fix that piece, and expect the rest of the pieces to fall into place. We also have real people with real lives at stake, and there are really big risks associated with trying to test new things with families and their children. So monitoring and evaluation is incredibly important to us in this project, and we want to make sure that we are monitoring and evaluating before we even begin to pilot. We didn't understand exactly what that might look like, which is why we asked that organization, Clear Horizon, to support us with it, because we knew that we wanted to be able to tell, from the earliest stages of the design, as we were prototyping, what it was that we were learning, and what those early indicators were that we had the right idea for what we should be 
doing, that we have the right hunch around our theory for how we're going to create behavior change for people, and that we're protecting the people that we're working with at the same time. So some of the things that we're looking at understanding during this kind of phase are: what are our assumptions about how this design will actually work and behave? That might be within the service or the system or the program itself, or amongst individuals. We really want to test the logic of our design. So if we say people are going to use these three services to help them build capacity in these ways, and then ultimately they're going to end up living happier, healthier, thriving lives, we want to make sure that all of that is not just an assumption on our part, that it's actually going to work. So that needs to be backed by evidence, and we also need to try things in the real world. And then we also want to make sure that we are collecting data that indicates early outcomes. For this kind of project, we're not going to be able to see, within a really short amount of time, lots of children returning home and living safely with their parents once they've been separated, but what we can see are really small indicators that things are starting to get better. Because we want to know at the earliest stages that things are starting to improve; we don't want to wait until five years later and then say, oh well, that doesn't work, we're sorry. We want to make sure that as we're testing and iterating, we're learning from each of the activities that we do and each of the elements of the prototype. 
So for this project, this is just a small picture of a theory of change that we were trying to map out for one of the pilots or prototypes that we were working on. What you'll see, despite the quite tiny writing, is that all the way at the top we've got some really high-level, broad goals and outcomes that we think this type of thinking can contribute to, and then below that masking-tape line are the end-of-prototype outcomes. These are the types of things that we know we can be held accountable for within this project. And then you see those little yellow dots: those are things that we've named as things we want to be able to measure, and we've come up with strategies to identify exactly what data we would need to collect to measure them. Things like: are children happy and thriving, and indicating attachment with their families, and are they showing that they're having lots of healthy, productive play? Are families building some of their social capital, working towards higher education, or showing behavior change in terms of what they think good parenting is, surrounding themselves with positive role models - things like that. For each of those yellow dots there would be a different way that we would collect that data. Some of it might be from the people who are participating in the project themselves, some of it might be from us, some of it might be from our partners, some of it might be through us just hearing anecdotes, and some of it might be more quantitative data. But that's all dependent on what the question is that we're trying to answer. So what we'll do is collect data all throughout the prototype and continually say, well, this seems like it's working, this seems like it's not, so what do we need to check in on? What do we need to iterate and change for the next piece of this work? So I'll hand over to Chris to talk a little bit about what things look like in the next stage of a design project, but first 
I'll introduce a project that TACSI's been working on for quite a long time, that's more in the latter stages of that design and innovation program cycle. Thanks, Lauren. Something that's really interesting about what Lauren's just been talking about is that we now find ourselves using a MEL approach in the early stages of the process, where the primary stakeholders of that process are the TACSI teams that are doing the work. So it's not something that we're doing to satisfy funders; it's something that we're doing to improve how we do our work at the very earliest stages. But I'm going to talk about the latter stages, where of course you start to think of some other, different stakeholders that are the key audiences for MEL work. So really, I think what I'm going to share adds on top of what Lauren's just been talking about, in that you still want to know: are we doing the right kinds of projects? And you still want to know: is the design of this thing still right? So we're talking a little bit about Family by Family, and you can read more about this online at familybyfamily.org.au. This was really one of the earliest projects of TACSI, about six years ago, and it was designed through a human-centred design methodology, which really started with the question: how can we enable more families to thrive and fewer to require crisis services? It's really working in that child protection space, which is a very hot topic in many places, but very hot here in Australia. Essentially, in response to that question, we developed a model where we find families that have been through tough times and train them up to support families currently in tough times. So it's very much a peer-to-peer support model, and the professionals are in the background supporting the families to make change. In this project we actually didn't do any of the 
MEL work that Lauren's just been talking about in the discovery stage or the design stage, certainly not with the sort of rigor that Lauren's been sharing, but we did in the trial stage, and it's probably been in that trial stage for the last three years, so we're just on the verge of that next stage. Because we've taken that approach, even without such rigor, we've really been thinking about how we can build ways of monitoring into the program itself. So there's this particularly nice tool, which originally started on paper and then moved to an iPad, that enables families to self-identify the changes that they want to make in their lives and then track progress against those changes. It's something that we developed in that design stage primarily for families, so families could see change in their own lives, but now we're using it to collect and aggregate data, so we've got the data on the impact of the program. So that's one thing that we're collecting in this stage. But of course, at this stage we're also trying to develop an evidence base for the potential ongoing funders and investors of this work. So we found ourselves looking at some different kinds of things. People are really interested in things like unit cost. They're really interested in: does this actually increase usage of services where people want to see usage increased, and decrease usage of services where they want to see it decreased? How does this compare with like programs, and what are the cost benefits of this particular approach? If you want to dig into the details, all of those are online at familybyfamily.org.au. Really, some of what we did to get that sort of data was a comparison with a like site, which didn't have this intervention, to understand whether we were actually making a difference in service usage. It turned out that yes, we were, but that wasn't necessarily sustained as long as we'd 
like to see. We were able to extrapolate from that reduced service usage, in this case reduced notifications of child protection issues, to a cost saving, and quite a conservative one: we know that for every dollar you invest in Family by Family, $11 is saved for government through reduced usage of crisis and child protection services. We also know that it compares favourably on unit cost with other programs. That's perhaps not surprising stuff; when we talk about measurement and evaluation we often start there. But an important caveat to all of that, particularly in government systems, is that just because you have great data doesn't mean people want to invest. So more recently we've been doing a lot of work to understand all the other things we need to package up into the value proposition for investing in Family by Family, because political decision-making processes, certainly in Australia, are not rational processes. They can be quite emotional; it's about linking together all sorts of different political agendas, and timing is really important. So the point I want to make is that it's great to have evaluation, but it's not necessarily sufficient to create a case for ongoing investment. Now we're just on the edge of the spread stage with Family by Family. Very recently we created the Family by Family Global Hub, which sounds very exciting but is just two people, and we're supporting service delivery in two sites in Australia while also in conversation with Canada and New Zealand to support the growth of the program, and to other states in Australia as well. The crux of the conversation here, when it comes to measurement and evaluation, has been how we can actually build that in. By way of background, TACSI is
currently delivering all of these programs, but our ambition is to transfer that to not-for-profit service providers; we see ourselves very much as an incubator of new service models. We think we've done that part, so now we're looking at different kinds of structures to get investment in, to get shared ownership, and to get the model itself spreading nationally and internationally. We've been thinking really hard, and it's not a question we've answered yet, about how to set up a really good approach to monitoring, evaluation, and learning that enables local sites to do a number of things: to see whether they are delivering the program as designed, whether that's creating the intended impact, and if not, whether the design of the program itself needs to change, and whether the Global Hub is providing an appropriate kind of support as well as aggregating the change across all of those different sites. That gets quite complex in terms of the systems you need to do it efficiently and effectively. So I'll hand back to Lauren.

Across all of the work I've done before TACSI and at TACSI, what really rings true is that it doesn't matter how good your intentions are, how much you want to make an impact for people, or how much good you think you're doing: at the end of the day we really need to know what kind of change we're making, we need to know it with quite a bit of rigor, and we need to understand that impact so we can continue to make more of the good kind and elevate our work over time. What we've come to see in this type of work is that monitoring, evaluation, and learning is really about getting clear about what you want to achieve, acknowledging what you know and what you might not know, collecting the right data to demonstrate that, and then, most importantly, reflecting and learning on that data over
time, over the course of the entire program cycle. What really stood out to us, going through this project and trialling this monitoring, evaluation, and learning approach, is that we now have a different take on some of the assumptions we held at the very beginning. We've come to see that monitoring, evaluation, and learning has a role at every stage of the process; it doesn't need to sit only in the trial and spread stages, and there's lots that can be done in discovery and design. Evaluations are really just one piece of the funder pitch; there are lots of things funders will need to know and be curious about, and we shouldn't do monitoring, evaluation, and learning just for the funders: we need to do it for ourselves and for our beneficiaries and the communities we work with as well. MEL works best when you draw from a portfolio of different approaches and choose the methods that match the purpose. There will be a time and a place for an RCT, for most significant change, for a baseline assessment, for gathering anecdotes and stories about what people are experiencing over time, but it all depends on the question you're trying to answer and the type of work you're trying to do, so tailor it to what's most appropriate, what's going to help you learn the most about the impact of the project. We think that monitoring and evaluation, rather than restricting our work, actually helps us accelerate the rigor of our learning, because it helps us name exactly what is and isn't working at really early stages and gives us data to pivot and iterate on. Above all, we think social impacts are definitely measurable; it can be an excuse to say they're not. You just have to find the right
ways and approaches to identify what you want to measure and how. I think that's all our time, so we'll leave it there and open up for questions.

Thank you so much to both of you for a really fruitful and deep discussion. One question has already come in, so I'm going to kick it off with that one. It's regarding root cause analysis; specifically, the listener says they didn't hear about root cause analysis in the process. Does it enter the approach, and if so, can you speak to where?

That's not an approach we have used in our work; it's not something we felt we needed. In the early stages we're trying to understand what contributes to particular social challenges, as well as what can contribute to better outcomes, but as a named method it's not one we've used. What we've tried to do a lot of in the discovery phase is understand exactly what the issue at hand is and the complexity of factors contributing to it at a variety of levels: what's happening at the community level, what's happening amongst services and institutions, and what's happening at the more strategic policy level, as well as what we can learn from existing evidence about the things we should be exploring. We're really open to using lots of different methods, and this is a learning journey for us as well, so when the time and place are appropriate I'm sure we'll explore something like root cause analysis too.

Got you. On that notion of using a variety of approaches to really understand the picture, you mentioned the use of anecdotal data as part of the mix. Can you give us a sense of whether you have guidelines or best practices you've already developed around a balanced
mix of approaches at any one point? Is it like 30% anecdotal, 20% quantitative? Can you give us any insights regarding that?

I think that's something we're trying to work out right now as we engage in this new approach to MEL across all of our work: what is the right mix of methods and approaches at those different stages? Generalising, in the earlier stages you tend to have more anecdotal data, though it also depends on the kinds of projects. But I also wonder whether it's possible to turn anecdotal data into something more statistical, something that can give you a picture of impact across the board, using things like the most significant change approach. In terms of Family by Family, all that anecdotal data was how we came to understand that it was really working for people. We would hear stories about how things were really tough before Family by Family, but Family by Family was the one thing that changed, that the family's life was better, that they didn't know where they would be without it, and they could talk about specific things their link-up with their sharing family did to make life a better place. Yet when the government sees an evaluation, those weren't really the things that were going to help them make the case to fund it for another year. So while those things were incredibly valuable to us internally, to help us understand what was working for people, or specific things people didn't like, where we would respond and iterate based on what people said, they became more our way of gauging what needed to change. I think we're just now trying to improve our sophistication around saying, okay, these anecdotes are actually really valuable things that
government and funders and external stakeholders need to hear as well. We can do the types of evaluations you think are necessary, but anecdotes are really the way you can tell the story and provide texture around what the real experience is for people.

Got you. Pulling on that thread a little bit, and I apologize if anybody out there is not hearing me well, I see one comment about my audio maybe not being great, so we'll check that out, but at this time I have no other alternatives, so I'm going to continue. With respect to collecting anecdotes or stories or insights, and considering that we're here on Engineering for Change's platform, I think many of us are curious to know how you approach that. You mentioned during your talk that you used iPads with questionnaires and other methods. Can you speak a little about how you ensure that you collect that information, from what I imagine is a spread of families or participants, in a way that also allows you to organize it, analyze it, and do whatever else is necessary in order to tell the narrative?

Yeah, so how we do that looks different at different stages. Certainly in those latter stages it's about thinking through the sorts of systems you can build into the solution itself to collect that anecdotal evidence and systematize and aggregate it, and with Family by Family hopefully we'll be doing that across multiple sites in multiple countries. Practically, in Family by Family the tool is something called the bubble diagram, where families fill in a bubble at the center of the page, now on an iPad, to say what they want to achieve, plus the contributing factors to that, which is really a mini theory of change. They then rate their progress on a 10-point thermometer scale before, during, and after a link-up, and that gives us data on the kinds of goals families are looking to change and their assessment of
progress against those goals. In the earlier stages the anecdotal data collection has structure too, but it's different. We try to name some of the indicators, at least around how people are feeling: for example, in a research project we would want to know that people feel comfortable and engaged interviewing with us, that it's something they want to do, and that they feel it's voluntary. We want to know that the stories we've heard are ones we're describing with integrity, the types of things they would have felt comfortable communicating outwardly. So it takes a bit more of an accuracy and ethics strategy. Reflecting on the first project I talked about, some of the ways we've done this: we had two quite long interviews with families, then analyzed the data they shared with us, and I went back to them with summaries to get their feedback and check whether those were the kinds of things they were really communicating to us. Then we would pull out some of the key quotes that we felt really described the pains and the gains, the things that were helping them, and the things that would indicate an opportunity to do something differently. In other projects I've been on, that might be categorized quite rigorously in a spreadsheet, with different categories of topics we wanted to collect anecdotal evidence around. In the design phase we would need to name all the indicators we're looking for and be able to collate the data we're collecting from families, from different services, and from government agencies, and be able to analyze that and share it more widely with our agency.
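[Editor's note: for the engineers in the audience, the kind of data collection Chris and Lauren describe, self-identified goals scored on a 10-point thermometer scale before, during, and after a link-up, then collated and compared across sites, might be sketched roughly like this. Everything here, field names, site names, and scores alike, is an illustrative assumption, not TACSI's actual schema or data.]

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class GoalRecord:
    """One family's self-set goal from a bubble-diagram-style tool,
    scored 1-10 before, during, and after a link-up (fields hypothetical)."""
    site: str       # delivery site, e.g. a city or region
    family_id: str  # anonymised identifier
    goal: str       # the change the family named for itself
    before: int
    during: int
    after: int

    @property
    def change(self) -> int:
        # self-assessed shift from before the link-up to after it
        return self.after - self.before

def aggregate_by_site(records):
    """Mean self-reported change per site: the kind of roll-up a global hub
    might use to compare sites while local teams keep the raw stories."""
    by_site = defaultdict(list)
    for r in records:
        by_site[r.site].append(r.change)
    return {site: mean(changes) for site, changes in by_site.items()}

# Entirely made-up records, just to show the shape of the aggregation.
records = [
    GoalRecord("site-a", "f1", "feel less isolated", 2, 5, 7),
    GoalRecord("site-a", "f2", "calmer mornings", 3, 4, 6),
    GoalRecord("site-b", "f3", "feel less isolated", 4, 4, 5),
]
print(aggregate_by_site(records))
```

In practice the scores would come from the iPad tool itself, and a roll-up like this would sit alongside, not replace, the qualitative stories each site collects.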
Right, and how do you manage the associated risks, for example participant exhaustion? This can happen in regional or local projects but also internationally, where folks feel like they keep getting asked the same questions over and over again by different agencies, until eventually they refuse to talk about it, or are just over it, so to speak.

Yeah, that's a really good question. There are a couple of ways we do that. One is, if it's in a trial or spread phase of the project, we try to make sure the data collection is built into the experience. The bubbles Chris was talking about are very much part of families' experience of participating in the service, so rather than feeling extractive, it's part of what they do, yet we also use that data to understand how well the program is operating. We try to double up, to embed the questions we need to ask into the experience for people. In interview stages, we make sure families understand exactly what they're committing to before we even get started: this is actually going to be three interviews, this is the amount of time, these are the kinds of questions we'll be asking, is that something you'd like to commit to? At any stage they can drop out, and we make that really clear and give them the option. One of the things that happens in the child protection space is that families are often asked to tell their story over and over and over again to different service providers, which can be not only exhausting but traumatic. So we try to make sure that the way we ask people to share their stories with us is what we would call potentially therapeutic and empowering: we find different, maybe paper-based, drawing strategies or other safe ways to communicate a story that really help people end on a high and have a real conversational engagement with
people, so it's not just us extracting information from them. Another part that plays a role is that we often get to move these projects beyond research and into design, which means people have the option to start turning their ideas into reality, into some sort of design opportunity. So they tell us, this thing is really bothering me, and then we can work together on how we might make that better. It doesn't just feel like we've come in, asked some questions, and will never see them again; if they want to be a part of this over the long term, they can, and we've found that's something people are really excited about. In Australia we're also quite lucky in that there aren't a ton of other organizations asking the same people the same questions, so usually this is the one opportunity some of the families we work with get to do this, and they don't experience so much of that fatigue. But we do try to be aware of it at the outset; it's really about being clear about what people's commitment is. I've experienced that fatigue in other projects internationally, and one of the ways we try to handle it best is to really embed the data collection in whatever the design is, so we can reduce that fatigue over their engagement with the project.

Definitely. I think managing expectations early on is really important, and that speaks to how critical it is to plan early and understand what your strategy around MEL is going to be. We have one more question from a participant that I want to make sure we address before we close out: have you considered using relational modeling techniques in your approach?

I haven't tried that before, but I'm really interested to look into it. If there are suggestions or experiences that other people want to share through the chat, we'd be really keen to
hear that.

Oh, that's lovely, and we definitely encourage our listeners to connect with the speakers; as you can see, their email addresses are listed on the slide in front of you, and you're also welcome to reach out to the webinar team if you'd like to make a recommendation anonymously, or something to that effect. It's good to see a dialogue happening here, and we welcome that dialogue at all of our webinars. With that, we are approaching time. I would like to thank everybody for attending and participating in today's webinar. If you're seeking your professional development hours for this webinar, please reference the code listed on the slide when applying for your certificate. If we didn't get to your questions, or you have questions that arise after this webinar is complete, please feel free to email us at webinars@engineeringforchange.org, and of course we invite you to become an E4C member to get information about upcoming webinars. With that, I'd like to thank Lauren and Chris: thank you so much for joining us today during your busy morning to help us understand a little more about what you're doing at TACSI and how MEL can be integrated into the practice of social innovation projects. Thank you, everyone. Have a good morning, good afternoon, or good evening wherever you may be, and we'll catch you on the next E4C webinar. Bye-bye.