So I started my career as a medical researcher before transitioning to consulting just over four years ago. My scientific training has shaped the way I think about approaching problems, developing hypotheses and collecting evidence to make a robust judgment. This is no different to how I approach my role as a consultant in the evaluation space. I work predominantly for government, research and non-profit clients, and mostly in the areas of science, R&D and health. But I also dabble in a wide range of skills and disciplines. So with that background in mind, I wanted to talk through some of the similarities and differences, from my perspective, between social impact measurement and evaluation. There are two different schools of thought on these ideas. Social impact measurement and evaluation are labels for sets of communities, cultures and attitudes. They often involve work of different scale, resources and timeframes, and use different techniques. So one of the questions I wanted to ask you today is: are labels important? Does it matter what we call ourselves and our work, and where do our communities fit? Is evaluation a subset of social impact measurement? Is social impact measurement a subset of evaluation? Or are they two completely separate ideas? And if so, what are we doing here today? And if we are one, why do we have different names? From my perspective, both social impact measurement and evaluation involve critical thinking and inquiry. They involve testing assumptions and asking questions in order to seek a deeper understanding. They both compile evidence to assess and make a judgment about the effect of a program. And they both bring accountability to the way that programs, organizations and society operate. Another interesting point about social impact measurement and evaluation is that they're both discipline agnostic: they draw on the best of economics, philosophy and the environmental and social sciences. 
To me, social impact measurement is practical and applied. It involves more timely assessment, often using fewer resources. Social impact measurement is a new and exciting emerging field of practice. Evaluation, on the other hand, is more rigorous. It can be more resource intensive, it involves complex assessments, it often uses detailed frameworks to build an evidence base, and it's a more well-established practice. So within the work we do, there are some elements that are more likely to be distinctly social impact measurement and some that are more distinctly evaluation. But there's also a lot of overlap, and I think we can think of these two concepts as more of a spectrum rather than a distinct set of practices. We also know that this is true of the members of each society: some of you will be members of both SIMNA and the AES. So there's clearly overlap in the types of work you engage in and the ideas that you subscribe to. Another thought I wanted to raise is designing for purpose. Designing for purpose is essential for every evaluator or social impact measurer. Each program and policy is very different; they serve different people, contexts, needs and timeframes, and often have very different resources associated with them. This is also true for the assessment of these programs. So there are pros and cons to both ends of the spectrum of social impact measurement and evaluation. For example, at one extreme, it's not always practical or appropriate to conduct a robust randomized controlled trial evaluation. This may not deliver the best results for the context and the people involved. On the other hand, at the social impact measurement end of the scale, a simple measurement that delivers practical insights ready for implementation as the program is delivered may not satisfy those seeking to make low-risk decisions or maintain program fidelity and continuity over the life of the program. 
So as a consultant, I seek to ensure that my work reflects these differences and is fit for purpose, whatever that purpose is. If the cultures are a continuum, then the big question for me becomes: how can we learn from each other and spread the importance of the work that we do? There's been a push in recent years to bring more accountability into programs and policies across government, non-profits and the private sector. And this has led to more evaluation units and teams seeking to work with and alongside program delivery and policy. However, these evaluation units can face challenges working with the organization and demonstrating their usefulness. They're often engaged late in the life of a program and can be negatively perceived as critical assessors. Some of the big challenges that I've come across in my work are: how can program designers better incorporate planning and measurement into evaluation? This includes building in flexibility as a program evolves, but also including iterative learning and performance improvements over the life of the program instead of a simple end-of-program review. Another big question is: how do we instill the importance of evaluation or social impact measurement? How can organizations support their staff to better value this way of thinking? To understand that it's not something to be feared, but instead is something that can improve performance and drive change more quickly, efficiently and appropriately. So there are three key messages I want you to take away from the talk for discussion later on: are labels important if we exist on a spectrum; it's important to design for purpose, whatever that purpose is; and progressing our work matters more than our differences. Thank you, and I'll hand over to Simon. Thank you, Laura. Lots, lots in that. And I will just share my screen first of all, so one moment. 
So first of all, I acknowledge that I'm on the lands of the Butta-Rong people, and pay my respects to elders past, present and emerging. Now, much of what Laura said, which I inherently agree with, makes a lot of sense. I want to take a different angle, though, in describing what I think some of the differences and similarities are by, first of all, reflecting on this fantastic band called Tool. Some of you may have heard of Tool before. They had an album in 2001 called Lateralus, and the album cover of Lateralus had layers: these plastic sheets, each representing a different part of the body. The human body has so many different systems: the skeletal system, the muscular system, the lymphatic and respiratory systems, the digestive system. It was quite an extraordinary album cover. And I think that analogy works when we're thinking about the identity of those in the evaluation community and those in the social impact community. Necessarily, there's going to be quite a lot of overlap, and there's also going to be a lot of differences in what we're describing. That's the case because we're trying to apply lots of different thinking depending on a specific context. So just first a shout-out to Fresh Spectrum, who have some sensational cartoons to use for presentations like this. Just taking evaluation as an example, and I hope this resonates with most of you: when someone asks what evaluation is, the real question is, what isn't evaluation? It's really broad. I started my life as a philosopher, moved into strategy consulting, and then started working on strategy-type projects and measurement-and-evaluation-type projects within the social sector. And the answer to a question like this is nearly always: it depends. Or: I can find a way of making that work. 
In so many different ways, I feel this cartoon answers the question at the heart of this discussion today about the differences: well, there's actually not many differences, because it could be anything when it comes to our own identities and how we're thinking about it. Laura touched a little bit on this, but I want to emphasize it because I find it central to how we think about evaluators and the social impact measurement community. My whole professional life has been based around getting this right in scoping work for any client. It's a question about rigor, understanding how far you go to get a meaningful answer; who it's for, the audience; and the purpose, why we're doing it. Laura was talking about evaluation being more rigorous, possibly inherently. I'd put it out there that some evaluations with that label, and some evaluators, don't necessarily apply that high threshold of rigor. So the identity associated with how we approach rigor and how we approach fit for purpose could mean: I've got an hour with you, let me work out what's going to be most appropriate to help you make a better decision. So I just want to make sure that we all hold these three concepts, rigor, audience and purpose. Think of them as the holy trinity as we're considering the differences and similarities. It also becomes quite central to our identity. Are you the person who will always ask the next question about rigor to make sure it's really going to be right? Am I taking another year, but is it really going to be right? Or are you the type of person who defines the audience as the community, and therefore what's rigorous might take a year or five years to work through what's required? So just hold these three as you're thinking about the differences and similarities between the two communities. As you can see from my background, I thought this particular cartoon was rather apt. 
I'm often asked this question, particularly with 200 days of lockdown right now: Daddy, do you like my picture? If you'd like me to be objective, I'd have to create a rubric. So go back to the purpose, audience and rigor. Depending on whether you're an evaluator or you work within social impact measurement, your rubrics may be similar, or they may be quite different. There's a strong overlap between different methods and approaches in how people are thinking about this. And the key part here is considering what's required to make a meaningful answer. So in this case, if my audience is my daughter and the purpose is to have a positive, happy interaction, the level of rigor is almost non-existent. But if it was the Archibald Prize, and my daughter was showing me her creation, I'm like, okay, now, that's just not good enough. These are some of the differences that we all face, regardless of which community we're in, as we consider what's appropriate, what's fit for purpose, and what's the appropriate level of rigor. Oh, I'm sorry. And we're back. There's a couple of cartoons here I want to highlight, just to emphasize some of the problems with different methods and approaches and the identities that we all have. Sometimes rigor can be synonymous with having a rigid approach to what's required, or it might be forced by a particular client, be it an organization or department. And the next one is just around being adaptable to environments, where that fit-for-purpose message comes into play. It may appear through these cartoons that I'm being critical of the evaluation community. There's a really important point I want to emphasize right now, which is that the evaluation community is much more established than the social impact measurement community and the SIMNA community. That's why there are cartoons, which I'm able to find very easily; I can't find cartoons for the SIMNA and social impact measurement world. 
The evaluation identity is much more defined, has a longer history, and has had more evolutions, and the professional pathways are clearly articulated. It's very different to where the SIMNA community is currently at. The social impact measurement identity is newer and doesn't have that same long history. It's attracted people from all sorts of different disciplines, and there isn't a clear professional pathway, although that is evolving with Social Value International and its pathways for accredited practitioners. So this does become a question of culture and identity. I do straddle both communities: I've presented at AES conferences and other webinars before, and also led the establishment of SIMNA. And I just want to emphasize that there are many similarities, and significant overlaps as well, between the two communities. The key differences, though, are the cultural differences and the identity that sits with people who primarily associate with the AES or primarily associate with SIMNA. The final thing I wanted to do is just to point to the vision and mission of both organizations. Now, I'm not sure how revamped, new, refreshed or alive the AES vision and mission may be, but I think it speaks strongly to the very focused direction of the identity associated with the AES. And for SIMNA, this is a vision and mission that we've worked on recently and are beginning to share more widely. There's a difference in focus and approach to what we're both potentially trying to achieve, and therefore in how we're approaching the work of measuring change over time and evaluating it at specific points in time. I might pause there and open up to Laura. As you all would be aware, we would have had lots of time to work through this, given lockdowns across the country. But Laura, I'm interested if you've got any initial reflections first of all. Thanks Simon. And I will stop here. 
I was very interested to hear you talk about the adaptability of ourselves, and of a lot of people on these calls, to the clients that we might have, or the way that we practice evaluation: being able to adapt with different resources, timeframes and frameworks to deliver fit-for-purpose measurement. I wonder if it's also about the adaptability of the clients that we work with. From my perspective, doing most of my work with government, I've never seen social impact measurement on a tender, always evaluation. And I wonder if that's because of its more established reputation, or why that is. I'm curious to hear, I guess, who are the clients that you predominantly work with, and is it them wanting social impact measurement or evaluation? So I think the words are very important. You highlighted up front the question about labels and whether labels are important. The word measurement, is it synonymous with, or could it be a parallel with, monitoring? So there's monitoring and evaluation, or monitoring, evaluation and learning; MEL is often used and requested by different government-type clients as well as by some of the larger non-profit organizations, and it's increasingly spreading. So I wonder if there's an important overlap here between the evaluation community, which necessarily thinks about monitoring as an essential part of the work that happens, and social impact measurement, which has that monitoring angle but is probably pushing towards making a decision faster: taking the information and making the decision faster. I'm beginning to see a lot more of that happening, even with sophisticated government clients, where it's been pitched as an evaluation but necessarily it's more working together. So it starts to become more of a developmental evaluation approach. 
And that's what I've seen in the evaluation community over time: much more of that convergence towards, we're going to work together over time and we'll see what happens, which I see as being much more of a parallel with the social impact measurement thinking and approach. And then, I'm curious. So I raised at the end of my talk a question about perceptions, how I'm often perceived and how my colleagues are perceived. I'm curious to hear whether you face any of those challenges in being perceived as kind of an outsider that's come in to audit, almost, rather than to work with people to deliver outcomes. The word label is important; I'll go back to that point. I think it's excellent that you raised that, and it feeds into the culture and identity that we all bring to any work that we are doing. And so I do think that word audit is critical. I don't think of the work that I do generally as being auditing. It's often working more around: well, what does this mean? Where does this come from? Where do we want to go with it? So even if it's a more traditional evaluation, it's much more: well, the world is changing, the world is dynamic. The culture and identity that's brought from the social impact measurement perspective of the world means that we're going to adapt necessarily. We're going to help you do something different sooner rather than later. So perhaps the difference there is the timeframe of when we're trying to get results that will be useful. And that will change things. And I know we've got people working in all different types of sectors on the call, but that purpose, audience, rigor bit comes into play very strongly once again. But do you see that identity that you bring when you're working with a client come to the fore in how the interactions take shape? Definitely. And I think, again, it's all about that circle you put up of audience, rigor and purpose. 
Because I think there's a cultural feeling around the particular project and how the project is set up and tightly defined, even if the same term is used. So yeah, I think the sentiment changes, but it might still be called an evaluation, for example. Yeah. Okay. The other thing I think you've triggered for me as well with this: more of the culture with social impact measurement can be around the investment community, and more of the private sector trying to work out what has changed. So some of the different clients I'm working with have got that quite overt impact investing lens. And so it means that it's going to be much more related to specific changes that can relate back to payments, for example, so payment-by-results contracts or social impact bonds, and talking about social impact measurement in that way. That's quite a different mindset, culture and identity to: we've got a policy, it's $50 million, we've got it dedicated to these programs, we're going to invest in it, let's see what happens. It's a four-year evaluation: let's start now, and in two years' time let's hear the report, what's going on? Quite a different mindset and timeframes again around that. Definitely. And I think, to some extent, when you're talking about social impact measurement for payment by results or social impact bonds, the government-run contracts that have private investors, often those are even much more rigorous in my experience, because they have such a high level of accountability in order for government to pay out results to private investors. Indeed, which is where, on that one comment you made up front around evaluators being more rigorous, I think at times social impact measurers have to be incredibly rigorous, and possibly more so. I find that that's where I see the similarities between the two different approaches, which is where the vision and the mission of both organizations come up. 
What are we trying to achieve from this? And that's maybe where you highlighted some of what we can learn from each other. It's like, well, is there something that we're directing our energies towards? Is it good evaluations, or is it a different type of world? And I find that part and that drive really interesting as a part of this, because I do see the audience, purpose and rigor; looking at timeframes, how we scope projects, adaptability, and the overlap of different methods and approaches being a big part of the similarities of what we can all take and apply in our professions. I couldn't agree more. Maybe it's a good time to throw it open to the polls. So Francesca's set up some Mentimeter polling to get us all involved and communicating what you all think. So if you can log on to the link that Francesca's just posted in the chat, there'll be a couple of questions there. Francesca, did you want to run us through that? Thanks, Laura. So I've just put the link in the chat. Hopefully everyone can see that, and there are four questions, so you're able just to go through them and then I will put the results up on the screen. I think this was the point where we have the music in the background, and I do recommend Tool's Lateralus; there's some background music if anyone's feeling inspired. As everyone's completing the questions, I guess one thing is, as Laura and I spoke about a few times before this, there were so many different angles that we could have taken for this, for sharing a few thoughts and taking that discussion to. So just highlighting that. We could have gone into all the different methods and approaches and the similarities between them. You'll notice one of the questions up there around different examples and challenges; there are so many different examples that we could highlight, which is why we're probably going to be sitting more at that conceptual level, because you all would have had lots of experience doing that across very different contexts. 
So maybe if we move on to that first slide. Okay. That's really interesting to see. So a lot of people working in just evaluation, but also a decent number working in both. So okay, and perhaps use the chat, everyone. For those who said just evaluation, I'm interested in whether, for the monitoring and evaluation work, you can see that as being similar to social impact measurement. I thought that would be helpful, because I find that a very interesting result; I was not expecting that. I thought that most people would be putting both, but clearly there are such different identities between the different communities and the culture and way of approaching it. Laura. And also neither. I'm curious who those neither are and what you do for work. Or whether there are people learning here, people undertaking studies or masters in this area who are seeking to work in it. There we go. So Claudette is studying evaluation and perhaps seeking to work in the space in the future, or just to learn a little bit more about it. It's a good point from Susan as well that monitoring is relevant to both. And I guess I just vividly remember when I first learned about developmental evaluation, and I'm like, this is what I do when I work in partnership. This makes a lot of sense to me. Why is this evaluation? So hence the overlap between them. So it's quite interesting to see that. And on Susan's point, Simon, do you see monitoring as a part of your work in social impact measurement, or is it social impact measurement? Just to be clear, I see my work as working with clients to work out where to go and whether they're doing well or not. So those are the broad labels. Now, social impact measurement is a component of that and evaluation is a component of that. So I think that's important. 
So monitoring is necessarily a part of both: you need to be able to monitor how the program is being delivered and what's going on, to get the evidence collected over time to be able to make those assessments, and those assessments could come through with those different cultures and identities. An interesting point from Caitlin there as well: if you have a responsive and embedded evaluation team, can't it be both? And Caitlin, that's sort of where I naturally went to. And I find this quite revealing and interesting as we're working through this now. And that point from Jess at the end: I still haven't figured out the difference; it seems to be a different origin and an origin story. And so those different cultures and identities. So maybe in the version tomorrow we could do a quick Myers-Briggs test or some other personality test to understand whether that culture and identity actually comes through in the type of people who identify as evaluators versus those who identify more with social impact measurement. So maybe we move on to the second polling question. Okay, well, that answers our question. That is, I feel, a sense of relief. Laura, how do you feel? Oh, me too. Pretty definitive answer, I think. I think that matches with what everybody's saying in the chat as well, that there is a lot of overlap between the two. Yeah. Yeah. And then, just noticing the point from Kathy around evaluation expertise. I think this is where the different methods and approaches come to the fore, and how we can all think of ourselves as professionals working in this space. This space is to support organizations, either being embedded as a part of them or coming from the outside, and being able to work out where to go and what's working or not. And there's a whole bunch of different approaches and tools we need to use. And I think Ellen also said that as well, from the outside. 
So practicing evaluation, but trying to embed more social impact measurement in that approach. Yes. So it's interesting, people approaching it from two different sides but trying to get to a place where you can use both techniques as needed. Okay, and maybe on to the next question. This is the word cloud question: what's the one word that best describes what we can learn from each other? We've got a lot of purpose, collaboration, language. Language is an interesting one. Yeah. I'm drawn to the two which are bigger, which are impact and purpose. I find that quite important, actually, because that's where we're both, with our different mindsets and identities, trying to strive towards creating impact, possibly with purpose, or having clear purpose around that. And I think it goes back to that rigor, audience, purpose again: the impact and the purpose that you'll create will be different for different clients and different purposes. So, yeah, it's being able to best deliver those outcomes for whoever you're working with. Ego is an interesting one. Yeah. Who mentioned ego? And do they want to type in the chat what they mean by that? It's a really interesting word. While we're waiting for ego, that for me is wonderful to see, because it does relate to our professional identities. And so we've got a lot riding on the label, back to your point, Laura, the label that we associate with ourselves. Like, who are we? And that's where I was trying to highlight that the evaluator identity is quite strong, and social impact measurer isn't an identity in the same way; it's more that there's a community around social impact measurement that's developing through SIMNA. Interesting. So I think Jesse was talking about the ego and saying it's something we need to manage as we consider our identity, as Simon was just talking about: being able to be honest with ourselves about where we are and what we seek to do, which is really important. 
So that might relate to the rigor point and being humble. That first framing is: how far do we need to go to get a meaningful answer? And it's appreciating what we don't know in those environments. And that might be based on time, skills, the community we might be working in, or the area we might be working in as well. It's a very important point. So maybe moving on to the last question. So this touches on the skills: are evaluators and the social impact measurement community using the same skills? And it seems there's quite a bit of overlap in the skills that are being used. So I think that matches with what we've heard already. And it points to some of the comments and that sense that, regardless of which community you normally affiliate with, what work you're selling, or how you're positioned within an organization, the different methods and approaches you use could fit under both. So it feels like there's probably more convergence than I thought might be the case. I'm conscious of a comment before from Kate about the different sets of skills, with an adjective sort of describing the evaluator identity: a qualitative evaluator versus a quantitative evaluator. And I think that just points to the multiple different identities that exist in both communities and the approaches that we are talking about right now. And I think Kathy makes a really good point in the chat as well by saying, you know, we probably have the same core goal of how we can make resources most efficiently used to improve the way that we work and the way the world operates. So that's a really common purpose that we can strive towards. So maybe I can pop this out there. This is now, you know, turning up the heat; we're 40 minutes into this webinar, so I've just turned up the heat a little bit. The AES vision is specifically about quality evaluation, rather than being about a view of the world and what's required. And I find that really interesting. 
Now I also know that websites and material can be old. I know that SIMNA's is, and we've got a need to update that. But I just wanted to turn the dial up a little bit, because that for me is a really important point. If that's the case, then that changes part of our identity and our culture and what we're all striving to achieve. Because then we can agree on some core different ways of approaching monitoring, measurement and evaluation. So that's also hopefully a spark for some chat comments too. Definitely. Not just to you, Laura, not just to you. I think what you're saying is we need to update some of the language. It's not just a language thing. This is core to who we are now. I emphasize that the evaluation identity and evaluator identity is much more progressed than the SIMNA one. So in some respects, it's easier for a smaller group of people to come together and make a statement about where we're heading and what it looks like, versus a larger group with clear professional pathways, master's degrees; it's all there, laid out. So it's not the same, and it's not just changing some words. There's something really core here about how we think of ourselves and how we affiliate with that. And do you think it's about how people come to the practice, whether they're formally learning how to be an evaluator through formal study versus maybe picking it up on the job in social impact measurement? I think so. And I think that's actually really important. There are pros and cons for this, and they're equal. But the pro with the SIMNA and social impact measurement world is that it's open to many, to all: give it a shot, come and learn what's appropriate for you to do right now; versus what you highlighted up front, Laura, which I think is right, which is a higher baseline level of rigor within the evaluation community to be able to do that work. Pros and cons for both, but I do think that's definitely part of it. 
And a really good point there from Alan as well, saying maybe the AES vision reflects the evaluation culture being focused on objectivity, robustness and technique. Yeah. That last part of it is interesting. I've not yet met an evaluator who doesn't also want to change the world for the better. And being clear about what that world actually needs to look like: we all have different views about what the world should look like. Do we need to come to a view about what that is in terms of our professional practices? Or, what does objective mean without it being underpinned by a set of values? So all of that starts, and this is the bit where we could unravel this and take one word and spend an hour talking about it as well. What does objectivity mean when you're coming in with your own cultural biases, and with your own professional biases based on your own experiences too? And I think your cultural and professional biases certainly affect what projects you choose to take on, if you have the luxury of choice. Yeah. And often people, and I myself, work in areas that we think are going to be most impactful. So I certainly try to progress society in a way that I see as the most valuable, but everybody will have different perspectives on where that value sits. Well said. Very well said. I think Ken makes an interesting point as well, again picking up on how people come to evaluation or how people come to social impact measurement, saying he never thought he'd be an evaluator. And I'd completely agree: I never thought I'd work in evaluation either. I don't have formal training in evaluation; it's certainly been much more on-the-job learning. So it's interesting, the different routes you take to get there. Yeah. But I think it is becoming much more common to have some form of formal education in this area. Indeed. Indeed. Well, that's a lot of discussion there. It's pretty good. Thanks very much, Laura and Simon. 
That's a great presentation. We have some questions that have already come through, some of which you've already answered. Here's one, because there are so many self-identified evaluators in the room. Simon, would you be able to give a brief overview or example of what a social impact measurement project looks like, for different contexts and audiences? Just briefly, how do these projects look for you? Sure, sure. So I'll start. This is where I've probably been avoiding the labels and approaches a bit throughout. I think a social impact measurement project can often sound like: hey, we're building an outcomes framework, setting up the process for collecting the data over time. But inherent in that is also the cultural change that's required. So a social impact measurement project often has that focus: working with the organization and the team, establishing the right tools and data collection approaches, working out what to do with that data, and sticking with it over time. One example of a social impact measurement project I'm working on at the moment has already run a year, has another year to go, and is an ongoing relationship. It's with CAS Children and Family Services out in Ballarat: working with them over the past year to set up the thinking around what the changes are for their many different programs, building the logic models, coming up with an overarching theory of change, and then working out what exactly they need to measure and what to do with it. And that's where I think the cultural difference may come through. I say may, because undoubtedly many in the evaluation community would be doing that as well, but then it's being able to say: what does this mean? What can we learn within the organization as the data is being collected?
So at that stage now, with some data being collected and presented in dashboards, it's: what changes do we make now? Hopefully that's all right without going off on too much of a tangent. Okay, we have another question here that Laura and Simon can both address. The audience and purpose, which you both mentioned, is very important, and it was in your presentation as well. Can you give some examples of what kinds of approaches work well, or better, for different audiences or contexts? Laura? I spoke just before, so over to you, please. Really interesting question. I think it often depends. For example, with the purpose question: in the work I do, the purpose of the evaluation is often very much independent and arm's length. The client doesn't want much involvement at all; they want it to be a genuinely independent process. So they'll often set up what the evaluation should look like, who it should involve, what kinds of questions it should ask and what methods it should use, and then they step back and say: off you go, do your work, and come back and tell us your findings. We don't want to influence how those findings develop; we just want them presented at the end. I think that's a good example of how the purpose, a very independent approach, has meant that we shape our process so that we largely run the show and deliver independent findings at the end. At the opposite end of the scale, if it's much more of a participatory, engaged approach, it will be working with a client and with communities on an ongoing basis. That often involves co-designing how the evaluation is set up and who should be engaged in the process, and then doing that on a really regular basis.
So it's saying: this is our initial thinking on the evaluation, then getting feedback on it, workshopping it with clients and with reference groups, and doing that throughout the process to make sure we're on track and delivering what's needed. That's a much more engaged approach, where the purpose is both having that engagement and embedding the learning, so that as the evaluation findings emerge, the client and the community can learn from them. Simon, any other thoughts on that? Just to highlight that there's so much overlap in what we're talking about; we're just using slightly different labels at different points. And I wonder if the devil is in the detail around the types of clients. As you were describing that, Laura, I couldn't help but think: of course, that's the same as what I'm doing with a particular government client right now. The trouble is that sometimes we can't talk about it explicitly. If something isn't in the public domain, or there's confidential information, we can't discuss it, and that might be a barrier to talking about some of this. But it feels like so much is similar; we've just got slightly different labels. And I wonder if, still underpinning that, there's a difference in mindset or approach in how we go about the work. So when you were talking about working in community, I thought: okay, what does that look like? Is that where you're hearing the community's voice? Are we thinking about data sovereignty with Indigenous communities? Are we thinking about who owns which part of the data, how the information is used afterwards to make decisions, and where resources might subsequently be directed?
On the surface it feels like there are a lot of similarities; the devil is in the detail. Sectors might be different, and how we go about it in practice might differ even within each community. And I don't think that's a social impact measurement versus evaluation difference so much as a difference in how each of us, as individuals or organizations, approaches the work. Definitely, and that community can be different things. As you say, it might be working with Indigenous communities on data sovereignty and those sorts of issues. It could also just be a reference group, and that is your audience, and for them you are delivering the purpose, so it's about making sure you have their buy-in and engagement throughout the process. So you're right, it is in the detail. I have a couple of questions from Eleonore that she wants to put to you directly. There have been some really interesting ones coming through the chat, thanks Paula. There was this idea of developmental evaluation, which was mentioned as an increasingly popular approach in the evaluation community. Laura, I'd be really interested to hear how this particular approach overlaps with social impact measurement from your perspective, and then Simon, yours as well. Very interesting question. For me, and based on learnings from Simon today as well, my takeaway is that evaluation, from my perspective, is often much more arm's length, and social impact measurement is much more about working engaged with the community and delivering findings as you go, so you can make changes as needed while a program rolls out.
Developmental evaluation, I would say, has some similarities in approach, in that you're seeking to work with your client, or whatever that community might be, as a program rolls out, so you can make those progressive changes as you go. But maybe it's where the effort is distributed: I would say a developmental evaluation culminates in more effort delivered towards the end of a project, whereas with social impact measurement I wonder whether the effort is distributed more evenly throughout. I'd be curious to hear Simon's thoughts on that. Yes, so many thoughts. Great question, great question. I've always struggled with the idea that developmental evaluation is part of evaluation. I come from a background as a strategy management consultant, and I looked at developmental evaluation as: I'm working in partnership with you, let's learn, things are complex and dynamic, so let's make changes right away. So part of me has always struggled with that, and it helped me, when I first touched on this probably about eight or so years ago, to better understand evaluation mindsets and what we've been talking about today. I do think that with developmental evaluation, where projects are scoped with an evaluator mindset coming in versus someone coming from more of a social impact measurement mindset, there's a different sense of what's possible and where it goes. In particular, social impact measurement should lead to a point in time which becomes the evaluation: a point where we say this has worked, or it hasn't, and we make those calls about what's required. So again, I think it's the labels we're using; we need to get really crisp and clear about what we mean by them and where that goes.
And just to add one more point: with the clients I work with, there can often be a lot of fear about the way a program might have changed over the course of an evaluation. I mentioned in my talk maintaining the fidelity and continuity of a program: if you change the program too much, you might not be able to evaluate it against its original aims. That's often seen as an issue to be overcome, rather than an exciting embrace of new data as it emerges and of making the work fit for purpose as we go. So I think it's the mindset you enter with. That's a massive assumption, then: that what we set out with at the start, the objective, shouldn't change, and that we're not adapting. This is where I think developmental evaluation starts to push towards the more innovative, different types of programs, where it's: well, we're not sure, the objective might change, so let's use more of an emergence-based theory that underpins exploring what the consequences of changes may be in the data we're collecting at the time. So there's quite a lot in that point: we set out to achieve X, have we achieved X, versus we set out to achieve X but there could have been Y or Z, or A through F, which changes the whole dynamic. And I wonder if it's because a lot of the work I do is final, end-of-term evaluations of programs that have been running for five years. When the program was set up, the thinking was: we need to deliver exactly as we said, and we will be measured on that in five years. So there's very much a trueness to the original purpose rather than to the purpose as it might evolve. We might move to some closing remarks; we've only got about two minutes before I imagine quite a few people will have to leave, but then of course there are the 15 minutes where we continue exploring these concepts, because
there's now lots of action happening in the chat and lots of things to keep going with. But thank you both. Yeah, thanks. I want to thank all the participants today for making this event between two communities a great success, and as it's our first such event, hopefully we can do more in the future. Thank you as well to Simon and Laura for leading the dialogue, developing the presentations and answering as many questions as they could. I loved the great presentation from Laura on the spectrum, and I really liked the analogy from Simon, so thanks for the presentations. The key takeaways for me are that social impact measurement and evaluation are both discipline agnostic and have overlaps but use slightly different words; both involve critical thinking, compile evidence to make judgments, and bring accountability; social impact measurement is an emerging field of practice while evaluation is well established; and both attract people from different paths. One of the key differences I heard a lot during the session is the cultural difference between the two communities and the labels they use. So thank you very much, everyone. Your feedback would be very much appreciated; we're just putting a Mentimeter link in the chat now, so please use it to give feedback on the session. We'll leave this Zoom open for another 15 minutes for whoever still has time and wants to ask questions, and depending on how many people are still here we can break out into rooms, or stay in the same room and answer the questions that came up at the last minute. So stay if you can; if not, thank you very much and I hope to see you at the next webinar. I'll kick off by picking up on one of Kathy's points in the chat: do you think either social impact measurement or evaluation thinking fits better with social innovation and for-purpose startups?
I think that's a really interesting point, and something, Simon, that maybe you deal with more, considering how I work. I would say I work almost exclusively in the evaluation space, whereas you might be more of a crossover across that spectrum. So Kathy, I think that comes back to the identities associated with each label. My sense is that with something newer and smaller, a startup, something innovative, you go in with less of that rigorous we've-got-to-reach-a-certain-threshold mindset and more of a let's-learn-what-we-can mindset. And if we've got that spectrum, and social impact measurement generally sits at the less rigorous end, then that's probably the more appropriate mindset to take to those types of projects, programs and organizations. With the biggest caveat: go back to purpose, audience and rigor, because it may be that a private equity investor has given you a hundred million dollars to try something out that's going to change the world, and then the purpose, audience and level of rigor required to prove that up might be different. Can I make a comment, is that okay? Yes. So I find most of the time that the social impact measurement stuff works, and then along the journey there feels like a need for a little mini evaluation, almost a little pilot evaluation to test a product, usually to get investors on board or to know whether to scale. So I think it's quite interesting that when I'm doing that kind of work, I don't feel like the social impact approach always gets me to where I need to go, so it's an interesting area. Laura, really good point, and I wonder: what do you see as the gap between where social impact measurement can get you and where you need to get to? It's usually when there's a need to convince a stakeholder, and so you need something that's either more independent, which is a point you made earlier, so an outside-in evidence base, or
it's more defined, in that you're proving something is happening, but often it's only a piece of the solution, so you're not evaluating the whole thing, and the ongoing evidence isn't quite enough. It's almost like you need to stop, do a little evaluation that's independent and a bit more rigorous to get a decision made, and then you can keep going. Okay, can I jump in? Because I think that's a classic example of where, if you start with the audience, purpose and rigor and you're really clear on those, you don't need to worry about the social impact measurement or evaluation labels. What you can then do is go to the next level below, which is: what's the appropriate method or approach we need to use to provide what's required, based on that purpose, audience and rigor? And that's where these labels possibly fall away. So it comes back to one of Laura's first points: are labels important? Yes, but can they detract from what's really core, and from the similarities? So on similarities and differences, maybe we're probably talking about the same sort of thing when it comes down to it, but maybe there's a mindset of adaptability that leans a bit more towards one domain or the other. It's interesting, because I've even been involved in doing what were portrayed as reviews or management reviews, and then you go: well, this looks like classic evaluation. Or they'll say: we want you to do an evaluation of this program, when what they're looking at is their management structure, skills and training, which would almost classically be called a review. So I think the language is just really sloppy at one level, but irrelevant at another. I think you're absolutely right about the whole terminology, Simon.
I often see reviews with evaluation frameworks, so I'd completely agree with that. Which goes to your point, Laura, about critical thinking being the similarity between them. Critical thinking requires a structure. Well, that's culturally loaded, but let's work with it for the moment; other kinds of critical thinking may not need that kind of approach. But for most of what we're talking about here, having a structure is important, and that structure can relate to a framework you then use to guide your thinking. So most of social impact measurement will also be about the mindset and the structured thinking that allow you to start drawing insights and creating information to make decisions. Both have evidence for that; the nature and volume of that evidence could be quite different. Yeah. Do you see those two schools of thought, those two communities, at some point becoming one in the near future? Great question. Laura? Ah, it's a really good question, and something I was going to ask earlier but thought might be a bit too antagonistic, so I'll go for it. Considering SIMNA was set up after the AES, you obviously saw a gap in the market. I'd be curious to hear what that was, and why there was a need to set up a new community whose needs the AES evidently wasn't meeting. Yeah, sure. Great question.
So, look, the way I'd like to answer that is that there is a very strong evaluator identity, and the world of social impact measurement and the people attracted to it do not identify as evaluators. The approach and the thinking behind SIMNA is much more broad based; it's much more about people who come from very different disciplines and are trying to work out whether things are working well and how to track that over time. There's probably more of a money angle too, so the investment community: the affiliation initially with SIMNA was much more strongly with the social return on investment world. So there are a few different genesis differences coming through as part of that story. And there are always going to be, within the AES community as well, different groups that form with different ways of thinking about the world. So I don't see it as necessary that we come together. There are different mindsets, different cultures and identities, and that's healthy, and there's a different focus for where those groups are moving, which is why we don't always have mergers between similar types of organizations: there's a slightly different bent to how they approach the world. That's probably one question we could riff on for about an hour. And Jess has a question there. Yeah, Jess asks: are social impact assessment and social impact measurement two different things? I understand that social impact assessment came out of environmental impact assessment, so I suppose that's my first question. But also, I've done a bit of academic reading on social impact assessment, which I don't know whether it's different to measurement, and on evaluation theory, and there appears to be no overlap between the authors. Is anybody writing
papers on these related things? Because they clearly sometimes overlap, that's the message from this session, and yet the authors aren't talking to each other in any of the academic literature I've read. Great question. Actually, in preparation for this I did look up the academic literature, and I found a paper written after a joint conference in the States where they asked exactly how these two communities fit together. I'll see if I can find it while we're chatting and pop it in the chat. That would be great, thanks. It might be an opportunity for collaboration, maybe a joint paper. But even some of that is really your point: all of these fields have different genesis stories and different people, and different places where they want to do their thinking and take their approach. It didn't even occur to me, when I was talking to Laura, to look up something from the academic literature; it's just not my world, I do not go there. And I'm not saying that on behalf of everyone in the community; there are actually lots of people, including a few looking at me right now, who do go down that path and make sure they check what evidence has been published in the academic literature. But there might be a different mindset in those different worlds, based on those genesis stories, about where things get shared. And because these are such big industries, there's a lot of social impact assessment; where it's focused around environmental analysis, that's a huge industry with a huge focus on it. So you can imagine circles that have some level of overlap, but in the Venn diagram it's not too much of an overlap. I've just put that paper in the chat; it's quite interesting. Fantastic, thank you. And assessment and measurement: just different genesis stories, like I said. Okay, well, sorry, there will be techniques used in social impact assessment
that have parallels with everything we've discussed today, but social impact assessments have been much more aligned with environmental assessments overall, with the environmental side as the central part and a social overlay, rather than the social being central. So it's about where the starting point is of what we're trying to do. In the end, money has probably been invested in a thing, we need to work out whether the thing is working or not, and then we need to work out what we want to do differently in the future, or we're trying to work out what to do at the start. That's where all of this comes in, and it's a different type of critical thinking and analysis, and we've also got these different labels associated with it. I think you're right, it comes back to that accountability. Yeah. At the risk of dominating too much, I think that accountability point is really interesting, because one of the reasons I've gone into qualifications and academic reading on evaluation and social impact assessment is that I think the quality is not always there, because the professionalization of these sectors is relatively new. I see real value in having different cultures and different streams of something that is related, but are we also in danger of perhaps not reaching some level of professional standard, and not sharing our learnings, because it's more informal and more disjointed? Great, great points. Can I chip in, is that okay? Yeah. Just because I run, and have been doing for a decade, the social value and social return on investment accredited training, and part of the story we tell up front is that it took us 2,000 years to get to the stage where we've got our accounting systems for just counting money: we've got profit and loss statements, we've got balance sheets. So it has taken a long time to develop those professional standards, and with that particular aspect of social impact measurement, the SROI and social value world, the
thing is that, well, it's going to take us 2,000 years to get to those standards. So let's just remind ourselves: if we want to get accounting for value right, rather than just accounting for cash, it's going to take time. So I hear what you're saying, and I think it takes time to build up those identities and the professional pathways associated with them. Social impact measurement is more in the sense of: we're going to try to do something and learn now, so we'll take different approaches to get clearer about what's working, what's not, and what changes we need to make. But I hear what you're saying, and it would be nice to see the evolution happen faster. No, go ahead, Greg. I was going to say, on the other hand, and this doesn't always make me popular in the AES community, there are risks, which we all know about, of over-professionalizing: you end up in your own cul-de-sac, where you become self-referencing, inward looking, elitist and exclusionary, and defeat the very purpose a lot of us have talked about today, which is social change and improvement, and organizational improvement. If we become a bit precious about our little patch, and I'm not suggesting that's what you're saying, Jess, but that can be the risk: in attempting to deal with the sort of problem Simon described, getting clear about how and why we do things and building a common vocabulary, we can end up going too far the other way. Absolutely. And I think that's just one of the problems, or challenges, that I raised as well: because of this later development, or professionalization as you say, of these ways of thinking, or this characterization of them, there are challenges in performing an evaluation role or a social impact measurement role. As I mentioned, with evaluation units being established in government departments or in large
nonprofits, I'm finding that the evaluation units I work with are very embracing of evaluation, measurement and accountability, and they want that to drive performance. But a lot of the broader department or organization isn't, because the message about the value of evaluation hasn't yet spread, and until it does there are going to be some challenges in how we perform our work. Thank you. Thanks, great questions. Well, I'm conscious that we've reached our overtime of 15 minutes, and I can see from the few people left here that some have stayed much longer, but I think we do need to close the session. Thank you to everyone who stuck around and was interested in the webinar. We are running a webinar tomorrow as well, so if you