Hello all, and thank you very much for making the time to come to this session this evening. My name is Mandy Charman from the Centre for Excellence in Child and Family Welfare. I'm representing the Victorian Regional Committee for the AES tonight, and I wanted to tell you we're so happy to have you all here for this great session. Before we start the introductions, I'd like to acknowledge the traditional owners of the lands on which we each are tonight. For me, it's the Warrangere peoples and the lovely Macedon Ranges. We'd like to acknowledge Elders past, present and future, and any Aboriginal people in our group. Tonight we have a real treat: Mission Australia's presentation on their journey to establish a whole-of-organisation M&E framework. We have Rachel Christie with us, the National Manager, and Cherie Parith, Impact Measurement and Evaluation Manager. Rachel Christie leads Mission Australia's Centre for Evidence and Insights, which has led the development of the M&E approach to data and evidence. She has extensive humanitarian and international development experience in program design, monitoring and evaluation. Cherie Parith, the Impact Measurement and Evaluation Manager, is responsible for embedding impact measurement and evaluation across Mission Australia. She has a wide range of experience establishing organisational M&E systems and delivering evaluations in the not-for-profit and university sectors, and has tertiary qualifications in environmental science and community development and a master's in evaluation. Today's session is designed in two parts. Rachel is going to start us off with a discussion about establishing the foundations for M&E within Mission Australia, and then Cherie is going to take over and use Mission Australia's homelessness services as a specific example of how the system works in context.
So I think this will provide a really rich insight into how different levels of M&E systems work across a large organisation like Mission Australia. After each section, there will be 10 minutes of question and answer, so feel free to put those questions in the chat box and I'll put them to Rachel and Cherie at the appropriate time. Be aware that we are recording this session, so feel free to turn off your cameras if that is how you feel more comfortable. If you have any tech issues, please just put a message in the chat and I'll try to help you. Great, I'm going to hand over now to Rachel. Thank you very much, Rachel, for making time tonight.

Thank you. Thanks, Mandy. Let's just have a look at my screen. Can I have a thumbs up to see if you can see my slides? Great. Do you want to just pop it in slideshow? Oh, it kind of is, actually. Thanks, Mandy. Sorry, everyone. All right, let's do it like that. Does that work? Thanks all so much for coming; it's frankly really lovely to talk to you. Before I start, I just want to acknowledge the traditional owners of the land that I'm on. I'm on Gamakor land today and pay my respects to Elders past, present and emerging. As a way of acknowledging the land that you're on, I'd really encourage you to pop into the chat the land that you're dialling in from today, because wherever we are across Australia, we're on Aboriginal land. So thanks very much. Also, I'm a bit more of a have-a-chat kind of person, so I'm going to show you a few slides, but I'd really love for it to be a little bit interactive, and I'd really love to hear your thoughts and comments as we go through. Like Mandy said, there'll be time to chat after I've run through some of this, so please put your questions, comments or anything like that in the chat.
I'd really love to hear from you. So, like Mandy said, I'm going to be talking about this first section, called Building the Foundation for Organisational Monitoring, Evaluation and Learning Systems. Then we'll have questions, and then Cherie is going to talk through an example. Before I start, it's really important to give you some context for the size and scope of Mission Australia, so that when we're talking about the system, we've got some understanding of the size and the scale. Mission Australia works across the country. Today we operate 462 services in a range of areas: we work in rural areas, we work in remote areas, we're on Mornington Island, we're in inner-city Sydney. We're quite diverse. We've got a turnover of about $300 million a year, 60 funders, 140 contracts, and we serve in the vicinity of 150,000 to 170,000 people a year. So that's the size and the scale of the work that we do; it's not small. Before I go into some of the things we've done to lay the foundation for a monitoring, evaluation and learning system, I need to talk a little bit about the sector. I'm conscious that some of you are dialling in from maybe an international development context rather than a domestic one, or from outside the social services sector; you could be in the environmental sector, for example. So I just wanted to paint a picture of some of the sector changes we've seen in the last five years. Obviously, COVID has been a big driver in changing work practices, moving from desktops and offices to more mobile and flexible work situations.
We've also seen a real change over the last couple of years in funder contracts. The complexity of the data funders require has shot up astronomically. Funders are also increasingly asking for outcome indicators, not just output indicators. They're looking for data sets, automated processes, APIs and those kinds of technical elements, not just a spreadsheet or a nice pretty picture once a year. So we're seeing this movement from the bottom of the slide to the top, from siloed systems to more integrated systems where, with consent, a client's data can be shared between services to provide a more holistic client experience. This change has massive implications for the way services are provided, and Mission Australia has had to adapt to both meeting these requirements and providing an environment that actually champions outcomes for clients: not just a churn of services, but outcomes, impact and changing of systems. So in terms of the Mission Australia journey, this slide is an overview of where we've come from over the past number of years. Before I run into our approach, I'm going to give you six key enablers, or foundational pieces, for building a whole-of-organisation M&E system. It's probably important for me to clarify here that I don't really think about this as just M&E. For me, the way that data, evidence and client impact are understood is really about data learning at all levels of the organisation. It's about using data to drive decision-making, and that's a cultural approach that matters as much to a frontline caseworker in a social work context as to a financial manager or a property procurement manager.
So I think about this as so much bigger than M&E, even though we're going to talk about M&E today, because it's really important, and it's one of the end states we're looking for. From a senior management perspective, which is the level that I sit at, we're looking at a much more inclusive set of data points. We're looking not just at client outcomes, but also at process outcomes and performance. We're looking at progress against budget, at caseworker ratios, at understanding the qualifications in our workforce. So when I talk about data, that's what I'm thinking about. This kind of data-driven decision-making really incorporates strategy, data literacy in your staff, technology and automation; these are the things that pull together to create the foundations where M&E at a system level is possible. So I'm going to talk through six key enablers, but first, here is our journey. The first couple of years, on the left-hand side of the slide, were really about getting good data in. We were rolling out impact measurement at scale. When we say impact measurement, it's more of a brand than a technical definition: impact measurement is a brand we have rolled out across Mission Australia, and it's essentially a pre and post outcomes measurement tool, a quantitative survey. When you're talking about impact measurement for 462 distinctly different services, you can see why this 'data in' idea matters. We also had a legacy CRM that was old and clunky.
We started to incorporate a modern data warehouse, cloud-based technology, and we started to use Power BI to show data to management. We're in the Microsoft stack, so we use Power BI to integrate data sources together. The first piece was performance reporting: integrating financial, HR and quality metrics for services in one place, things that were traditionally spread across multiple different data systems. We started building those in 2019, building this emerging data literacy across the organisation. Things got really interesting in 2020 and 2021, when it wasn't just about data in; it was about what data matters. This was when we really moved into strategic alignment, really understanding the minimum data sets, really understanding theories of change, and when we started coming out with the monitoring, evaluation and learning plans that Cherie will walk you through. We also started measuring client satisfaction at scale, in addition to outcomes measurement: at the end of the day, did a client say they were satisfied with the service they received? That's another thing we've added in the last year or so. And then into the future, for us it's really about usefulness and driving impact. We are not an academic organisation; we are a service delivery organisation. So Cherie's and my team is really about driving this usefulness across the business and making sure that our services really are operating the best way possible. So let's talk about these six key enablers, the foundational things that I think have worked for Mission Australia. And I am really conscious that what may have worked for us on our journey may not work for you. We are of a certain size.
We have a certain structure and certain cultural elements that have enabled us to do this work in this way, and it might be different for you. One size does not fit all. So, the first key enabler: a really good, tight strategy. We have a really great strategy that was started maybe two years ago. It is focused and definitive, and it sets the tone for everything; you'll see how it weaves through all the other points. We needed an ambitious, outcomes-focused strategy that was really about what kind of change we want to see in the people we serve. Not about dollars, not about numbers or building numbers for growth's sake. That was really clear: it's really about impact here. This has been a really key driver in the work that we've done. We also established my group, which is called the Centre for Evidence and Insights, and I'll talk you through the structure as well, because I find it really interesting; every organisation structures itself a bit differently, and I'm going to show you how we structure ours. We set up this Centre for Evidence and Insights as a national group, using words like 'inspire curiosity' and 'leading to evidence and action'. This multidisciplinary centre is really about adding value to the organisation and to the service worker, and making sure that everything we produce is easy to understand, easy to action, and as automated and as simple as it can be. You can see here that it also ties back to one of our strategic focus areas: constantly improving quality across all of Mission Australia and multiplying impact by sharing our expertise with others.
So we really have this idea about usefulness, about inspiring curiosity, about this learning mindset that comes from our team. And it's really about the carrot; we really don't have any stick. We sit separately from the way services are held accountable: they are accountable to their own executive, and we're in our own executive. Almost all of our work is done through 'sales', really demonstrating value, being able to empathise with the services' experience, trying to help them make things simple. That provides a really amazing challenge for us, because our work is then validated by the number of people who use it, not because we think it's a good idea, or smart, or fun, or particularly technically whiz-bang. It's a really nice grounding factor for me, and I really like it. So this is the way it's structured. We've got 23 people in the group that I lead. Cherie is the Impact Measurement and Evaluation Manager, but we've also got a number of other teams under the Centre for Evidence and Insights. I think what's interesting here is that a lot of organisational structures actually split this down the middle. The reason we haven't done that, the reason we pulled it together, is because we want a seamless experience for a service, because services don't think of monitoring and evaluation as any different from the performance metrics their funder holds them accountable to. It's all the same idea: how do I do the best I can with what I've got? I know it seems a bit simple, but it's really not much more complicated than that.
We want this seamless experience for our service workers, who should be freed up to spend their time with the clients who come to them, rather than worried about spreadsheets, or about which of all these data fields they're collecting are the important ones. For us, it's really about making things easier, making things useful, and providing that seamless experience. You can see there how it's structured; I'm not going to read it out. Another thing we did that has been really important in setting ourselves up for organisation-wide M&E is, and it's pretty boring, straight classifications and nomenclature of services. What do we mean? This didn't exist, so we've done this in the last four years or so, really tightening up what we mean: a service classification system, tiers of services. You can see why this matters when we start talking about data and data aggregation, and Cherie will show you a really good example. So this is an example of homelessness services as a service category. Then we also move into service lines and flagship service models, creating flagship service models that have the theory of change in them, that have critical success factors, but that also have things like minimum viable models for finance, or for property, or for staffing qualifications. These have formed a really nice way of thinking about, classifying and understanding our 462 services across the country. So key enabler number three is classification and nomenclature. Four is minimum data sets, using this same model. And I think, importantly, the classification is also determined by the strategy.
In that classification, we chose foundational categories: things we wanted to be good at, things we really wanted to develop our capability in, things we thought were Mission Australia's point of difference, things we could really drive. These are our foundational service categories, and we build those foundations; that's where we want to prioritise effort, time and resources. We've built these into our data landscape and our client management systems as minimum data sets. So we don't just have a core minimum data set, which we have for every Mission Australia client; we also have minimum data sets aligned to the categories, which in turn align to the strategy and the areas where we want strategic outcomes over a five-year period. Another thing we did was this organisational approach to impact measurement, like I mentioned on an earlier slide, and you can see the gradual adoption of the impact measurement approach; this has been over many years. Some of the things that have worked for us here are really the scale and the end-to-end IT system. At the moment, a caseworker in our CRM can trigger an SMS to a client, and the client gets sent the survey. That data then comes back to the CRM, and it comes back to a dashboard, not just a management dashboard. One of the things we found was really important in getting this done, even before the M&E system is created, is that for a service it's really about that immediate usefulness.
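Rachel's layered idea of a core minimum data set, captured for every client, plus category-aligned extensions, can be sketched in code. This is purely illustrative: the field names and classes below are invented for the example and are not Mission Australia's actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class CoreRecord:
    """Core minimum data set: fields captured for every client,
    regardless of which service they use (illustrative fields only)."""
    client_id: str
    service_id: str
    date_of_birth: str   # ISO 8601
    consent_to_share: bool

@dataclass
class HomelessnessRecord(CoreRecord):
    """Category-specific minimum data set: extends the core set with
    fields that only matter for one service category."""
    housing_status_at_entry: str
    nights_in_crisis_accommodation: int

record = HomelessnessRecord(
    client_id="C-001",
    service_id="S-042",
    date_of_birth="1990-05-17",
    consent_to_share=True,
    housing_status_at_entry="rough sleeping",
    nights_in_crisis_accommodation=14,
)

# Any category record can still be aggregated on the core fields alone,
# which is what makes organisation-wide roll-ups possible.
core_fields = {k: v for k, v in asdict(record).items()
               if k in CoreRecord.__dataclass_fields__}
```

The design point is that every service category shares the same core, so aggregation across all 462 services only ever depends on the core fields.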
We're trying to build this capacity of understanding that a client can answer a survey, and a caseworker can look at it as they're developing their case plan and go, 'Oh, tell me about this health issue you've mentioned here', using that as a way of building curiosity: this data that I've just collected is actually right, and it's useful for me in the next immediate action I have to take. That was one of the things that was really helpful for us in embedding this. The other thing, number six, is the data architecture. I know I come from more of an M&E and ops background, but man, you've got to make friends with your IT people, and you've got to really understand the way the data is housed and the way the data flows, because this is the future. For anything at scale, you've got to understand how the data is structured in your warehouse. You can see here that the way we have approached it is using a data lake, which is in the cloud, and then we transform the data from the lake into what each funder needs. In the previous system, sorry to get technical, each different funder got their own transformation straight out of the client relationship management system. That means you have to configure or customise each element of the CRM to that particular funder. And when you're talking about one-, two- and three-year contracts, and when you're talking about 462 services, you end up with absolutely untenable customisation of a system that doesn't work anymore, because it's so slow and bespoke.
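The warehouse-side alternative Rachel describes, one canonical record shape coming out of the CRM, with each funder's requirements expressed as a small transform applied downstream, might look like this in miniature. The funder names and fields here are made up for illustration and are not any real funder's format.

```python
# One canonical record shape, as it would land in the warehouse / lake.
CANONICAL = {
    "client_id": "C-001",
    "entry_date": "2023-02-01",
    "exit_date": "2023-05-30",
    "outcome_score_entry": 4,
    "outcome_score_exit": 7,
}

def to_funder_a(rec):
    # Hypothetical funder A wants dates only, with its own field names.
    return {"id": rec["client_id"],
            "start": rec["entry_date"],
            "end": rec["exit_date"]}

def to_funder_b(rec):
    # Hypothetical funder B wants the outcome change, not the raw scores.
    return {"client": rec["client_id"],
            "outcome_change": rec["outcome_score_exit"] - rec["outcome_score_entry"]}

FUNDER_TRANSFORMS = {"funder_a": to_funder_a, "funder_b": to_funder_b}

def export(rec, funder):
    # Onboarding a new contract means registering one transform here,
    # rather than customising the CRM itself for each funder.
    return FUNDER_TRANSFORMS[funder](rec)
```

The CRM stays untouched when a contract starts or ends; only the transform registry changes, which is what makes one- and two-year funding cycles manageable.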
So really, what we're doing now is moving away from that model: defining those minimum data sets, pulling them into the data warehouse, and then transforming them in the warehouse to whatever the funder requirement is. This also lets us build a whole lot of Power BI reports, and I'll show you some; these are just snips of a couple we've got. We have dozens and dozens, probably close to 100, I reckon, all built by our team: things for caseworkers, things for clients, for program managers, things for executives. We do all of our reporting out of Power BI, and the thing I love about Power BI is that it democratises the data and allows us to be incredibly transparent. Unlike those old-school evaluations, where someone comes in and looks through your stuff, then goes away and writes a report, and a year later you find out whether or not you were any good, at any time our services know whether they're meeting their KPIs and how they're tracking. So for me, it's really important that that kind of power is decentralised, not down, but out to services, where the action is happening and where the real-life decisions are being made. So, in the last five minutes, I thought I'd share what I've learned. I'm going to talk for five minutes, so please put those questions in the chat; I'd love to talk with you. Number one is leadership buy-in and your strategy, and this is something you cannot fake. You can't just stamp 'this is in line with strategy' on every single request you make to your board or your CEO. You really need to think through the way your data and evidence collection systems are going to help the organisation achieve that strategy.
If you can really think that through, then, in my experience, you'll be supported; I've had nothing but enthusiastic support from our board and our executive in this work. I know that's not everyone's experience, but it's been amazing to work in an organisation that has been so ready to respond, to think differently, to add resources and to go there. The second thing is to build an ecosystem for sustainability and automation. That's about trying to make sure you're not spending time on stuff that doesn't matter; it's not that it doesn't matter, but on those boring, mundane, repetitive tasks that could be automated. That's the only way you're ever going to get your head above water to think strategically about impact and about changes to systems. Otherwise, you're going to be reporting to your 140 funders for your 462 services, and that's where all your time will be spent. You need to be able to automate that, and to onboard and offboard a service easily, because with one- and two-year funding cycles there are constant changes in funder requirements, and you need to be able to respond as easily and flexibly as possible. The other thing I'd say is to have a staged roadmap to avoid burnout. This is a marathon, not a sprint. I'm thinking three or five years ahead now, so I can see what I need to do this year, but also what I need to do three years out. In my experience, this also matters for your staff, especially the staff who want to sprint all the time. It's really important to me as a manager that my staff don't sprint all the time: that they sprint sometimes, but then take time for reflecting, learning, refining, and just getting those foundational elements right.
That's because you need to keep them, and you need to make sure they can still live and be good parents and spouses and all of those things. It's one of the things I've really learned over the last couple of years. I suppose that also applies to key-person dependency: we're not always reliant on one particular staff member who knows the data architecture or whatever; people play their roles, and can share and learn from each other, and it's not all held close. The other thing I've learned is to nurture a multidisciplinary mindset. My team is amazing, and one of the things I love most about them is that there are highly process-driven people in it and highly creative people in it, and one is not better than the other; we're best together. I'm always inspired and in awe of the way complementary skills, and the way different people approach problems, can actually create a much more seamless experience. But it's also multidisciplinary in evidence: I'm talking about quant, about external data sets, about different kinds of data sets, and also in attitude, the approach we take to the work we do. Actually, we just had a meeting today, and one of the guys said this morning, 'It just goes to show, no one's really an expert at anything.' I love that attitude: even though we build this expertise, and we are probably experts in our fields, and he could very well be an expert in his field, there's always something to learn, always a different approach, always a perspective you've never considered before. I think that's been a really important part of building the foundations for getting where we've gotten to.
So that's me. I've gone a bit over time, but I hope that's been helpful, and I'm ready for questions.

Hi, I'm just trying to get my camera back on, sorry. Yes, we have a lot of questions. That was so interesting, Rachel, so thank you for that. Our first question is from Sarah. I loved your comment that everyone's on their own journey and you can't have an identical path regardless of context and organisational size and so forth, and this question addresses that. You've said that Mission Australia, of course, is a large organisation with the skills, knowledge and funding to contribute to the resourcing of a large team, the Centre for Evidence and Insights, which not everyone has. Do you have any thoughts on how to scale this down to smaller organisations that may not have such resources available? What's going to give you the best bang for your buck, in terms of the critical elements that made the most difference, that might be more possible for a smaller organisation?

Well, I think the principles hold true whether you're working on a Microsoft Dynamics platform like we are, or on a spreadsheet. It's still about the structure: understanding what data is coming in, making sure your minimum data set is the right one, and democratising that data process, and its transparency, the best way you can. It doesn't matter if it's an automated dashboard, like for us, or emails. For me, it's more about the principle of the democratisation of the data, and making sure that whatever you collect, whatever someone has spent time collecting, assuming they take good care to make sure that data is accurate, is treated with care and regard and is used in the context of that person's life. It doesn't matter if you're on a spreadsheet or a system. I hope that answers the question, but it's more about an attitude, I think.
I'll take an attitude over a system any day.

Mandy, you're on mute. That leads really nicely into our next question, and this is from somebody who arrived a little bit late, and that's all good: I'd love to know how you approached the commencement of this journey for the team and the organisation. You're talking about a cultural shift there, about changing a language and a mindset, so the actual technical details maybe don't matter as much as mindset, but was there a lot of sussing out of the needs of the service areas and understanding their issues? Because you've got a diverse service offering here with lots of different players. What were your observations about that?

Well, I guess I didn't know that we would be here when we started this journey in 2019. We actually started with impact measurement, which at that time wasn't linked to a CRM; it was just straight out of a survey platform. This whole idea of pre and post surveys, outcome surveys for a client, that's it. We started with a pilot of 12 services, and we made a whole lot of mistakes, well, not really mistakes, but things that didn't serve us well in the end. Things like the way we had our data structured, which we needed to change a year and a half into the journey: in the data string, we didn't identify whether a survey was an entry, a midpoint or an exit. We were trying to correlate that back to the client's start date, and that just didn't work; we couldn't easily identify where people's entry point was, because a caseworker could do an entry survey any time within the first month of someone coming in. It just depended on the date of the referral and when they could first catch up with them and have a conversation.
So it actually didn't work the way we originally designed it, not having that data within the string itself. Once we got the idea that it was possible to collect outcomes data, pre and post, independently of any other client relationship management system, and that this was actually beneficial for the caseworker, we started using it straight away. Even before we could use it in tenders or in service evaluations, we were just focusing on the caseworker using it as soon as they collected it: tell us what changed for that client, that person. The other thing is that we didn't red-amber-green the outcomes measurement; I think we used yellow and purple, because the whole choice was that we're not judging a caseworker, or judging a client, because their exit score wasn't higher than their entry score. The language was: it's not a performance metric; what we want you to understand is why that's happened. Sometimes you work your butt off just to keep someone stable through some of the most horrific times of their life, and keeping them stable is a massive achievement; to devalue that by making it seem like keeping them stable was not an achievement seemed really yuck. So from a cultural perspective, from the very beginning, we were trying to focus on the learning as what's important.

Really interesting, because we've had a couple of questions exactly about that: the learning culture, and the shift from seeing evidence as judging performance, which by necessity infers sufficient or not sufficient, good or bad, to evidence to make decisions and just improve.
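Rachel's lesson about recording the survey stage explicitly on each record, instead of trying to infer entry or exit from dates, might look like this in miniature. All field names and values are invented for illustration.

```python
# Each survey record carries an explicit "stage" label, so entry/exit
# pairing never depends on guessing from referral or survey dates.
surveys = [
    {"client_id": "C-001", "stage": "entry", "date": "2023-02-10", "score": 4},
    {"client_id": "C-001", "stage": "exit",  "date": "2023-05-30", "score": 7},
    {"client_id": "C-002", "stage": "entry", "date": "2023-03-02", "score": 5},
]

def entry_exit_pairs(records):
    """Pair each client's entry and exit scores. Clients still in the
    service (no exit survey yet) are simply skipped, not guessed at."""
    by_client = {}
    for r in records:
        by_client.setdefault(r["client_id"], {})[r["stage"]] = r["score"]
    return {cid: (stages["entry"], stages["exit"])
            for cid, stages in by_client.items()
            if "entry" in stages and "exit" in stages}

pairs = entry_exit_pairs(surveys)
# C-001 has a complete entry/exit pair; C-002 has no exit survey yet.
```

Note the pairing reports change without ranking it, consistent with the "not red-amber-green" point: whether 4 to 7 counts as success is left to the conversation about why, not to the data structure.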
So on that score, I was wondering as well what you would say would be some critical factors that you used to build a common understanding and language, because people come to this stuff with really different language, skill, previous experience and buy-in. Let's talk about that. Would you like to make a comment? Yeah, and I suppose I would make the observation that it's really hard, because actually our funders don't take this view. We have to drive this from our internal culture, because our funders generally are actually putting us against each other. They are measuring us on our pre and post scores and seeing us as competitors against other providers doing the same thing. So to push this is really pushing uphill, pushing against the tide. To your question about language and data and capability, yes, that's really tricky, and I would say there is a spectrum for this across the organization. What would I say? My key points are: find your champions, find the people who get it, and promote the heck out of it. There will be some who will get it, who will love the data, who will immediately be energized by your approach if you start talking like this, and then just grab them and hold them and love them, because they are the ones that are going to pull the others along. And once you promote them, others can see that it works, it's not so scary, and then you will get this kind of user adoption that will flow. And I would say data literacy in the sector is a continual challenge for us, because most people are social workers, and because they wanna work with people, they don't wanna sit in front of a dashboard and look at the data.
But I think increasingly both graduates in these kinds of work sectors and organizations like ours are aware that data literacy is crucial. It's something we're still working on, but yeah. Great tips, Rachel, really good tips. We have had a range of questions, and this will probably be the last group, but it's really about the data system: how did you balance between different demands? You hinted at this with your comment about funders. There's accountability to funders and donors, which tends to be output measures, then service recipients, project management, quality standards, and then you've got your impact and outcome measurements. So we've had a range of questions about how you get those to work together, and also whether you could talk us through a little bit more of the data tools you use: survey tools, the qualitative tools you mentioned, that type of thing. It's a double-barrelled question, but if you could enlarge upon your data management strategies. There's a whole lot there. So I suppose, thinking about it, five years ago I would have said the funders drove our data strategy. Now I am 100% sure that we drive our data strategy, and the funders get that; that's the transformation. We don't just collect the minimum for a funder, because honestly, the funders are too varied. When you're talking about alcohol and drug services, they could be funded by any of the state health teams, by a PHN, or by a federal department, and all of those have different data sets. And if you continue to be pulled from pillar to post by your funders, you actually never learn. So we've taken this strategy because, one, we wanted to learn, and we wanted to leverage the understanding of what we do from Perth to Darwin to Batemans Bay.
And we wanted to understand those things together even though they're funded differently. And so that's why we created these classifications, the minimum data sets and the service line monitoring, evaluation and learning plans, which Cherie will talk through. What was the other bit of the question? Data tools: we use Alcoma for our surveys, which covers both quantitative and qualitative. We use an API to go into our data lake every day. We're in the Microsoft stack, so we have our community services in Microsoft Dynamics. We also use Salesforce for our employment services. We use Power BI as a visualization platform, which is also in the Microsoft stack. What else? We also use a variety of statistical tools, including SPSS and NVivo, for more bespoke analysis that we can't automate, or for testing and experimenting until we can automate something. Yeah, is that enough tools? That's a lot of tools. And I think you mentioned this in your presentation, the data lake. So you have an integrated data system, don't you? That you created? We didn't have that, like, five years ago, but yes, we're getting there. We don't have finance data in there just yet, but that's coming. The idea is that all our data would be in the lake. We have risk data in there and HR data in there and all kinds of different things, which you'll see when Cherie comes to present; that's how it really unlocks this kind of system-wide approach to monitoring and evaluation at a service level, and how that matters to clients. And, as you pointed out, to staff. It sounds to me that as soon as staff realize that the data can tell them something that matters to improve their client service, and that makes a bigger difference to clients, all of a sudden you might have some converts there. Yeah, yeah. One more question. Sorry, sorry, Cherie.
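As a rough illustration of the kind of daily ingest step described here, a survey platform's nested API payload has to be flattened into one row per answer before it lands in a lake table that a visualization tool like Power BI can model. The payload shape and every field name below are assumptions for the sketch, not the actual platform's schema.

```python
import json

# Illustrative nested payload, as a survey platform API might return it.
raw_payload = json.loads("""
{
  "responses": [
    {"client_id": "C001", "stage": "entry",
     "answers": {"housing": 3, "safety": 5}},
    {"client_id": "C002", "stage": "exit",
     "answers": {"housing": 8, "safety": 7}}
  ]
}
""")

def flatten(payload):
    """One flat row per (client, stage, domain): the grain a lake table
    and a downstream dashboard model typically want."""
    rows = []
    for resp in payload["responses"]:
        for domain, score in resp["answers"].items():
            rows.append({"client_id": resp["client_id"],
                         "stage": resp["stage"],
                         "domain": domain,
                         "score": score})
    return rows

rows = flatten(raw_payload)
print(len(rows))  # → 4
```

Running a step like this on a schedule is what turns a survey platform's export into queryable monitoring data, independent of which funder mandated which data set.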
We've got a number of questions about how the research focus, or more traditional notions of evaluation, sit alongside your outcome measurement and impact measurement approach. I'm wondering, do you do anything like that? How does that sit? Where do you decide what you focus on? Oh, look, Cherie's going to talk about that for 20 minutes. So here we go, okay. Yeah, that's a nice segue to Cherie. Okay, Cherie. Thank you everyone for such rich questions. Amazing job. We'll hand over to Cherie, welcome. Thanks, Mandy. Thanks everyone. Thanks for joining us on a Monday evening. So we thought now that you've heard a little bit about Mission Australia's journey over the last few years, it would be useful to share with you, I guess, some of the more current work that we've been doing. And this is all about us leveraging all those foundational pieces that Rach spoke about to really build these monitoring, evaluation and learning cycles across the organization. So first I'm going to share with you our organizational approach to monitoring, evaluation and learning; I'll call it MEL from now on. Then I'm going to show you an example of what it actually looks like, using the MEL cycle that we're establishing for our homelessness services. And then I'm going to wrap up, I guess, with some reflections around what's been working well and what are some of the challenges we're facing that we really want to address over the next 12 months. Before I get started, I just wanted to re-emphasize what Rach said and really say that this is still very much a work in progress. We are learning as we go. We are a really big, diverse organization, and what that means is that we need to be pragmatic, particularly when we're thinking about how we scale across the whole organization. So it's not one-size-fits-all.
However, what we're hoping to give you is a bit of a feel and a flavor for an approach that we're using within our organizational context. Rach, if you want to go to the next slide for me, please. So, our organizational approach to monitoring, evaluation and learning. As you just heard, the organization really has been on this journey. It's made a huge investment in getting really high quality data in. We're now at that really exciting point where we get to shift the focus from compliance, so getting the data in and supporting staff to understand the importance of data, to be more about how we actually build an organizational learning culture, which Rach mentioned before. One of the ways we're doing that is through establishing MEL cycles for all of our foundational service categories and our flagship service models. So if you remember the wheel that Rach had, the nice colorful one, that's what we're focusing on. And this is really all about us finding out what's working really well in our services, so we can do more of it, and finding out what may not be working so well, so we can address any barriers or challenges. Ultimately, we want to be using data and insights to ensure that we are delivering the best possible services that we can. What you can see on the slide is that our MEL approach has three separate yet really interrelated activities that we're working on. Although the content of each of the MELs we're developing is going to be different, so a homelessness MEL looks different to a youth AOD MEL, to a child and family MEL, the process, the way that we work with services and all of our outputs will be similar across all of the MEL cycles that we're in the process of establishing. I'll just talk you through each of the components briefly. So before I show you what it looks like, the first component is monitoring.
So as you probably all know, monitoring is all about that ongoing, routine collection of data in an organization that can be used primarily to manage a program. We have a ton of monitoring data at Mission Australia: data on program inputs, activities and outputs, as well as outcomes, which we measure through our really well established impact measurement program. So we have the monitoring data there. What we're looking to do is put three key initiatives in place, or three key outputs, I guess you could say. The first is headline data. We want all of our foundational service categories and our flagship service models to have that high level, strategic headline data that can be used for organizational learning and decision making. The second component is a more detailed monitoring and reporting framework. That essentially provides detail around what we're wanting to measure and monitor, looking at implementation and also outcomes; looking holistically at our services and what we want to measure and monitor across those suites of services that we've grouped together. The third component of the monitoring is a MEL toolkit. The toolkit has minimum datasets, which we've spoken a little bit about already this evening; that is essentially our existing data sources, so our client management system, the impact measurement program, client satisfaction, all the data sources that we have available to us. And also some optional tools as well. We're really looking to grow that suite of tools as these MEL cycles embed over the next few years. The second component of the MEL cycle is evaluation. Evaluation is a little bit different in that it's periodic, in-depth analysis, not only to monitor how a service is tracking, but also to make a judgment about merit, worth or value. When I talk about this in my organization, I like to say to people, we can have monitoring data, which we do, without having an evaluation.
However, we can't conduct an evaluation without having really good monitoring data in place. We're also never going to be able to evaluate every single one of our services, the 400 odd that we have. So what we're trying to do through the development of the monitoring and reporting frameworks is really lay the foundation for future high impact, high quality evaluations to occur within the organization. They could be conducted internally by my team, the impact measurement and evaluation team, but we can also partner with or commission external evaluation providers as well. The third component of our approach is learning. Learning is, for us, the most important component; however, it is probably the most commonly overlooked component of a MEL cycle. The success of these cycles, for us, is really going to be all about the extent to which the data and the insights are actually used to improve our services, improve our service delivery and support organizational learning. So yes, we want the reports, we want the funders to be happy, but really that's not going to be our success measure. The measure is really the extent to which it's used. Learning has two components. The first is how we share data and insights with services. And the second is all about how we can actually support people in the organization, so services, but different levels of the organization as well: what do they actually do with that data? How do they interpret it? How do they use it to improve their service delivery and their practice? So that's conceptually what these MEL cycles look like. Rachel, if you want to go to the next slide now, I'll talk you through each of those components, so monitoring, evaluation and learning, and give you an example of what it looks like for our homelessness flagship service model.
As Rach mentioned earlier, our service design and innovation team at Mission Australia is currently in the process of developing a national suite of flagship service models. These models bring together research, evidence, practitioner wisdom and lived experience, and really provide national best practice guidelines that can then be adapted for local contexts. The homelessness stable housing and support flagship service model is a big one. It actually covers around 66 of our different homelessness services, from case management to tenancy support to support coordination, and it also includes our crisis and transitional accommodation services as well. So it's a big flagship that we've started this process tackling, which has had its benefits and its challenges at the same time. The key component of the flagship service model is a theory of change. That's also designed by our service design and innovation team in collaboration with services and people who are accessing our services. I didn't want to show you the detailed theory of change, but I just really wanted to highlight some of the key aspects of it that we then use to inform the development of the MEL cycle. It outlines our inputs, so the minimum viable service model as well as additional resources that services could access for greater impact, which is really important. It outlines the core service activities: what are the core activities that we would expect all 66 of those homelessness services to be delivering? And then it also has critical success factors. This is really, really important, because these are our assumptions, I guess, about what the key change mechanisms are that will contribute to outcomes. So the critical success factors are what we're wanting to evidence. It's what we're wanting to test and explore to really get a deeper understanding of the why and how of our services, as well as the outcomes, which is obviously really, really important.
So we have a theory of change which sits within a flagship service model. And then what we're doing is developing the monitoring piece: a detailed monitoring and reporting framework, which you can see on the right hand side of the slide. The monitoring and reporting framework has a few key aspects. There are monitoring questions, so the questions that we wanna be asking and answering about our services. Then we've developed signs of progress and dimensions, and I'll show you what this looks like for our homelessness services. But the signs of progress, sorry, Rachel, if you just go back, sorry, the signs of progress are all about what are those key elements that we wanna be tracking for our services. And then what sits behind that is a detailed range of indicators and data sources that we will be reporting against in the framework. So I'll show you what it looks like. Rachel, you can go to the next one now. Thank you. So for our homelessness flagship service model, we've got essentially two key monitoring questions that we wanna be looking at. The first one is around implementation and the second is around outcomes, and this is for us to look holistically at our services. So we want to know things like: do we have adequate resources in place? Are we reaching the right target groups? Are we delivering those core service activities that were outlined in the theory of change? And also around those critical success factors: are they in place? Are they being implemented, and are they being implemented well? As well as, are there other critical success factors that we actually have not identified? The second question is all around outcomes.
So for this service model, we're really keen to explore whether our services are supporting people to transition into secure and stable housing, whether they have improved wellbeing, which is what we measure across all of our services at Mission Australia through the impact measurement program, whether they're more connected and resourced, and also whether they're thriving, so looking at things like employment and education connections. We're also embedding into the MEL frameworks, and I think I saw a few questions about this in the chat earlier, some higher level indicators around community and systems changes as well. So having a look at some population level data aligned to each of the foundational service categories. We haven't really done that in an integrated way before, so it's something that we're looking at embedding within these MEL frameworks. So we'd have the population level indicators and then go down to the outcomes and implementation of our services as well. Rach, if you want to go to the next slide. So we have the monitoring questions, and then we've developed those signs of progress which you can see on the slide. Essentially the signs of progress are just a user friendly way to unpack the theory of change a little bit, and these are the kinds of things that we would be tracking and measuring through the MEL cycle. Again, you can see at the top we have impact, which is all around that community level, population level impact that we'd be hoping to see. Signs one to three are all around the implementation of our services, and signs four to seven are all around the outcomes. So we have the signs of progress, and then beneath that, you can see in the table, we have all the different dimensions, as we call them, that we will be measuring across all of our homelessness services.
What sits behind it, which I won't show you because it's a detailed and just big document, is all the different indicators and data sources that we will be using to monitor the progress of each of these signs being delivered. If we go to the next slide: so that's the monitoring piece. In this cycle we also have an evaluation piece for our homelessness services, and this piece will be delivered over the next 12 to 18 months. We're pursuing some super exciting partnerships to do two projects. The first project we're wanting to progress is an independent evaluation of our homelessness services. This will utilize our existing data sources. We have a big homelessness data set that we have pulled together, historic data for years and years, since we've had all these client management systems in place, as well as our impact measurement program. They're gonna use some pretty cool research designs and some advanced analytics to attempt to isolate the impact of different homelessness service models for different cohorts. So in the impact evaluation, we're really unpacking what works, for whom and in what context, which is a little bit deeper than what we're doing in the monitoring piece. The second really exciting project that we are exploring is all around data linkage. So, again utilizing our existing data sources, all that monitoring data, is it possible for us to link that data to external data sources and external systems, to enable us to understand the longer term impact of our services? We know what happens with a client, their journey and their outcomes, while they're receiving a Mission Australia service. However, what we're really, really interested to unpack is what happens once they leave our services. Are those outcomes sustained? Are they being connected into other services? At the moment, it's like the missing piece of the puzzle for us.
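A minimal sketch of one privacy-preserving flavour of the data linkage idea described here: both parties derive the same salted hash from agreed identifying fields and match on the hash, so raw identifiers never leave either system. Everything below (the salt, field choices, records) is invented for illustration; real linkage projects of this kind typically run through an accredited integrating authority and use far more robust, often probabilistic, matching.

```python
import hashlib

SALT = "shared-secret-salt"  # agreed out of band between parties; illustrative only

def link_key(name: str, dob: str) -> str:
    """Normalise the identifying fields, then hash with a shared salt so
    both datasets produce the same opaque key for the same person."""
    normalised = f"{name.strip().lower()}|{dob}|{SALT}"
    return hashlib.sha256(normalised.encode()).hexdigest()

# Internal exit record and a hypothetical external system's record.
internal = [{"key": link_key("Alex Smith", "1990-01-01"), "exited": "2022-05-01"}]
external = [{"key": link_key("alex smith ", "1990-01-01"), "tenancy_sustained": True}]

# Join on the privacy-preserving key to see post-exit outcomes.
linked = [dict(a, **b) for a in internal for b in external if a["key"] == b["key"]]
print(len(linked))  # → 1
```

The normalisation step matters: without it, trivial differences in case or whitespace would produce different hashes and silently drop true matches, which is exactly why production linkage leans on more sophisticated matching.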
So we're really, really keen to progress that over the next 12 to 18 months as well. That's the evaluation. So we've got the monitoring frameworks, we have those deeper dive evaluation projects occurring, and then we have the learning. This is, for me, the most exciting piece. This is the piece that gets my team the most excited, and it's the most important piece at the moment: how do we share data and insights from all of those projects with services, and how do we support them to actually look at the data and identify things that they want to be changing, enhancing, growing? This is what we call in our organization, we're kind of branding it a bit at the moment, our evidence to action process. We've been piloting this with a few services over the last 12 months and the process is working really well. So we are now at that point where we're wanting to embed it and really scale it across the organization, and we're in the process of doing that with our homelessness services right now. In terms of sharing data with services, we have a few different ways, and Rachel already mentioned some of these. We are going to have a high level embedded dashboard within a SharePoint page using Power BI. That's that high level strategic data that people can go into the SharePoint and have ready access to. We're also going to be developing a more detailed MEL dashboard, which is primarily for program managers. So we went and consulted with managers and we said, have a look at this framework: what are the key indicators that you want to see to support you to monitor your service? So we've got a long list of indicators that they identified, and we're now building those into an interactive dashboard that they can access, integrating all of those different data sources that we're pulling together. We're also planning on doing some annual insights papers.
So those insights papers are really going to be deeper dive analytic papers, where we want to explore, I guess, topics that would be of strategic or operational interest. We're planning not only to share those within the organization but also to release them externally, so sharing with the sector what our data is telling us and doing that kind of deep dive analysis. And then finally, particularly in this first round of the cycle, we're going to be releasing national and state data packs, as well as local data packs for every single individual service. So we've got a lot for this first round, but it's really, really important for us to get that on the ground engagement and understanding: hey, this is the story your data is telling, this is what all of your data coming together looks like, let's unpack that and explore it further. So we're in the process of releasing local data packs, and then what we're going to be facilitating is evidence to action workshops. Those workshops are about us sharing the data; services love us to come along and talk to the data, but then we're also going to be facilitating some reflective practice activities as well. And that is all about supporting services to contextualize the data, to unpack: were there any surprises in the data? Is it the story they were expecting to see? What else is happening in the local context? We also want to get them to think about the flagship service model and their alignment to it. What we're hoping is they'll look at the data, have a look at the flagship service model, and then be able to identify some key areas or key actions that they want to take over the next 12 months to improve or enhance or align their service to that flagship service model, so those best practice guidelines that I spoke about before.
As a result of that process, sorry, Rach, back one slide, as a result of that process there'll be evidence to action plans. Those plans are really important. They will be monitored for 12 months. And when I say monitored, I mean, one, we'll monitor whether the services actually did the actions that they said they would, but we're also really keen on monitoring the impact of those actions, if that makes sense. So, for example, if the data is saying that clients in the service have low levels of wellbeing that aren't improving over time, and the service identifies some actions they want to take, we would be tracking that the actions occurred, but then also the impact of those actions. So what we'd be hoping to see over time is that trend in wellbeing improving. And then what we're hoping, through having those kinds of reflection, learning and action cycles happening all across the organization, is that we're gonna have some really beautiful case studies and best practice opportunities and practice sharing across different services, to be like, hey, we tried this, did that work for you? This is what we tried and it worked really well. So that's the kind of longer term vision of how it's gonna play out over the next 12 months for us. I'll wrap it up. So last one: what's working well and what are our key challenges? What's tracking really well? Rachel mentioned before, we do have such strong leadership buy-in and support across the organization. We recently did a review of our impact measurement program, and part of that was a staff survey. I think we had over 90% of staff agree that their manager had explained to them the importance of outcomes data collection and had supported them to actually collect the data in practice.
So really, really strong results in terms of leadership communication and support on the ground, which is a true testament to all of the work that has happened over the last few years to scale the impact measurement program. The next thing that's working really well for us, one of our executives actually said this in a meeting with me the other day: it's all about the L in the MEL. So this is the tune that we're singing; this is what's going out in all the comms. The organization is ready. We're at that point where we have focused on getting the data in, and people wanna see it now. They wanna see what it's saying. They're really excited. They wanna have a look at it. We're really at that point where this kind of learning culture can be grown. The final thing that's working really well for us is just this notion, I guess, of developing sustainable and, where possible, automated MEL systems and processes. It's actually giving us the opportunity to measure our collective impact, looking at our impact across similar service offerings and similar service types, which in the long run will be really, really powerful data and insights we're gonna be able to generate, as well as looking at what it means at a local level. In saying all of that's working well, we have challenges. We have some big challenges that we want to address over the next 12 months. The first is around culturally appropriate impact measurement. We're getting feedback on the ground, particularly in Aboriginal and Torres Strait Islander communities where we're offering services, that the current process of collecting surveys isn't culturally appropriate. Hand on heart, we see it, we own it, and we now really wanna address it. So we're just about to kick off a big project looking at how we can do that much, much better, which the whole team is really, really passionate and excited about.
The other thing that is an ongoing challenge is really how we embed data collection into practice. So there is messaging around compliance and getting the data in, funders want outcomes measurement, we need to get it done, where we really wanna revisit, I guess, the conversation around: actually, this can provide you with really useful information for your support planning, for your reviews, and for your exits. So we wanna do some new resources and some more capacity-building activities with staff around how they can actually integrate the data collection into their practice. And then the last thing we really wanna address this year: 99% of what I just talked to you about is quantitative data collection, utilizing existing data sources in the organisation. We're really keen, and we're conscious, that that is just one aspect of the story. We wanna be capturing those qualitative insights from staff and from clients. So we have been piloting in a few services collecting some stories of change using the Most Significant Change technique, and we're really looking at ways now that we can scale that and embed it within the MEL cycle, so collecting those stories and then talking about them through these kinds of annual reflection, learning and action cycles that we're putting in place. So that's only some of it. There are so many different things that we're constantly working on and thinking about: how can we best support services to really embed this in the organisation and to create that learning culture that we're keen to grow? I'll stop now. Mandy, you've turned your camera on. We've got questions? We do have questions. Thank you. That was a really great insight and provided a different angle on the whole story, I suppose. So it's a really good combination. Thank you both. We've had a number of different questions. I'm starting with this one because it's one I always grapple with as well.
And I'm hoping others will be in a similar place. I was telling our service manager about a sister organisation that does more sophisticated MEL than we do; we're small and basic by comparison with Mission Australia. I love the idea of having a data insights team with trained data analysts. The service manager's skeptical response was to ask, yeah, that's all well and good, but does it result in significantly better services for clients? I didn't know what to say. Can you give me a concrete example of how your MEL system has enhanced outcomes for clients? A tough one, but a rich one. Yeah, so a good example at the moment, and it's a work in progress as all of this is, is when we started to introduce the evidence to action process. So recently we did what we call a rapid program review of a group of services in New South Wales. We've gone out and we have communicated at leadership level and at management level, and we've run local workshops. And what we're getting out of there is some really beautiful local actions and activities that the services are working on. It's still a little bit too early to actually demonstrate the impact, but there are a range of different targets that we'll be measuring over the next 12 months. And when that cycle is complete, we're really looking to share that, internally and externally, as a best practice study. But I think any time services have an opportunity to access the data that they're collecting, it is always going to result in improved outcomes for clients and communities: being able to reflect on that data, and not just gather information from clients and people accessing our services and do nothing with it. The fact that we're proactively even just looking at the data surely is going to have an impact on outcomes for our clients and communities. Rach, do you have an example from impact measurement, probably from the scaling?
Like, there are lots and lots. Even back in the pre-system days, one example was, as Cherie mentioned, really simple and basic: we use the Personal Wellbeing Index, which is a validated tool out of Deakin University. We were looking at a mental health program, and it was just using the Wellbeing Index. For those of you not familiar with it, it's seven questions across life domains, each with a satisfaction rating out of ten: health, connections, all sorts of different things, but seven questions. And one thing that came out with this community mental health program was that the scores were a little lower on community connections. That was surprising to the workers, because they felt like they were quite embedded with the community. But it came out in discussion. Actually, they invited all the clients around for pizza at the back of the service, and what clients were saying was: we don't need so much connection with our communities anymore, because we've got you as a case manager. I can come to you and you provide me with the connection I need. And you'd see the lights come on for those case workers: oh no, that's bad. Their focus on providing good connection within the client-caseworker relationship was actually diminishing clients' relationships with their community. So they ended up putting in place a whole lot of practices to make sure those clients had much better connections with their community, with their families, with other things. And over the next 18 months, we saw those numbers significantly improve.
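[Editor's note: for readers unfamiliar with the index described above, here is a minimal sketch of how PWI-style scoring works. The domain names follow the published PWI manual (which uses a 0-10 scale and reports the mean as a percentage of scale maximum); the function and sample data are illustrative only, not Mission Australia's actual system.]

```python
# Illustrative sketch of Personal Wellbeing Index (PWI) scoring.
# Domain names follow the published PWI manual; this is not
# Mission Australia's actual implementation.

PWI_DOMAINS = [
    "standard_of_living",
    "health",
    "achieving_in_life",
    "personal_relationships",
    "safety",
    "community_connectedness",
    "future_security",
]

def pwi_score(ratings: dict) -> float:
    """Mean of the seven 0-10 domain ratings, rescaled to 0-100
    (the 'percentage of scale maximum' convention)."""
    missing = [d for d in PWI_DOMAINS if d not in ratings]
    if missing:
        raise ValueError(f"missing domains: {missing}")
    return sum(ratings[d] for d in PWI_DOMAINS) / len(PWI_DOMAINS) * 10

# A low community_connectedness rating, as in the story above, pulls
# the overall index down and flags a domain worth exploring with staff.
client = {
    "standard_of_living": 7, "health": 6, "achieving_in_life": 6,
    "personal_relationships": 7, "safety": 8,
    "community_connectedness": 3, "future_security": 6,
}
print(round(pwi_score(client), 1))  # 61.4
```

Looking at the per-domain ratings rather than only the aggregate score is what surfaced the community-connections pattern in the example.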
But that dawning realisation about the way they were conducting their service never would have come out unless we had actually facilitated that process with the services. It was essentially a mini evidence-to-action, even though we weren't calling it that back then. So that's one example, but I've probably got dozens of similar stories. That's so powerful, isn't it? Because we go in with our expectations, quantified or otherwise. That leads directly on to another question. Cherie, that was a great presentation, and you'll have to keep hearing that it was a great presentation. I was just wondering how, at this time, you're engaging consumers or clients in the process of developing theories of change, goals and objectives, and identifying actions or areas for inclusion. Great question, and something I'm super passionate about. I could see Rach nodding, because it's something I've talked about since I started in this role: it's super critical that we're talking to clients. So it happens at a few different levels. It happens through our service design and innovation team, who engage consumers through the development of the theories of change. It's their bread and butter; it's what they do, using a person-centred design approach. And then what we have been doing through the development of the monitoring and reporting frameworks is also consulting with clients and consumers around the outcome domains we're measuring and the language around those domains: are they the most important outcomes we should be measuring, and are there any gaps we've missed? Even with the homelessness MEL, we did make some pretty significant changes based on consumer and client feedback. Originally, health and wellbeing was one of the domains that sat underneath the design.
And they really picked up that we were missing the impact on mental health. Not just wellbeing, but actual mental health: that was really important to them, and it was an outcome they were achieving, or that our services were contributing towards. So that was added in, and we started thinking about, okay, how do we measure that across our services? I'm also really keen that any data that comes in and that we analyse also goes back out. We have an evaluation management procedure in our organisation, which is about how we manage evaluation projects, and the MELs fall under that. Part of it is that at the end of any formal evaluation, we feed back to our clients. That can be a summary, but it's also about workshopping: what does this data mean, as part of that evidence-to-action process? So we've documented that and embedded it within our procedure as just the way we work. I'm also keen to hear if anybody out there has other great examples of ways to do it; I'm quite passionate that we think about this at all stages of the process. It's a really important and growing area, isn't it, in terms of really understanding outcomes and sustainable impact? So I was really interested in your linkage project idea, and we had a number of questions about the use of big data sets, the ABS, sorry, I couldn't think of that for a minute, the ABS. And I'm wondering, in terms of your linkage project comment, what was the plan there? Because accessing data is also a sticking point for creating a meaningful and robust understanding of outcomes, given clients move on and you don't know whether the change is sustained or not. Yes. Sorry, do you want to take it, Rach? Yeah, that's the nugget. Yeah, like Cherie said, that's the gap.
So that idea, the proposal that we're actually looking to get some funding for, is a data linkage with MADIP. For anyone who's familiar with the MADIP data set, it's essentially a national data set that links Centrelink data and other sources, so it would be a MADIP data project that we're working with some academics to shape. I'm really excited about that. But to even be in a position to think about undertaking a linkage project, you need to have enough data over enough time, and you need to have clean, or clean enough, data. Cherie's like, she doesn't want to clean it, but I'm just excited that we've actually got things like data dictionaries. It's all those boring hygiene things that are just so important: really understanding the data definitions and the way the data is collected for all your fields, to make something like the MADIP project possible. So yeah, we're exploring, and my plan for this financial year is to get a couple of longitudinal studies up. That'd be amazing. Look, we're almost out of time, so I thought I'd give you one more wrapping-up question. For those organisations that are here, big or small, because you've given lots of opportunities that people could take up to varying degrees: what would you say are the three top benefits of taking an organisational systems point of view, rather than what so commonly happens, where this function is devolved out to research partners in an ad hoc way? Very commonly, when we talk about evidence, the assumption is that it's research we draw on academic consultants for. And you've said yourself that that's an avenue you're pursuing in specific ways.
But why do you think people should bother to take this journey, to embed these systems in their own organisations, rather than just use that other strategy? Is that a hard one? No, I'll go first; Rach, you can do the second one. For me, it's all about relationships. Everything we do in the social sector is about building relationships, and having an internal function and capability means we're able to get out on the ground on a day-to-day basis, hear how our services are being delivered and the challenges they're facing, and feed data back in. If you were just commissioning that, outsourcing it to somebody, those opportunities are few and far between. They can get the job done, but actually knowing our services and understanding our organisational context, building relationships, is really, really key. Over to you, Rach. So I'd have two; I don't know about three, so maybe together we've got three. One is fundamentally, I think, a values and ethical principle: that the data and the learning should sit with the client and with the case worker themselves, because that's where the change happens. The learning should happen close to the relationship, like Cherie said, but for me that's an ethical, principled, values kind of driver. The other thing I'd say is just straight-up value for money. I've seen a lot of money wasted in my time, and anyone who's been in the social services or international development system for more than five minutes will have seen wasted money. I'm frankly sick of it, and I don't want to perpetuate it anymore. So I want to make sure that what we do actually provides outcomes, to the best degree that those services possibly can. Yeah, that's it. A really powerful way to finish up. Thank you, both of you.
The comments have been amazing: great presentation, great teamwork, amazing work, and done with such humanity. Isn't that powerful? Such a privilege and so inspiring to listen to your story, and a lot of comments exactly like that. Thank you for coming, and thank you. We're in this together, so feel free to reach out to us on LinkedIn or email us; we're really happy to share and talk. I just think the more we can talk about this stuff together, the better we're going to be as a collective for outcomes. So wherever you're at, don't worry, just take it little bit by little bit. I have missed some of the questions, so I'm assuming that if people have other questions, you're happy to address them out of session. Sure, I can see them. Yeah, thank you. It's been a really generous, really wonderful presentation and insight into the different components of what is often very challenging work, frankly. You haven't hidden that; you've indicated it, and we know how hard it is. It's clearly been an amazing journey. So thank you very much, and thank you to everyone who made the time on what is a very rainy night here where I'm living. Hopefully we'll hear a lot more about the next step in your journey too, as you take this further consolidation step. Thank you. Thanks, everyone.