So thanks, everyone, for taking the time to come and learn and discuss openly with us this State of Evaluation piece of work. It's something the AES has invested in through the Relationships Committee, of which I've been chair for about the past two years; I'm now co-chair, with Robert Sale, of the Relationships Committee of the AES, which is a subcommittee of the board.

Our aim here, I guess the genesis, came from a feeling that the AES, and its members more broadly, would benefit from a report that takes a snapshot of where the evaluation sector is at in Australia. So several years ago we started planning this piece of work and thinking through how we could understand, and what we should look at in trying to understand, the state, the circumstances, the environment for evaluation in Australia.

So we might just start scrolling through the slides, Rob. We'll go through some background, we'll cover most of the main findings emerging from the report, and we're interested in facilitating some discussion with you as well about what's coming through: the things we've picked up, things that may have surprised you, but also things we could dig a bit deeper into in any future work of this nature.

A little bit of the genesis around what we were trying to study. The way this was all done was that we set up a State of Evaluation subgroup, because the Relationships Committee is quite a large grouping, and we wanted some people involved in a subgroup dedicated to this study, which we knew would take a reasonable amount of resources, effort, thinking and planning to get done. So we formed that subgroup and, through it, agreed terms of reference.

The broad objectives of the study, as you can see on this slide, were: to give us that report I mentioned about what is happening in the evaluation world, particularly in Australia; to understand the perception of evaluation among those who commission or use its outputs, so why evaluation is being done and what the barriers and enablers are; and then to use this output, the State of Evaluation report, to have conversations, to give the AES a tool, an insight into the field, that we can draw on when we go and have chats with other people. That includes the AES itself, it includes us as committees of the AES, and it includes you as evaluators, and anyone really who's interested in talking about evaluation and understanding a little more about the sector. This resource, we think, gives an overview of some of the things happening in the evaluation sector.

As the project evolved and we set the terms of reference, we needed a bit of extra legwork, so KPMG helped out through an engagement to lead some survey work, interviews, case studies, and data analysis of evaluation volumes and some of the other resources that currently exist about evaluation in Australia; KPMG is acknowledged for their contribution to the project. Ultimately, though, the work was led by the Relationships Committee, and myself, Jade and Rob took a central role in driving it.

I'll just note here a few areas that were out of scope. One is any judgment of the quality of the evaluations being produced. We didn't feel we would be able to do justice to that task.
It needs a really dedicated methodology for that type of work. We were more interested in understanding why evaluations were happening and why they may not be happening. We did decide to include a bit of an analysis of the types of methods and approaches being undertaken, but without a judgment of whether they were being applied with high quality or not, which is quite a different task. So we have looked at approaches, and we'll talk about that today.

The effectiveness and impact of the studies themselves (just go back one, yeah): we're not looking at how evaluations are used, whether the reports are not only high quality but actually being implemented, whether things are being picked up from them. That needs another layer of analysis that wasn't possible here.

We also, being the Australian Evaluation Society, really had to limit the work to practice in Australia. We know there are Australian people and Australian companies doing evaluation in other countries, and even government organizations and not-for-profits doing evaluation offshore; we're really looking here at practice within Australia.

And finally, we know evaluation is not the only evidence-related field. There's monitoring, there are outcomes frameworks, there are reporting processes, and measurement of those things was also something we thought about early on. But again, as a first study, we said let's bring it back to evaluation projects and evaluation products. So those were some scoping decisions we made fairly early on, which then flowed through into what we did and the way we did it.

There is room for questions as we go, so if there are relevant questions, Jade or Rob or myself will just jump in and we'll discuss them. I don't think we need to wait until later to talk about big things, so feel free to pose any questions early on; no issue with that.

So these are the four areas of focus. One is quite ambitious, and Rob will talk about the results, but we wanted to understand, if possible (and maybe we learn this from the results), whether it is possible to know how many evaluations are actually occurring across Australia annually. Could we somehow scale that? Are there data sources and ways to understand it better? How frequently and when are organizations evaluating? Obviously very different organizations do evaluation, from huge government agencies to very small not-for-profits, so how often, why and when are they doing it? And then, if it's possible to link to the volume of evaluations occurring, is it possible to understand how much is being spent on evaluation and come up with a quantification of the size of this field we're working in? Is there a dollar figure that tells you how big the evaluation field is in Australia? From that we can then talk to the importance, for us as evaluators, for the AES and for the sector, of doing high quality evaluation, and of building people and their practice for what is actually a fairly high dollar value industry in its own right. So that's what we're hoping to follow up on in that area.
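To make that quantification idea concrete, here is a minimal back-of-envelope sketch in Python. Both inputs are placeholder assumptions for illustration only, not figures from the report; the study did not arrive at numbers like these.

```python
# Back-of-envelope sizing of the evaluation field: purely illustrative.
# Both inputs are hypothetical placeholders, not findings from the study.
evaluations_per_year = 2_000        # assumed national count of evaluations
avg_cost_per_evaluation = 80_000    # assumed average spend per evaluation (AUD)

market_size = evaluations_per_year * avg_cost_per_evaluation
print(f"Estimated annual spend: ${market_size:,} AUD")  # $160,000,000 AUD
```

The point of the arithmetic is that a credible dollar figure needs both a defensible count and a defensible average cost, and as the volume discussion later in the session shows, the count is the hard part.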
In terms of the drivers of evaluation, this is really the barriers and enablers question. Why is evaluation happening? When is it happening? What decisions are being made about when it does or does not happen? At what point in the process are evaluators engaged, in other words, what's the timing of evaluation projects? And broadly, why are they commissioned in the first place? I think we all probably have our own understanding of that, and we can see the value of our field, but for others engaged in the study, why are evaluations being done or not done?

In terms of the evaluation approaches: what does the evidence tell us, based on the surveys and consultation, about the types of approaches being used in Australia? Do we have prevalent forms of evaluation that are being used more often than not? And what share of evaluations are being done within organizations by internal evaluators, externally by consultant-type evaluators, or through blended models of the two? We're interested in that as well.

And finally, looking forward: based on where we are now, what sort of trends should we be preparing for as evaluators? What capabilities do organizations need to think about when they're preparing for evaluation? And broadly, where is practice heading in Australia, if that's possible to predict from the above questions? So that was what we wanted to look at through this study, to get a sense of where things are headed, and then to use that to drive those discussions about the field with the AES: how can we improve what we do and better serve members and others involved in evaluation? So, next slide.

All right. So, the data collection. I think this is over to Jade? Yes. Okay.

So thanks, Charlie, for setting the scene. I've got the task of sharing the methods that we used. I'm not sure if any of you have had a chance to look at the report, but these were the methods the KPMG project team used to interrogate those four questions. The first was focused on the volume question, and there was an AusTender data analysis; Rob will talk you through those results, but suffice to say that was probably the hardest bit to really nail, and there's a lot more that would need to be done to really understand the volume of evaluation. The desktop analysis also included academic research, government strategies and frameworks (so, what else is happening in the discussion around evaluation) and past AES papers. Some of the things that came up during the study, for example conversations about the professionalization of evaluation, meant drawing on the repository the AES has of reports previously done on those kinds of issues.

The key method, though, was really the survey of AES members, which we'll go into on the next slide, supplemented by stakeholder consultations and a couple of discussion forums. The project team had a challenging session, I think, with the AES Fellows, particularly around interrogating some of that volume data, and then we had a discussion at the AES conference in Adelaide in 2022 asking, you know, what do these findings really mean, and what might be underlying them; those insights have been put into the final report as well. So, on to the next slide, about the survey.
So the survey: the key limitation is that it was only sent to AES members. This was a decision made to contain the scope of the study, but also recognizing it was just a first step and what was feasible to implement. It went to all AES members and got a 15% response rate, which isn't too shabby, but could be improved on if we did it again. You'll see there's a strong representation of private sector and consultancy respondents, but if you add up all the government respondents too, that's a reasonable response from government agencies. We know, though, that to get a better representation of what's happening from the perspective of government agencies, we'd need to go beyond that AES member list.

The survey was supplemented with stakeholder consultations. We were aiming for up to 20 stakeholder interviews; in the end, 14 organizations of various types and sizes participated. This was really to dig into the questions a little more: how are organizations structuring their evaluation capability? Is it a centralized unit? Are they planning to grow that unit? How are they managing internal versus external evaluations? What are the trends in different sectors? So this supplemented the survey data.

And just to be clear, we understand the study has strong limitations, and it's really a jumping-off point for conversations and for thinking about what we as an AES want to know more about. We know the limitation of only surveying AES members means it doesn't capture the full picture of evaluation in Australia. The volume question is the hard one. And we need to recognize that while the project did have a budget, it was a pretty small-budget study for trying to talk about what's happening in evaluation across Australia. I think it's over to Rob now.

Yes, thank you, Jade. So, starting with this first question about volumes, I'll jump you to what I think is really the key piece of data we have to share here. As Jade mentioned, we looked at data from public sources such as AusTender to get a sense of the volume of activity being undertaken. The focus was on one financial year, 2021-22. And I imagine your reaction is probably similar to ours and that of others who have looked at this: the numbers are quite a lot lower than we would have expected based on our own practice and on anecdotal evidence. I know, for example, working in the NT, that I could count specific evaluations that would add up to a lot more than those shown there, and I imagine your reactions are probably similar. This really reflects limitations in the data that is publicly available: for example, evaluations not being posted on some of those forums, or not being posted with titles that clearly identify them as evaluations, potentially because people were thinking of them in different ways. I think what we really could draw from this examination of evaluation volumes was the diversity of the evaluations being undertaken. Even with the limited data we did have, we could see a lot of diversity in terms of sectors, departments and levels of government.
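To give a feel for what this kind of tender scan involves, here is a minimal sketch in Python. The CSV file name and the column names ("Title", "Agency", "Value") are assumptions for illustration, not the project's actual pipeline, but the keyword-matching step is also where the undercounting creeps in.

```python
import pandas as pd

# Load a hypothetical CSV export of AusTender contract notices for FY2021-22.
contracts = pd.read_csv("austender_contract_notices_2021_22.csv")

# Flag contracts whose titles mention evaluation-like terms. Evaluations
# procured through panels, or titled as "review" or "assessment" without
# the keyword, will simply be missed, hence the low counts.
pattern = r"evaluat"  # matches "evaluation", "evaluate", "evaluating", ...
is_eval = contracts["Title"].str.contains(pattern, case=False, na=False)

evals = contracts[is_eval]
print(f"Matched contracts: {len(evals)}")
print(evals.groupby("Agency")["Value"].agg(["count", "sum"]))
```

Widening the pattern picks up more true evaluations but also more noise, which is one reason a keyword scan alone cannot settle the volume question.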
And I think, as Charlie hinted at earlier, a key finding here, in addition to that diversity, is potentially about the value of trying to make this information more transparent, or doing a deeper dive to understand more about this question. Before we move on, it would actually be interesting to check other people's reactions to these numbers. I know that to me the states and territories look very low. What about the Commonwealth figures? How do people react to those?

I think it also says something about the diversity of government tendering websites, which is reflected in the data. You may look at one website per state, but procurement actually goes through many avenues; that's the challenge of capturing all of it.

Yeah, absolutely. And I think Martina has hit the nail on the head there in saying that most government evaluation procurement goes through panels, so it's not necessarily published. Shout-out to Cristobal in the NT, too, who has picked up that, yes, there are far more than four evaluations happening in the NT. Sorry, Martina, were you about to say something?

Yeah, just that any consulting or strategy work under a particular dollar threshold typically goes to a panel rather than open tender. It's less work and easier for the people doing the procurement. And I would imagine the majority of evaluation would be under that threshold.

Yeah, absolutely. And in my experience, if we were to try to repeat this exercise, panel managers don't necessarily hold this data themselves about what projects are being undertaken. Do people have any thoughts or suggestions about the best way to get a sense of evaluation volumes?

Isn't there reporting against those pre-qualified panels as part of government?

In some cases there may be, but certainly not in all cases, at least not public reporting.

I think it's an acknowledged cultural problem, in some ways, with the evaluation sector: it's a sort of secret world where we provide targeted reports to specific clients about their issues or their future funding submissions, and evaluation is probably not being used to its full potential as a learning and sharing process. So I think the hidden data around which evaluations are happening may be because those commissioning the evaluation in the first place are protecting reputations. They don't want the full review to go public, because it could lead to criticism from the media and from other political interests who may use it against them. So the cultural perception of evaluation may be that it's almost a secret world in a lot of cases. Even if you could find out that a study was done, can you find the actual study? Out of the contracts that are publicly available, it would be an interesting follow-on to know whether it's even possible to track down those reports, and I'm guessing no in most cases. So I just think there's a little concern for the sector that we should be aware of. There's a lot of work going on, more than what's shown here, and it really should be across all sectors of public life, including the non-profit and government sectors, which is traditionally where evaluation sits, and academia as well. But our work is probably not being shared as widely as it could be.
We all think of ourselves as doing work for the public good, work that makes things better, but in a very targeted way, rather than through sharing information that may stand the test of time and be used down the track. So evaluation is becoming a point-in-time review process, maybe, which caps its impact in some senses. Yeah.

Can I just make a point, please? David Bruce here. I would have thought the easiest way of finding out about volume is to ask members of the AES what they're doing, to provide a census of their activities from one financial year to another. We're often not allowed to distribute our reports, but we are allowed, in the proposals we prepare, to indicate what jobs we've been working on, because that's part of the process of credentialing ourselves in the marketplace. So I would have thought the easiest way of establishing the volume of evaluations is to ask evaluators: what are you doing, who did you do it for, and what's the topic? And if you want to, you could also ask what the value was. That would at least give a better database than the one you've got at present.

Yeah, I think there are some great suggestions here in the comments about different ways we could look into this. And I believe in the survey we ran, we did ask, for example, how many evaluations respondents had worked on, so that is also interesting to think about. We'll certainly hoover up all these suggestions to inform our thinking for future studies on these questions. Any other comments or questions on the topic of evaluation volumes before we jump to the next set of questions? No? Okay, so I'll hand back over to Charlie to talk through evaluation drivers.

Yep. All right, thanks, Rob. Look, I don't know if many of you are aware, but there was another report released today from the Committee for Economic Development of Australia (CEDA); I'm just going to put something in the chat. There we go, and Rick's just added the description in the comments, perfectly timed. I've shared the link to a report that is really all about evaluation and the need for better and more evaluation, more in line with the amount of social services work being done to improve outcomes for disadvantaged Australians. So it's worth a look at that link. Their attempt at working out volumes is to look at Auditor-General reports, at how evaluation frameworks fit together, and at Productivity Commission studies. So they make a small attempt at volume and at the presence of evaluation plans and evaluation work. Just an aside, but there's lots of good information in that report, which runs somewhat parallel to the State of Evaluation and probably covers some other areas we'll touch on.

So, the drivers piece. This is really about why evaluations are being done. The key reasons are around assessing impact, informing continuous improvement, ensuring accountability and supporting funding decisions; probably reasons we're all fairly familiar with. Understanding impact is all about outcomes; informing continuous improvement is probably more about the implementation side of things; and then accountability.
That's the accountability-for-funding-use question, maybe feeding into budget cycles, for example, along with the funding decisions, recommendations and ways-forward type of thinking that goes into evaluations.

On the study participants: we don't have access to the full bank of interviews done by KPMG, but we can reflect what we heard back. The participants they engaged with reported pressure and scrutiny on organizations to meet community needs: to account for the funding spent, talk about the impact of the various programs delivered, and demonstrate the effectiveness of the use of resources, particularly since the start of the COVID-19 pandemic and obviously with budget pressures and constraints. So that's the accountability side of things coming through again.

The enablers of evaluation practice included access to data and good data-analytic capabilities, but also a strong organizational culture where evaluation is normalized, expected, supported and funded. Some of the barriers to evaluation arise where there's been a lack of funding allocated or committed as the program, project or systemic area has been rolled out over time. Sometimes there are limitations in the capability of people to undertake evaluation: if an organization hasn't set funding aside to hire an evaluator, it will be looking internally, and then there's a question of whether program staff feel they have the capability and knowledge to lead an evaluation project. Data availability is another: an evaluation may not happen where the data is known to be challenging, unclear or not comprehensive, along with the analytical skills that go with the evaluation job. This can all impact the timing, design and use of evaluation findings. So, in a nutshell, those are the key findings about the drivers of evaluation.

Maybe next slide, Rob, which shares a little data from the survey around the main motivations for evaluation. As you can see, those in blue, on the left of each pair of bars, are the non-consultant organizations; there are 121 respondents, everyone other than the private sector consultants; and the orangey-red colour is the consultant responses. For the non-consultant respondents, the major reason for evaluating is to understand impact, so outcome-oriented evaluation studies. For the consultant evaluators, the main reason they feel they're doing evaluation is to improve implementation. So the two sides, implementation and impact, come out first and second there. Then there are a few utility elements of evaluation: to seek funding renewal, to enhance accountability. Quite low down on the scale, particularly for consultants, only 13% prioritize building greater knowledge as the reason they're doing evaluation. Some non-consultant organizations do see their role as building greater knowledge, but maybe that's something the field could look to improve: that we drive improved knowledge and practices through evaluation. Meeting legislative requirements is sometimes a driver for evaluation, where it's been planned and legislated. And to give stakeholders a voice is another area with a relatively low response rate as the major reason for evaluating.
So in Australia we could probably conclude from that that evaluation is not necessarily being used to represent more marginalized or minority groups: more done for the big end of town, on the whole, than the democratic or empowerment evaluation types of processes. To promote transparency also drew a very small response rate; not all evaluations, as we discussed, are being publicized, so that's not a huge driver. To identify innovative solutions, sometimes; and to assess whether a program is needed, which probably links a bit with seeking funding renewal. So those are the main drivers: a few interesting results there in what's being prioritized versus what's not.

The next slide is the barriers, a similar question in the survey: what are the major barriers to evaluation? Again, you could select up to three. The first major barrier is a lack of funding allocation: if evaluation is not planned and budgeted, it tends to be less likely to happen. So I guess the lesson is that if we systematize evaluation and make sure it's planned, it's more likely to happen; it does need to be built in from the start. There may also be, along with shortage of time, capability gaps, where it's felt that the right people aren't available to lead an evaluation. In some organizations there's just a limited culture of evaluation, so it may not happen for that reason. Shortage of time to complete an evaluation is typically a barrier run into when there's been a lack of planning for the evaluation from the start, but where there's considered to be not enough time, that features as well, along with a lack of desire from leadership. So again, it's a whole-of-organization cultural positioning of evaluation: it needs to be accepted by the organization to happen. There was a small response rate on poor past evaluations, so some evidence of a perception that evaluations may not be adding value, and a very small number said there's a challenge finding an appropriate evaluator.

Those things are spoken to a little on the right there, from the consultation responses: capability issues around staff turnover, lack of formal qualifications or training, or maybe comfort with evaluation; and a feeling that there needs to be some baseline evaluation capability across all staff in an organization for evaluation to be embedded in the data collection processes of programs. It will come through a little more in the trends, but the data question just gets increasingly central to what we do: there are myriad data sources available to us as evaluators in many cases, but some external evaluators don't have access to that data, and we then need to learn how to analyze and assess results against that data as well. So there's a whole range of data questions for us. And at the bottom there's a dot point: additional barriers include an absence of program theory or logic, which probably means a lack of objectives specified up front and no clear starting point for the evaluation; challenges of information sharing, for example across agencies or clients; and issues interpreting and presenting conflicting findings, which is something that can arise case by case.
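A quick aside on how a "select up to three" question like this is usually tallied: each barrier's figure is the share of respondents who ticked it, so the bars can legitimately sum to well over 100%. A minimal sketch with made-up responses, not the survey's actual data:

```python
import pandas as pd

# Hypothetical multi-select responses: each respondent ticks up to three barriers.
responses = [
    ["lack of funding", "shortage of time"],
    ["lack of funding", "limited evaluation culture", "capability gaps"],
    ["shortage of time"],
    ["lack of funding", "lack of leadership desire"],
]

# Share of respondents selecting each barrier (not share of all selections),
# which is why the percentages need not sum to 100%.
counts = pd.Series([b for r in responses for b in r]).value_counts()
shares = (counts / len(responses) * 100).round(1)
print(shares)  # lack of funding 75.0, shortage of time 50.0, ...
```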
So really, that's broadly it on evaluation enablers and barriers. Any questions from that content area?

Charlie, there's a comment from Rick in the chat that says this data doesn't really support the idea that consultants are the preferred evaluators for accountability. I wonder, if we do a survey like this next time, whether we need to define what we mean by some of these terms, because perhaps helping people improve might be one way of framing accountability. Or it might be that the original purpose of the evaluation you were commissioned to do was accountability and measuring impact, but the implementation slowed down and you're not yet ready to comment that way, so all you can do as an evaluator is help improve processes and implementation. But Rick, I'm not sure if you wanted to add anything more to that comment; you might still be typing. Yeah. Any other queries or questions about the barriers and enablers, or shall we jump to approaches? Maybe Jade.

So, I'll preface this one again with: gee, it's hard to get agreement on the categories for evaluation approaches, because everyone has slightly different ways of talking about these things, so the list we put in the survey to AES members is by no means the definitive list. But I think what came through strongly is that there's not a lot of RCTs being done by AES members, which is interesting when you hear the dialogue around the call for more RCTs in evaluation. The results of our study suggest that the way AES members and evaluators are thinking about evaluation is to design evaluations based on the questions to be answered, the context, and the stage of development of a program. So we're using methodological appropriateness as the gold standard, while others might be thinking RCTs are the gold standard; so how do we have those conversations? This might just reinforce people's entrenched views about which evaluation approaches make sense, but it could also be used to say, hey, more RCTs need to be done as well.

The things that came through the qualitative data were more demand for rapid evaluations, particularly during the pandemic, which I think none of you will be surprised by; you'll have seen some of those coming through the AES sessions as well. And then there was that move from some of the government agencies we were talking to towards focusing on internal capacity building, which might tie to some of the earlier comments in the study that you need baseline capacity for evaluation across the organization for evaluation to be useful and used.

So, if Rob goes to the next slide: this shows the results from the survey on the approaches AES members said they were using. Theory-based approaches came first, and I think we had a description there like program logic or theories of change, so probably not surprising to anyone that that came out strongest, given that some agencies have a templated approach requiring programs to have a logic model or theory of change to start off with. Maybe a bit surprising was the emphasis on co-design; there's often that need to collaborate around designing the evaluation. Developmental evaluation and systems evaluation: maybe surprising how high those were.
And then you can see it gets less and less towards experimental designs down the bottom. Anyone got any questions about these categories or these findings?

Julie just had a question in the chat about whether we saw RCTs as out of scope, like monitoring and so on; just to clarify, they were in scope, as part of the experimental design category. There was a bit of discussion about this at the conference, Jade, about why experimental designs and RCTs came out so low in the response rate here. I think our conclusion was largely that RCTs are really not happening as often in Australia as in some other places or jurisdictions. So yes, they were included; the data suggests that among AES members, who are often doing or involved in evaluation, RCTs really aren't happening a whole lot right now, though there might be some happening outside the AES membership. Some of the discussion at the conference was also around the ethics of setting up an RCT, whether it's the right thing to be doing in certain contexts, and that being difficult in a number of the program areas we work in.

Was there any discussion on the typology? I actually find the list here quite good.

Actually, I don't think anyone's discussed or critiqued the list.

It's quite controversial sometimes, so I quite like it.

We really brainstormed this pretty hard in the survey design process, so we went through multiple iterations of this list, and this is where we arrived. I guess it probably confirms what we all suspected: that theory-based methods, the "were the objectives achieved" kind, are really the predominant form of evaluation in Australia, where you work through implementation and look for some sort of outcomes. And yeah, co-design was really much higher than many of us expected, but that's obviously the work people are getting involved in now, which tells me that evaluation sits somewhere between the end phase of an intervention or program and designing better programs in the first place; so you might pick up some of that work around evaluation frameworks and evaluation planning and all of those things that feed into the design process.

Yeah, and Charlie, Martina has just put in the chat that co-design is probably being interpreted in multiple ways. I think it is one of those things people interpret in multiple ways; often it's probably just collaboratively designing the evaluation in general. And Julie, I think that discussion about RCTs is definitely back on the table, with debates emerging now. Maybe one of the next steps in thinking about different evaluations is how you can combine these different approaches: things that on the surface seem difficult to combine can produce different answers, or speak to different aspects of the key evaluation questions.

And I think Lucy's just asked if there are any links between the purposes and the methods. The way the survey was set up, because you could select multiple methods and purposes and they weren't aligned to singular studies, it wasn't really possible to do that analysis of which approaches are used for which purposes. But in the qualitative data there was that explanation of: basically, we're using the approach that's designed to best answer the questions we have and the purpose we have.
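To make that limitation concrete: a purposes-by-approaches cross-tab needs study-level records, one row per evaluation, rather than respondent-level multi-selects where ticks aren't tied to particular studies. A minimal sketch of what that would look like; the records below are hypothetical, not survey data:

```python
import pandas as pd

# Hypothetical study-level records: one row per evaluation, not per respondent.
studies = pd.DataFrame({
    "purpose":  ["understand impact", "improve implementation",
                 "understand impact", "enhance accountability"],
    "approach": ["theory-based", "co-design",
                 "experimental", "theory-based"],
})

# Purpose x approach cross-tab, which respondent-level multi-select answers
# can't support because selections aren't linked to specific evaluations.
print(pd.crosstab(studies["purpose"], studies["approach"]))
```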
Emily. Sorry. Sorry, no, you go. I was just going to point out that question from Emily about whether there are any differences between internal and external evaluators' views. I'm not sure we broke it down that way; I can go and have a look at the report while we jump to the next section. I'm just looking at that now, and I don't think we did get that breakdown in the survey data. So the short answer, I think, is no, but if anyone else finds it, let's share it.

Okay, so let's jump to evaluation trends. There are four main trends that came through in the survey results and the stakeholder consultations, which we've summarized here.

One trend was around evaluation commissioning models, in particular building internal capability and capacity in evaluation, which is something many organizations are seeking to do. What that looks like can vary quite a lot depending on the organization's current level of capability: it may mean establishing an internal evaluation unit, or, at the other end of the spectrum, simply requiring capability building as part of externally commissioned evaluations. For internal evaluation units there are multiple models, ranging from centralized units that conduct evaluations themselves through to more decentralized units that support program areas to fulfill their evaluation responsibilities. We've highlighted both opportunities and challenges relating to these types of units. On the opportunity side, we heard that these units can help identify and address some of those barriers to evaluation that Charlie discussed earlier, for example around access to data or lack of evaluation capability. The challenges were that building internal capability is a long process that can take years, and it obviously requires funding, which can be an obstacle.

Another trend, quite related to commissioning models, was the recognition of workforce capability as something that impacts all aspects of evaluation: the ongoing need to grow evaluation capability at the level of individuals, teams and whole organizations, and to build a culture where evidence and evaluation are highly valued. Some of the consultations touched on the potential for evaluation to be professionalized, or for training to be credentialed so that evaluators can demonstrate their skills and experience. As we noted in the report, that is a topic the AES has examined before, such as in a 2017 ANZSOG study looking at pathways to evaluation professionalization.

On the topic of data: this came through strongly, as Charlie mentioned, as a source of potential barriers to evaluation.
For example, around accessing or linking data sets. But on the flip side, there was also excitement about the potential value of new digital technologies, such as artificial intelligence, in enabling evaluators to use large data sets in new and innovative ways and to do things like real-time monitoring of programs.

And a fourth trend that came through very strongly was around Indigenous evaluation approaches: increasing recognition of the importance of ensuring cultural safety, and of increasing the ability of evaluators to work in a culturally safe way. Stakeholders said this was a priority for their organizations. They noted a trend towards co-designing evaluations with Indigenous peoples and communities, although, as Jade noted earlier, it would be interesting to unpack what specifically is meant by co-design. Something else that was mentioned, and I think is worth a plug, is the AES cultural safety framework, which is very relevant in this regard and really worth reading if you haven't already.

Were there any comments or questions? Did these trends resonate with people? Was there anything you expected to see here that wasn't covered?

I think when we were discussing it, we were surprised by that emphasis on capability and whether there's a need for professionalization, knowing we've had some of those in-depth conversations within the AES before and not really hearing them lately. So maybe this is honing in on comments from a small number of stakeholders, and maybe it's not as big a concern as we suggest.

So I might jump to the discussion, where we can cover all of these topics in a bit more detail. We've come up with a set of questions we'd be interested to hear your views on. Looking across all of the findings we've discussed in this session: what most resonated with you? What most surprised you? What implications do you take away for your practice as an evaluator? What are the implications for the AES or the sector as a whole? And finally, what would you like to see explored in future State of Evaluation studies?

So here's what we're going to do; we've got a bit over half an hour to go. I believe we're going to break into breakout groups and take, say, 10 to 15 minutes to talk through these questions, then come back and chat about them as a group to hear the key themes from those conversations. Greg, was I correct in saying we're going into breakout groups? You're on mute.

Just setting them up now. We've got five groups of four to five participants each, and we'll bring people back in 15 minutes.

Well, perhaps we can start with that conversation we were just having about evaluation volumes. And sorry, I didn't catch your name; would you like to talk through the discussion there?

Oh, I've dropped myself in it now. Okay. So what most surprised us was the low number, the low volume, of evaluations reported. As I said before, we had people from different evaluation backgrounds and sectors; we had people in the public sector and in not-for-profits, and a couple of different big Victorian government departments, the Department of Health and the Department of Education, both of which have in-house evaluation teams. Yeah.
And so quite a few different resources were suggested that could have been accessed. And I personally was surprised by the use of KPMG; I won't say too much more than that in public, but that's just to get the ball rolling on that subject. Does anybody else want to pick it up?

Yeah, our group also had the volume as an area that was surprising, the low volume, so just to reiterate that point.

I was in the same group as Franza, and I was interested to know why there wasn't anything investigated around the philanthropic and not-for-profit sectors and their commissioning of evaluation, given that, you know, the Paul Ramsay Foundation is just one example among many that invests quite a bit in its evaluation practice.

Yeah, and feel free to jump in, Charlie or Jade, but while we did have some consultations with people outside of government, I think the reason for the focus on things like AusTender to start with was really just a question of scope and the limited resources we had to invest in the study. So that would certainly be something interesting to explore in more detail in future studies.

I'd also just say we did attempt a case study process, which included a case study from the philanthropic sector. That one actually did get drafted, but other case studies ran into issues with publicizing a department's entire approach and data set, and some of the interpretation around that; so we learned that the attempted case study process was maybe not the best one to run in this case. We did do the philanthropic sector, though, and got some reasonable data back from that sector specifically.

And I think what you've just said is a really interesting insight in itself: that government reticence to talk about the evaluations they do, and understanding where that comes from. Because I would certainly think, with the presence of government evaluators in the room here (and please stop me if I'm putting words in your mouth), that there is a desire to see that work published, but there is still a barrier there; having formerly been in government, I faced that myself working internally. How we address that as a field is an important question going forward.

And there are some kind of good reasons not to publish, too. What would publishing stop us saying that actually helps organizations learn? If you've got a report that you know is staying internal, you can say all the things that need to be said to learn from it; if you know it's going to be published, the way that report is framed and what goes into it might be quite different. So I'm not against sharing things, but it has implications too.

In our group, something we found a little surprising: there's a slide showing quantitative findings and qualitative findings on barriers. The quantitative findings mention lack of evaluation culture and also lack of leadership, and I think those two factors interact, because your leadership style will affect evaluation culture. But I'm a little surprised that it got such low mention in the interview findings, so I'm wondering who the interviewees were, because I'd have thought evaluation culture and leadership would be very important. Yeah.
Yeah, that's a good point in terms of who we were speaking to specifically. And, Charlie or Jade, my understanding is we were probably often speaking to evaluation units or potentially more senior people, so that may be a factor that influenced the relative prominence of some of those answers; it's interesting to think about. Maybe some of the organizations with stronger evaluation cultures, too.

Yeah, true. I think that's an excellent point you make, Emily: those cultural aspects can really influence the appetite for evaluation, spending on it, and transparency around it. Yeah. Would any other groups like to jump in and report back on their discussions?

We didn't really follow the questions, but I had Julie and Flo, and there were interesting comments. Like, this being kind of a good baseline, something we can build on; but when we're relying on commentary, there's actually a need to think about the history, the policy history, of all this, and I'm not sure we've fully grounded the study in that: thinking about, you know, the previous Commonwealth evaluation policy requiring evaluation, when there would have been better data. And when we say things like "there's growth in internal evaluators", that seems to ring true, but since when, and is it everywhere? So there are probably some things we could dig down into. We also touched on how this fits with things like social enterprises and social impact measurement, which is some of the conversation we've had in the Relationships Committee as well, that boundary question: some social enterprises could get value from evaluations, but they tend to talk more about social impact measurement, and that's not with us; so how do we build bridges and relationships there as well? And then we started talking about where to next for this study, and how you could engage government agencies in getting a next version of it up: what might be interesting to them, and what would be the value and benefit for them to learn from it.

Yeah. Sorry, go on.

I'm not sure if I zoned out for a minute or whether Charlie mentioned that in our group we talked about the timing of the survey, conducted during COVID, and that there was a lot of surprise, but it was interesting to see the number of rapid evaluations undertaken and that potential shift towards a desire for quick results, a quick evaluation.

Yeah, absolutely. And we actually had some similar discussions, I think, to your group, Jade, around the fact that there are a lot of people doing things that could be considered evaluation who might not think of themselves as evaluators, and that for future studies it could be interesting to delve more deeply into some of those evaluation-adjacent activities. Okay, were there any other comments or insights people wanted to share?

So, thinking about the numbers, the issues around the numbers: if we had to do this again tomorrow, it's a bit tricky, and I'm not sure we'll ever get to the bottom of it.
Remember, the New South Wales government did an exercise off the back of the 2016 Auditor-General report on evaluation and tried to get some numbers, you know, how much was being spent internally, and it was actually very difficult. So the suggestion to rely on the AES members is actually quite good, because they will always be here and they are representative of evaluators, internal and external, from all spaces. So I see the value of this kind of exercise focusing more on the trends: what's happening, capturing the flavour of the moment and how it can be shaped.

Yeah, and there's also potential for future iterations of the study; they don't need to be similarly broad in scope. We could do a study just looking at volumes and go into more detail. Verena, did you have a comment?

Thank you, I was just going to comment on an approach that might be a little more fruitful going forward. There's so much knowledge within individual states and territories, and I'll give you an example. I have a very good view of what's happening in the Department of Education in Victoria and could give you the numbers, and not be that far off. So if you went, in Victoria, to the other central evaluation units, and combined that with knowledge from the Department of Treasury and Finance, you would get pretty close to a representative number. So I think it's about tapping into the corporate knowledge within states and territories, which could be utilized in a much better way. I don't think it is actually that hard.

Yeah, absolutely. Thank you. I'm conscious we've only got a few minutes left and people are starting to drop off, so I'm going to throw to Charlie to talk about next steps and wrap things up.

Cool. Well, thanks, everyone. This is the first session we've run post-release of the report. The report's out there; feel free to download it from the AES website. We just put up a new version picking up a table that had dropped out, so have a look again even if you've already looked once. We may do other sessions like this with other state groups. We haven't fully mapped out the next steps for the study, but the report's out there now, and I appreciate everyone contributing to today's discussion. It was very much positioned as potentially the first of a sequence of studies that the AES may run at intervals over time, so we might revisit what's shifting year on year, or every two or three years perhaps, and undertake a revised version with some different questions and methods. We've also learned a lot from the process of developing this report over the last year and a half, so there's lots to learn from that perspective as well. The whole thing's a learning journey for our sector, for us, for the AES, and hopefully for you too. So thanks for contributing.