So, hello and welcome everyone. My name is Ruth and I'm the co-convener of the Canberra AES Committee. It's really wonderful to have you along today. I'd like to start by acknowledging the traditional owners of the lands on which we are all living, working and evaluating. I'm coming to you from Ngunnawal country, but I know we have presenters joining us today from other parts of Australia, and participants too. So welcome everyone, and in addition to acknowledging the traditional owners, I pay my respects to Elders past, present and emerging, and to any Aboriginal and Torres Strait Islander people joining us today as well. It's great to have you here. I'm not going to spend too long on introductions because our wonderful presenters are going to introduce themselves, so I'm going to hand over to Jade, who's going to become our host so that she can share her slides. Jade, I think you are now our host and able to share; if you have any issues, please let me know. In terms of housekeeping today, Jade will guide us on when she's looking for questions, but I'm assuming it's the usual: pop a question in the chat if you have one and we'll keep an eye on the chat function. So, Jade and team, over to you.

Thank you all so much. I'm just going to start sharing again so you don't get my second screen and presenter mode. Take two. Okay, no, still in present mode. Let's try that one again. Okay, hopefully you can all see that now. So, thank you for joining us this afternoon. It's great to see so much interest in evaluation maturity models. I'm going to throw to each of our team to introduce themselves, and then I'll talk you through the structure of what we're going to cover today. I'm Jade Maloney, one of the partners at ARTD Consultants and its CEO. ARTD held the contract for the case study example we're going to talk about today. A lot of my work has been in evaluation capability building, and I also did my master's research on evaluation use: what's the use? Thankfully, there are many uses, but capability and maturity are important to enabling that use. Scott, can I throw to you?

Hello, Scott Bailey. I'm a solo consultant based in Canberra. I work for myself, currently work for consulting firms, and work for a range of Commonwealth and state government agencies. I've got a real interest in evaluation capacity building, having developed capacity building diagnostics and strategies for the Asian Development Bank as well as the Department of Foreign Affairs and Trade. Thank you.

Thanks, Scott. Duncan? Good afternoon, everybody. Duncan Rintoul here. I run a consulting company called Rooftop Social. Like the other presenters, I have a great love for evaluation capacity building. I spent four years doing that and nothing else at the Department of Education in New South Wales a few years ago, which was a great opportunity, and I worked on the case study project along with everyone else, which you're going to hear about later, so you'll hear from me in the second half.

And Brad, last but not least. Good afternoon, everyone. My name is Brad Asprey. I work at the University of Melbourne.
I'm in the Centre for Health Policy, in a group called the Evaluation and Implementation Science Unit, and like the other presenters today I have a long-standing interest in ECB. I've really appreciated the shift that's been occurring in ECB: moving away from focusing on the knowledge, attitudes, skills and behaviours of individuals, to concentrating on how we can get organisational systems and processes in place, the things that are essential beyond the training side of it. So it's great to see that shift occurring, and I look forward to your questions and interest today. Thank you.

Nice segue, Brad. So we have an agenda for you this afternoon, but we want to leave a lot of time at the end for questions and discussion too. First, Scott is going to give us a summary of the international capacity building and diagnostic literature that can inform an evaluation maturity model, or the way we're thinking about evaluation maturity models. Then I'm going to jump into the link between evaluation maturity and the Department of Finance's evaluation policy and guidance, and some more recent developments, and how an evaluation maturity model might help with that. Then Duncan and Brad between them are going to cover the case study, the maturity model we developed and an example of how it might be used in practice, and then we'll come back together to give you an opportunity to ask us some questions or share your thoughts. We're conscious that others have also developed evaluation maturity models for government too, so we're keen to share learnings; there are many ways to approach this. If you have clarifying questions as we go, feel free to pop them in the chat and I'll try to keep an eye on that, but if we can keep most of the questions about implementation, where the model came from, or the challenges in implementing it to the end, that would be great. So, Scott.

Okay, thank you. So when we're talking about evaluation capacity, we generally mean the ability to undertake evaluations and to use the findings to inform decisions and actions. We actually have about 45 years of international experience now with evaluation capacity building and associated diagnostic models. Not many people are aware that Australia actually started doing its own evaluation capacity development work in Vietnam back in the 1970s. And we've jumped ahead a slide too quick. But I'm just trying to let some people in from the waiting room; it seems I can only do that if I get off this screen, so I'm going to jump out for a second and see if I can let those participants in, unless somebody else already has. Quietly in the background I've reclaimed host and I've been letting people in. Excellent, that is great.

So as I was saying, we've got about 45 years of international experience in evaluation capacity building and associated diagnostic models, and this little diagram hits some of the highlights. For example, authors like Volkov and King have published a checklist. The ACT government has a nice evaluation maturity model. Robert Lahey is one of my favourite authors; he's the former head of Canada's Centre of Excellence for Evaluation. Preskill and Boyle have written a lot of articles, and Boyle was a co-author of one of my favourite books, Building Effective Evaluation Capacity. The World Bank has published a heap of material going back to the 1980s on strategies for evaluation capacity building.
Cousins and Bourgeois have published separately as well as together, and Bourgeois has developed a little diagnostic model that actually gives you a score on evaluation capacity, which is kind of interesting, and she's tried to standardise it. John Mayne writes really nicely about evaluation cultures. The Asian Development Bank has published several studies on evaluation capacity building, one of which I wrote, and others more recently. And just for fun, about a month ago I went into ChatGPT and said: please prepare me a rubric for evaluation capacity building. It came up with a nice range of the standard factors, and I thought it was pretty good, actually. I've put that up on my website, or you could try it yourself just for fun. Thanks, Jade.

If I was to boil all this down into a fairly simple way of thinking about it, this is what I find useful. When we're talking about evaluation capacity building in terms of focus areas for strategies, or areas to do diagnostic assessments, I find this model helpful. We can talk about leadership's demand for evaluative feedback and their ability to make use of it. We can talk about the supply of evaluative feedback: the number of studies and their quality. We can talk about what Lahey likes to call the institutional infrastructure: the policies, the systems, the resourcing, IT, staffing, all that sort of thing. And then there's your organisation's external environment, whether that's stakeholder demand, scrutiny from auditors-general, national policy, all those sorts of things. Evaluation capacity building efforts and diagnostic tools tend to focus on one or more of these areas.

When we look internationally at what's been happening over the last few decades, government capacity building efforts generally tend to emphasise new evaluation policies, staff training and technical guidance, coupled with a community of practice, on the expectation that all this will lead to an increased supply of evaluations. Unfortunately, and not everybody agrees with me, but I think this is reasonably clear from the international experience: focusing on those things, policies, staff training, guidance, the community of practice, without leadership buy-in, you struggle to make progress. I think the experience of the last 45 years is quite clear that leadership's demand for, and ability to make use of, evaluative feedback is the key. I could probably name you 12 countries off the top of my head that focused on supply-side measures and policy tinkering and struggled, versus countries that have made better progress. South Africa, right at the moment, is one where there's very senior-level political and administrative support for the function.

The other point I would make is the need to match the supply of evaluative feedback to leadership's demand. By that I mean, on a hypothetical scale, if leadership is demanding five evaluation reports a year and the evaluation unit in that government agency is giving them 10, you have an oversupply. And what tends to happen in those situations, and I've personally witnessed this in five or six agencies I've worked with, is that if you give an entity more evaluations than it feels it can digest, more than it wants, it feels under stress, it feels attacked. And then it will turn on the evaluation function to reduce it.
I've seen this happen in Foreign Affairs and Trade, I've seen it in the Asian Development Bank, I've seen it in the Disability Services Commission, and I've seen it in community services in Melbourne. It's quite a common and predictable response. If the agency feels under threat from the evaluation unit, they will reduce the evaluation unit's budget and staffing. When one of the senior people moves on, they'll leave the position vacant for an extended period of time. And one of my favourite strategies, and this has happened to me twice now: they take the evaluation unit out of head office and move it a couple of kilometres down the road to another building, to "protect its independence". I quite enjoy that one; I've been in that situation a couple of times.

So I'm making an argument that leadership's demand for, and ability to make use of, evaluative feedback is really crucial. And secondly, you want to match the level of supply of evaluative feedback to the level of demand. If you have the ability to produce a lot more evaluation reports than your agency or entity actually wants, my suggestion would be to reduce the number of reports you do, but use your spare capacity for capacity building work: workshops, forums for senior executives, using your extra human resource capacity in that sort of way. Anyway, I'll stop there and hand back to Jade, and I'll be happy to answer any questions about this a bit later.

Thanks, Scott. Hopefully you can see, as we move through the model, how we used that literature: an evaluation maturity model can't just talk about the quality and supply of evaluation. It has to talk about those demand-side factors and what's important to driving them. I know from my own research into evaluation use that those demand-side factors are pretty important to useful and used evaluations, and the engagement Scott's talking about, in terms of capability building with stakeholders, is really important too.

Okay, so I'm going to shift to thinking about why an evaluation maturity model. We think there's quite good alignment between the Commonwealth evaluation policy and what an evaluation maturity model focuses on. The Commonwealth evaluation policy aims to embed a culture of evaluation and learning, learning from experience, to underpin evidence-based policy, and it also aims to support entities to improve evaluation practices and capability. These are the kinds of things an evaluation maturity model can help with: generating a culture of evaluation and improving practices. Now, the Commonwealth evaluation policy doesn't say "develop an evaluation maturity model", but it gives some pointers to governance actions and other things that can support an evaluative culture, and hopefully you can see some of these things seeded in the maturity model we'll talk about. There's a focus on planning fit-for-purpose activities, using strategic approaches to identify, prioritise and schedule activities: what actually is getting evaluated, and why. And, along with what Scott was saying, you're not going to just evaluate everything, particularly if there's not the demand for it.
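As a rough illustration of the matching idea Scott described, here is a toy sketch. It is not from the presentation; the function name and all numbers are hypothetical, and the point is the logic, not the values.

```python
# Toy sketch of a supply-demand matching heuristic for evaluation effort.
# All names and numbers are hypothetical illustrations.

def plan_evaluation_effort(demand: int, capacity: int) -> dict:
    """Match the number of evaluations produced to leadership's demand.

    demand   -- reports leadership actually wants per year
    capacity -- reports the unit could produce per year
    """
    reports = min(demand, capacity)   # never oversupply
    spare = capacity - reports        # leftover capacity, if any
    return {
        "reports_to_produce": reports,
        # Redirect spare capacity into capacity building work
        # (workshops, executive forums) rather than unwanted reports.
        "capacity_building_effort": spare,
    }

print(plan_evaluation_effort(demand=5, capacity=10))
# -> {'reports_to_produce': 5, 'capacity_building_effort': 5}
```

The design choice mirrors Scott's suggestion: any capacity beyond what leadership will digest is better converted into capacity building effort than delivered as extra reports.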
So what are you evaluating, and why is there demand for it? There's also aligning evaluation activities with external requirements, and assigning responsibilities for the different elements of the performance monitoring and evaluation system: for example, who's responsible for following up on recommendations and implementing actions after an evaluation. Some of these might be aspects you cover in an evaluation maturity model.

Behind the policy, for those of you who are less familiar, the APS review identified concerns about the quality, extent and use of evaluation within the Australian Public Service: the quality of outcome evaluation in particular, and the need to be able to draw on better data; and the usefulness of evaluation in terms of timing and the questions it focuses on and tries to answer. Again, that speaks to what Scott was saying about the need to align evaluation with the demand for evaluation and the support of senior staff and ministers. The review also identified big challenges in establishing a culture of evaluation and in structuring and resourcing evaluation. For those of you familiar with the ebbs and flows of how evaluation has been structured at the Commonwealth or state and territory government level, you can see those changes over time: from policies that required evaluation, to a stepping back, to integrating evaluation through things like the PGPA Act. You can see those cycles of things happening.

The other big development most of you will be aware of, if you've been reading the papers lately, is the opportunity provided by APS reform and the establishment of an Evaluator-General. APS reform really centres the role of evaluation in better policy and outcomes. The Office of the Evaluator-General is being established, and things are still emerging about what it will look like and focus on, but it's really hopeful, I think, that a role for that office has been identified in embedding that culture of evaluation and capability across the public service. And we think an evaluation maturity model, or models, might enable the APS to track progress with this. So I'll hand over to Duncan now.

Thanks, Jade. So the four of us presenting today all worked on a project together for a line agency. It's an anonymous line agency for the purposes of this presentation, and that's probably helpful because it keeps us thinking about process rather than any particular area of policy. Our brief was in two parts. One was to consult with staff at various levels. This is a pretty big agency, with different functions: some people are in policy roles or groups and some are in direct service delivery groups. Brad used a lovely phrase the other day: some people are steering and others are rowing. We consulted vertically and horizontally with a range of different people in the organisation, asking: what is it like now? What would you like it to be like in the future? What are the currents already pulling you in the right direction, the enablers already in place, already in play? And what are the barriers and blockers to moving in the direction you'd like to go? All thinking about evaluative practice within that department.
So we were trying to get some priorities for the most important directions to head in, and then discussing some possible strategies for how to get there. The next stage of the work was to play that back to the senior executive to gain consensus on what the priorities might be for the department; then, on the basis of that, to flesh out the promising strategies for getting from A to B over a five-year period (evaluation capacity building strategies); and then, once those settled, to develop an implementation plan for them. So you can see an iterative process of collaborative development of a strategy like this.

If we have a look at the next slide, I can share some of where this landed, focusing not so much on what the strategies are within that department, but on the end point and the priorities. The senior exec reviewed and edited this with us: a vision that evaluation is a critical part of the policy development cycle, a statement of that, and a goal. Our goal at this department is to evaluate the right things, evaluate them well, and use the insights effectively to drive the performance of our policies and programs for the benefit of the people we serve. That's the vision; that's what we want to be true here. And in order to get there, we're making some commitments. The important bit of context is this statement: at present, our capability and our maturity to undertake and to use evaluation varies from one group to the next. And so, to strengthen our evaluative practice, we're making a set of commitments. It just happened that there were 10 of them; not commandments, but commitments. And here they are.

First, we're committing to bolster demand across the department for good quality evaluation evidence that's useful in decision making. Remember Scott's earlier section about the demand side and the supply side: the first commitment is to bolster demand. The second is to prioritise evaluation efforts strategically, focusing on the known gaps in the evidence base and the scale and risk of our investments. The issue here is that some areas of our work are very well evidenced, very well supported by research, and have been evaluated a lot, while in others we're in more frontier territory; so, prioritising our evaluation efforts strategically. Third, budgeting appropriately: we're committing to budget appropriately for evaluation, including it as part of new funding proposals. Fourth, we're committing to integrating evaluation planning into program and policy design. You can see where commitments like this come from: in the consultation, people described that in our current state one of the barriers is around planning, or budgeting, or demand. That's why these commitments are the way they are; they're context-specific, and they came from the consultation.

And you can see the others: committing to using robust evaluation designs and approaches that are well suited to the questions we're asking; and committing to leveraging existing data for evaluative purposes to maximise their value. Not all groups within this department had access to the same kinds of data. Some had access to unit record files about everything going on in the policy area of interest to them; others were more at a distance, receiving reports from others. So there were different opportunities for doing that as well.
Then, using evaluation evidence for continuous improvement through the program and policy cycle. You can decode that and say: what they probably heard was that evaluation evidence was being used in a summative, decision-making way, and there's a commitment here to using it throughout the life of a program. Then, to retain and share the lessons: we're committing to retaining and sharing the lessons from across the department and beyond, building our knowledge base about what works and why. So there are a lot of commitments being made here. Nine and 10 are slightly different. We're committing to strengthen staff capability and business processes, which goes to Brad's point earlier that it's not just about individual capability but also the business processes around it. And we're committing to growing our evaluation maturity across the whole department. So it's not that some people really care about it, really invest in it and really grow, and others don't. We're saying: we want everyone to care about this, and we want everyone to grow.

But if you go back to the top, above the commitments, you'll see that statement that at the moment the starting places are different across the organisation. So you've got common language and common goals across a big organisation, but different starting places within it. And that is where the idea of a maturity model, or a maturity matrix, came from. How do you deal with wanting one song sheet, one strategy, across a big department, while recognising all the different starting places for different groups within it? And so I will pass over to Brad.

Thanks, Duncan. So in this context, we conceptualised the maturity model as a really practical tool and framework to help people: individuals, teams and groups within the organisation. I could use the word diagnose, but I'll say assess, or self-assess. It is a diagnostic tool, to use Scott's reference point from before, to figure out where people are currently at, and to provide really clear, practical guidance on how to get from there to some future, more desirable state. And so we turned to the idea of a maturity model. I'm not going to talk about the literature on maturity models; suffice to say they're not an evaluation-specific thing. Maturity models are used across a range of sectors. Surprisingly, and maybe someone can correct me in the chat, we did not find an article in any of the key evaluation journals that talked about evaluation maturity models. So they're not something the evaluation community has really embraced to a full extent yet. However, the thing we do do well as evaluators is develop rubrics, and a rubric is at the heart of an evaluation maturity model. What it does is provide clarity. We co-designed this with the organisation, and it provides clarity around what good is, what great is, and what not-so-good looks like, and it articulates that across levels of performance. We chose four levels. We did debate and go back and forth about this, so we're not saying this is right, or will be right for everyone, and there are other levels we could have chosen, but these four seemed the best in this particular context.
So the common structure across the 10 commitments was the standard four levels, with descriptors for what limited, developing, delivering and excelling evaluation practice looks like. To give you a little bit of an idea of the structure at a higher level (you've probably already had a chance to read the text under each of the four levels): on the far left, I think it's your left, is limited. Basically, things are not going well for this organisation, or for that group within the organisation. Evaluation practices are underdeveloped, evaluation tends to be an afterthought, and when it's undertaken it delivers little benefit. The evaluation function, the evaluation supply, the evaluation demand, the evaluation infrastructure: it's not going well. On the far right we have excelling, but I should probably go to delivering first. At delivering, evaluation is in place: the practice is established and consistent, it's done well and it's used. So we used this common architecture for all 10 commitments.

What we're going to do now is talk about just two of the commitments. I'm going to talk about one, commitment eight, and then I'm going to pass back to Duncan to talk through what applying that rubric looks like in practice, in terms of a continuous improvement cycle with the organisation, and then we're going to reflect on that in relation to commitment three. So next slide, please, Jade.

For commitment eight: one of the things we see often is that the knowledge generated from a portfolio of evidence work is not mortared together, or not drawn upon in any kind of systematic way. You might have individual evaluations that are used to inform and improve a particular program, but over time, if the evaluation function is going well, you might end up with quite a large number of evaluation reports in an organisation. And invariably, almost without exception in my work with organisations, it's difficult to locate that body of work in any systematic way, to access and retrieve those reports. There's no knitting together of the information, knowledge or insights across the reports; there are very few systematic reviews of those evaluations, and often very little meta-evaluation. So what we're suggesting with this particular commitment is that, as well as planning and budgeting for evaluation, doing good evaluation and using individual evaluations, you also draw on the insights and share the lessons across the body of evaluative work, to build and accumulate knowledge about what's working, what's not, when, where and why.

So, on the far left is a situation where no one can find evaluation reports. There's no knowledge repository, no knowledge management system, no sharing of evaluation results. It's the dusty shelf phenomenon. On the delivering side, there's a knowledge management strategy in place, as part of, for example, an evaluation policy. There are details on where evaluations are stored, information about how knowledge from evaluations is translated, and communication around evaluation. There may even be, as part of a commitment to transparency, a public-facing website with existing evaluations on it. And everyone knows where to go to find information.
So the organisational memory around evaluation is really solid and strong. Now, if you're lucky enough to get to the excelling stage, not only are you doing all of those things, but you're also seen as a thought leader in this space, and you're probably extending a little further: actually mortaring together the knowledge from a whole body of evaluations using something like meta-analysis or realist synthesis or some other systematic review process. So that's an example for commitment eight, with the rubric describing what really good practice looks like for this commitment, and what not-so-good practice looks like. I'll pass to you again, Duncan, to go through how you'd use this as part of a continuous improvement cycle.

Thanks, Brad. Now, students of the 10 commitments may have spotted something in what Brad was talking about there, which is that you need progress on commitment eight in order to get progress on one of the earlier ones. Remember how I was talking about the commitment to prioritising strategically the kinds of evaluations we do, including where we have gaps in our evidence base? Well, how do you know where your evidence base is stronger? Because on commitment eight you're on the right-hand side: you've got good knowledge management of your evidence base. So each of these kind of stands alone (if we want to really advance our practice on this commitment, this is what progress might look like), but all 10 commitments weave in and out of each other. They're part of that ecosystem, or that economy, that Scott was talking about earlier, in terms of how evaluative practice is done and valued and resourced within an organisation.

So how would you use a rubric like that? Well, this is what we proposed, and it's certainly up for discussion, but the big idea is a cycle of continuous improvement. So, Jade, the first step is to take a baseline: the idea of taking a before photo. If I was about to join the Jenny what's-her-name 12-week body transformation program, what I would probably do is have a weigh-in and all those kinds of things. If you're going to embark on some evaluation capacity building, take a before photo so you can see distance travelled. The suggestion here is to use the maturity model as a reflection tool: for each commitment, consider your practice in light of the descriptors in the table, and make an on-balance judgment about which category best describes your current position. If in doubt, ask people. And of course you might find you're straddling two categories, or what have you; it doesn't have to be a really precise measurement. But where are we now? That's the first bit. The next bit is to prioritise: which two or three of these 10 things would we most like to see improvement in, and why those ones? Then, within those areas of priority, think about the goal. If this is where we are, where would we like to be? What level would we like to be operating at? And we can be specific here: what are the particular gaps in practice we're trying to close? What would success look like for our group, in our context, over not the next five years but the next one or two? We're trying to bring it in closer. And so then we make a plan.
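To make the rubric and that baseline step concrete, here is a minimal sketch in code. It is not an artefact of the project; the level labels come from the talk, while the team name, date, threshold and function names are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import date
from collections import Counter

# The four maturity levels from the case study rubric, in order.
LEVELS = ["limited", "developing", "delivering", "excelling"]

@dataclass
class Assessment:
    """One team's on-balance judgment against one commitment."""
    team: str          # illustrative name
    commitment: int    # 1..10
    level: str         # one of LEVELS
    when: date
    notes: str = ""    # the 'why' behind the judgment

    def __post_init__(self):
        if self.level not in LEVELS:
            raise ValueError(f"unknown level: {self.level}")

def levels_to_goal(current: str, goal: str) -> int:
    """How many levels a team still has to climb to reach its goal."""
    return LEVELS.index(goal) - LEVELS.index(current)

def needs_central_response(assessments, threshold=3):
    """Commitments where several teams sit at 'limited': candidates
    for an organisation-wide response rather than team-by-team fixes."""
    low = Counter(a.commitment for a in assessments if a.level == "limited")
    return [c for c, n in low.most_common() if n >= threshold]

# The 'before photo' for the fictitious team in the walkthrough below:
baseline = Assessment("Policy Team A", 3, "limited", date(2024, 3, 1),
                      "Evaluation funded from leftovers; costing unclear")
print(levels_to_goal(baseline.level, "delivering"))  # -> 2 levels to climb
```

The rollup at the end anticipates the organisation-wide view discussed later in the session: spotting commitments that many teams mark themselves down on, which may call for a centralised response.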
If we say this is where we are and this is where we want to be, what's our plan? We identify some improvement strategies we can put in place to strengthen our practice, and we set some milestones as markers along the way to help us know if we're on the right track. We give ourselves a timeframe and a bit of accountability as well: not no time, but also not endless time. Dizzy Gillespie, the jazz player, would say: don't give me time, give me a deadline, for creating new things. There's something important about the creativity of constraint. We've only got nine months to do this, so what would we really do? Let's get on with it. Then, as we implement our plan, we keep ourselves accountable for the actions we take, but also for the results they deliver. And if we divert from our plan (maybe we choose to divert for good reasons, or maybe we get diverted by something else), we document the reasons for it, so we don't lose the thread within this improvement cycle.

So then what happens? Well, we reassess. We take stock. Once our improvement strategies have had enough time to take effect, we reassess: how is our practice going against the descriptors, same maturity model, same priorities? Has it matured? If yes, let's celebrate that. Whether it has matured or not, what did we learn from the journey, and do we have a story worth sharing with others? Are there other people or teams in the organisation that have chosen the same area of priority as us, and could we have a bit of a corridor conversation, or something more formal, where we share what we've learned in trying to do this? Once we've reassessed and taken stock, we just go again. This is a cycle; that's the whole idea. It's continuous improvement, process improvement: not a quick fix, not a one-off push. Reassessing gives us a new baseline from which we can update our priorities and goals, refresh our plans and keep pressing forward.

So let me give you an example: a fictitious walk through a couple of years for a group within this department. Thanks, Jade, if you can go to the next slide. Let's say you and I work in a group that has said commitment number three is really important to us. It's really important because, when we did a bit of soul searching, we put ourselves in the limited column, and it doesn't feel good. What does it feel like? It feels like getting funding for an evaluation requires a special effort, and the funding we do have is usually only available as leftover resources from something else, rather than being specifically for this purpose: a financial year where we had some understaffing, difficulty recruiting, all those kinds of things. People in our group don't really know how to budget properly for evaluation work: what it's likely to cost, and what kinds of approaches might give us good value for money, so we could be under- or over-budgeting. That's us right now, and we say we would love to be at delivering. We'd love to be in the situation where most of our program budgets have an item for evaluation.
That feels like a realistic thing for us: certainly all the new policy proposals we're submitting always have a budget for evaluation, though we recognise some of our legacy programs won't, and that's okay. And the amount we earmark for evaluation is usually in line with what it's going to require, so we're pretty good at estimating requirements there. That's where we'd like to be.

Quick process question, Jade: in the chat, someone would like to know if they can have access to the slides. Yes, they can, after the session. I'm not sure who or how it's being distributed, but we'll work that out offline. Thank you very much.

So we've prioritised this one, we've decided we'd love to be at delivering, and then we do something about it. Let's say we give ourselves six to nine months. There might be some things happening across the whole organisation. For example, the idea of new policy proposals always containing a budget for evaluation might come from the senior executive requiring it, saying we won't consider a new policy proposal unless it has a budget for evaluation. And we might also get better at costing, because some resources have been developed on how to budget for an evaluation. So those might be cross-organisation moves, but there might also be things we do within our business unit. We might, for example, look at the last four or five evaluation efforts we've pursued: the ones that felt adequately resourced, the ones run on the smell of an oily rag, and the ones that felt over-resourced, and look at what they took, what the resourcing was. So we're getting smarter not just by following a guideline, but by doing a little bit of meta-evaluation: we're evaluating the resourcing of our prior evaluations. So there are whole-of-organisation things that we tap into or fall in line with, and there are particular things we do within our business group because we choose to.

We get to a point, six or nine months later, where we say: look, where are we up to on this one? Let's have a look at our current practices. And what we discover is that we're not at delivering yet, but we're on the way. In fact, we're comfortable that we've kicked out of the limited space and we're in developing, all right? So we've got some wins we can celebrate; we've got some things in place that are starting to bear fruit. But when it comes to reassessing and re-prioritising, we say: we're not there yet, it still matters to us, and we still want to be where we said we wanted to be nine months ago. So let's go again. We embed all of that, we get our budgeting better, and over time, another six or nine months, we look at our practice and say: we've arrived. We're not at excelling, we're at delivering, and that's where we said we wanted to be. That's really awesome. Then, when it comes to reassessing, we might say we're comfortable with where we're at compared to some of the other areas. And maybe it's the one Brad was talking about before. We're now resourcing our evaluations about right, particularly the new stuff; that's all coming through.
The big issue now is retention of the insights and the lessons from our evaluations. We're still at limited there, so now we turn to that other priority. It's not that we no longer care about budgeting; we say, let's maintain that for now and build something else up, because you can't fight a war on 10 fronts. You focus, and let the others evolve. So that's the idea of how you might use a maturity matrix like this, and embody it and live it within the organisation.

You can also see that if somebody is thinking about evaluation maturity across the whole organisation, they'd be able to identify things like: which of these commitments are people looking to us for a central, centralised response on? Which commitments are people really marking themselves down on? They're all organisational priorities, but which are the ones we really need to rally around this year, or next year? And what advice can we give groups about things they can do themselves, rather than relying on a centralised response? So it allows overall governance and support of evaluation maturity for the different groups within a big department, while also allowing the groups to essentially choose their own adventure within it: to do things they're passionate about, that they know will make a difference to the quality of the work they do for the benefit of the people they serve. Which is why it all comes back to that vision. We want to evaluate the right things, we want to evaluate them well, and we want to use the insights for the benefit of the people we serve. That vision is the thing that drives all of the decision making about prioritisation.

Thanks, Duncan. So just to clarify, the thinking is you can use this at a team or business unit level, or whatever organisational unit makes the most sense for the organisation. You expect each team or business unit to be doing their own thing, but it will also be most effective if there's someone centrally within the organisation taking a leadership role to say: to really get change on budgeting, we're going to need to do something at an organisational level for that one to work.

Excellent. Okay, I'm going to say we can go to questions now, so I'll stop sharing the screen, and maybe some people might want to turn their cameras on so we can see who you are; but you're allowed to ask a question even if you choose not to. Now, there were a couple of comments from Julie. I want to reinterpret the first one, which was: if most programs have an evaluation, is that creating an oversupply problem? And maybe direct that question to Scott: if every team is choosing their own adventure and saying, here's what we're prioritising, how do you not get the supply and demand factors out of sync when you're using an evaluation maturity model to decide what you focus on and to make change?

Oh, thanks, Jade. Hi, Julie. Well, it's one thing to build capacity; it's not quite the same thing as how much evaluation you choose to do. With this particular entity, we were inviting their senior executives to determine their needs for the volume of evaluative feedback.
So we weren't, as external parties, going to tell them how many evaluations they should do a year; we were trying to help them build their internal capacities and then make choices about what they need and how much of it. In that sense, the executives weren't going to ask for more than they wanted. They weren't going to threaten themselves with an oversupply. I don't see that as a problem at all.

There's another question from Julie. Julie, I might ask you to unmute and describe to us a little more about these ideas underpinning whole-of-organisation governance.

Thanks, Scott. Thanks, everybody; really good presentation. So about 20 years ago, I did the first evaluative capacity building project in the Australian government, and we actually had a situation where there was a really good governance structure in place. The head of the department, the secretary, had a secretary's lunch and invited people to come along and pitch good ideas to him. So I pitched that we have an ECB program, because we hadn't had one, and we ended up getting some money and I ran it. It was based at the individual level: we trained 38 people, and some of them went on to become evaluators who we all know and love to this day. But the really important thing is that it didn't happen in isolation. It happened because there was a senior executive committee responsible for research and evaluation, and they decided how much money was spent on which research and evaluation projects. There was a whole central branch dedicated to research and evaluation, and all sorts of resources, and it already had a strong culture. Over the last 20 years I've seen that wax and wane, and it really depends on who's in government and who gets appointed to the secretary's position and other senior positions. So my second question is about governance: you can't just take as given the ideas that underpin public administration now, as if that's always what will be in place. These things change over time, don't they? So can you talk about different governance arrangements and governance systems, thinking about public admin, and how what we're talking about here fits into that context?

Scott, when you answer this question, there's another question that's come through from Eleanor in the same ballpark; have you seen it in the chat? We've all talked about the importance of leadership and of a strong authorising environment for evaluation. So: how can we build leadership understanding and support for evaluation? It might be good to tackle that at the same time as Julie's question.

Thanks, Julie. A great question, and I love it because it goes to the heart of what I think is the key issue, which I will interpret as exploring the topic of how one institutionalises the evaluation function. And I'll answer in a couple of different ways.

That's not quite it, Scott, because what I'm thinking about is that there are bigger things at play, and we can't assume that what might be useful in a conducive environment will work under, say, the previous government, where evaluation capacity took a nosedive.

Yes, and that's going to be the first part of my answer: the government of the day has the prerogative to set the policies it wants.
And if it chooses to downgrade the evaluation function, as John Howard did when he came to government in 1996, that is its prerogative; it has the authority and the ability to do that. The counterbalancing force to that, however, is a community of stakeholders who see value in the evaluation function because it meets their needs. The other part is what I'll call institutionalising the function. Institutionalising, to my way of thinking, means locking evaluation into the routines, policies, legislation and decision making of government. What does it mean to institutionalise evaluation? Well, as well as having policies, you want links to budget processes, linkages into reporting practices, the concept of all new policy proposals having an evaluation budget. These are the things that help to lock in evaluation. It doesn't mean a future government can't undo them, but it does make it a bit harder. And I think Amber Perron from France talks about this a lot.

I also think, and I include myself in this when I say it, and it's a little bit painful, that we haven't always been very good at using evaluation to meet the needs of program managers. We often use evaluation to do things we want to do as evaluators, which isn't always the same as what will meet their needs. Very few program managers are going to want a summative impact evaluation that tells them where they've gone wrong. They want more formative, consulting-type help, advice and guidance, which they'll find more valuable. And unfortunately, the political environment makes this a little bit difficult too, with freedom of information; there's an interesting article in The Mandarin newsletter today about this. On the one hand, making everything public is actually a disincentive for program managers to do evaluations. On the other hand, keeping them private isn't great from a democratic governance point of view.

I was talking to a deputy secretary in a department I won't name; this person was quite antagonistic towards evaluation. I made the assumption that they didn't understand evaluation, that they didn't appreciate its potential value or how it was done, that sort of thing. I was actually wrong. When I interviewed this person, they had a really good knowledge of evaluation, and they did not want it. It wasn't a choice made out of ignorance; it was an active choice. Another department, which I also won't name, I used to think was totally incapable of implementing a reform program that would support evaluation. Then the same department decided they had a problem with an insufficient number of women in leadership positions. What did they do? They did a diagnostic study. They reviewed the literature. They identified champions. They put incentives in place. They developed strategies. They tracked their progress. They offered rewards where there was progress. They did all of that for the women in leadership program, and they didn't want to do any of it for their evaluation function. I know because I interviewed two of the deputy secretaries and asked them. One said: Scott, you're kidding yourself, I don't want anything to do with evaluation, it's a waste of time. And the other said: I'm not interested, because it will give negative feedback and that would just cause embarrassment. So there are some really major issues at play here.
And maybe I'm using the word governance in a different way than you were, Julie, but there's that big-picture government context, and it is the government of the day's prerogative to set its policies. But I also think having stakeholders who receive benefits and value from the function helps, along with what I call institutionalising: linking evaluation into budgeting, reporting and other practices. And I think in Australia, at least, we have a huge gap in the sense that our media is really silent in this area. It's an area I personally think the evaluation society should be more active in: helping journalists understand the role of evaluation and how to ask good questions when the minister makes the latest announcement, rather than running fault-finding stories. So those are a few of my thoughts, but my colleagues might have some additional ones.

Maybe there's a bit of a shift coming with an Evaluator-General as well that might enable the media to focus on these things more. Duncan or Brad, do either of you have anything to contribute to that question about what you do to build leadership understanding and support? And maybe you could also tackle: do you need leadership understanding and support already, before you try to implement something like this evaluation maturity model? Do you need a senior executive sponsor, an executive that's already across evaluation, for this to be a thing? Or could it work from the ground up?

I do have things to say, but I want to know what Brad thinks first.

I think you need to work top-down and bottom-up. It's great if you've got executive support, but it might not be across the whole executive. And, a conversation for another day, I've noticed a dramatic decline in the willingness of executives to give frank and fearless advice at senior levels of government. There are lots of reasons for that, but I think it undermines things. There are people at most levels within organisations who do value evaluation and want to see it happening; they want to know that they're making a difference. But as you go up and up and up, the level of comfort can get a little bit less, particularly within the cohesive group of senior executives. I've taken to heart one of Scott's practical suggestions, which is to write this into the performance development plans of leadership. I've never seen it done, but maybe Scott has an example of where it has been; I'd love to see it somehow written into senior leadership's performance development, and that might be a practical way to build understanding and support for evaluation. Has it been done, Scott?

Yes, I've seen it in Western Australia, but that was 15 years ago.

Which might bring us back to those ebbs and flows in evaluation practice. Duncan, what were you going to add?

My experience comes from working in an environment where we developed an appetite. Artists talk about audience development: if you think it's really important to be doing new opera, maybe your opera audience isn't quite ready for that, so you do audience development work over time to build that audience. And now we're up there in higher-order thinking with the creative types. So what do we do about audience development for evaluation? Helping build the appetite for it, whetting people's appetite.
And my lesson was: don't talk about evaluation and how cool it is. Talk about the things that are important to the organisation's leaders. Are they worried about risk? Listen to that, because evaluation is a kick-ass way of managing risk. Are they worried about efficiency? Listen to that, because evaluation is a really great way of giving you the intel you need to make good decisions that lead to efficiency. Are they worried about customer service? Great, listen to that: evaluation is a great way to know whether you are in fact meeting your customers' needs. You see where I'm going, right? Evaluation is good for what ails you. But don't pitch it as "evaluation is really cool". Say: I really care about the things we all care about, and I think I've got something that might help.

In the context where I was working, the things people talked about a lot were innovation, and the importance of being able to know your impact. Cool: evaluative practice, ongoing, healthy, reflective practice, is a great way to create the conditions that make innovation both safe and possible; psychologically safe, but also possible, because every self-respecting innovation cycle has evaluation loops within it. So that's the on-ramp into the things the senior leadership have already pinned their colours to the mast on and said, I really care about this. Cool; well, guess what I've got for you. And the other one was talking about evaluation as a way for us to make and demonstrate our impact, and about reflective practice as a way of using evidence to make good decisions. And that was enough. That was enough to get senior people saying: yeah, can you come talk to my colleagues about this? Can you do more? It just happened to be the right place at the right time. But it wasn't by coming in and saying, hey, how cool is evaluation? It was by listening, and then reflecting back, with all those baited hooks in there, to have an offer: I think I've got something that might help.

Thanks, Duncan. There's another question following on from all of this about leadership. Scott, I might just throw it out there and you can build on it as well: do we know anything about, or have suggestions for, incorporating evaluation training into senior management courses, such as with the APS Academy? Because that might be part of building leadership demand and understanding.

Yeah, thanks, Jade. Just bearing in mind that we're about to get a new APS Commissioner, so it's a good time for Scott and the rest to answer this question.

I love this question; this is so good. It disturbs me greatly that many countries in South Asia have parliamentary forums where MPs get together and discuss evaluation, and we have nothing equivalent to that in Australia. In fact, our Australian MPs were invited to join the South Asian parliamentary evaluation forum, and we declined. I thought that was a real lost opportunity. John Mayne likes to talk about this a lot at an organisational level: having discussion groups where senior executives can come together and chew over the challenges and issues related to evidence-based decision making, what's working, what are the constraints, what would help them.
Rather than formal training in a technical sense, those sorts of peer-to-peer discussions would seem to me to have a lot to offer. Something that always surprised me, in a pleasant way: when I was in AusAID, it was my job to be the advocate for evidence-based decision making and evaluation. I could go to the secretary and his deputies on a number of occasions and say, this is my job, could I ask you to do something? I would literally ask them to do things. Will you come to my conference, and give me money to hold the conference? Will you speak; if I write a speech for you, will you give it? If I knock up an email once a fortnight for you to send to all 3,000 staff members on the merits of evaluation practice, will you send it? It was partly because I wanted those things to happen, but more importantly, it was a way of testing their commitment. The senior executives at the secretary and deputy secretary level would tell me quite quickly what they were and weren't willing to do. So I found out relatively quickly who I could work with and who was on a different wavelength in terms of this agenda, and I would just park them and focus on working with, I won't call them the true believers, but the people who were more amenable. And related to this focus on training operational staff through the APS Academy: that's a good thing, but it's getting a little bit onto the supply side by itself. It's not that it's a bad thing, but by itself it's not going to be enough without senior leadership buy-in.

And I think Julie was suggesting maybe training in understanding or appreciating evaluation might go into that senior manager level?

Yeah, it was more the fact that the APS review recommended that evaluation become a profession. And it wasn't made one; it was created as an integral craft, and nobody knew what that meant. But by the time that decision happened, it was given to the APS Academy to train people, to train leaders. So it was just added onto their leadership training around policy, strategy and evaluation. So it's really quite senior bureaucrats getting it at that level, rather than the operational, on-the-ground level.

I'm not sure what other people think, but off the top of my head I would have thought evaluative thinking was a core skill of all senior staff in the public service; though they might call it something else.

Oh yeah, 100%, it's in there in every capability framework, but it's not called evaluative thinking. It's called critical thinking, problem solving, all that kind of stuff. That's another thing I did when I was in this particular government department, which was the New South Wales Department of Education, no secrets about where I was for those years: I looked at the standard position descriptions. There's a thing like the principal standard. It sets out the expectations of school principals, and there are 2,238 of them. And look over there: there's one called "leading improvement, innovation and change". If that's not a big hook sticking out of the wall that you can hang something on, I don't know what is. Because then the message is: we're not trying to talk about something additional to your responsibilities.
What we try to do is to support authentic practice, so that descriptor, that line item, really has flesh on it. What does that look like in practice? And to speak into something that they've already got to do, and in fact, in all their job applications, have already said they can do. And so, yeah.

Tying together what Scott was saying at the start about some of that institutional infrastructure with your example, Duncan, of how you start to embed these things: you frame it differently, you embed it in what's already there. I'm conscious we might want to get to a few more questions. Duncan, there's one clarifying question there about what you meant by impact; I think you were saying it in the context of getting people on board.

Oh, yeah. Knowing your impact. Yeah, I saw that from Lara. Look, for me, impact always needs another short word after it, in the same way as data or information. If you call a piece of information evidence, it has to be evidence of something. It's evidence of reach, or evidence of, you know, efficiency, or whatever. It can't be evidence unless it's evidence of something, right? And the same thing is true about impact: it has to be impact on something. And so, this is going to sound heaps boring, but an outcome hierarchy that sits off a logic model tells you the domain. So in education, we have evidence of impact on teaching practices, and evidence of impact from those changed teaching practices on student learning or engagement or wellbeing. So is the evidence of impact on teaching practices not evidence of impact? Of course it is. It's just evidence of impact that's kind of higher up the food chain in the outcome hierarchy. Sometimes I'll see people talk about, well, there's outcomes and then there's outcomes, and we reserve the term impact for this kind of accumulated big picture, only the later stuff. But I find it's more empowering to bring the language of impact closer to the people doing the work and say, you make an impact. What you do makes an impact on this, that makes an impact on this, and that makes an impact on that. It's all impact, right? Because it's the difference that is made at different scales. Because if the message people get is that those impact measures are so far away from me that I can kind of check out on them, then I feel like we've done people a disservice. So I prefer to bring the impact language in, rather than keep it really rarefied. Plus, if you talk to people who work in road safety, they'll talk about impact as the very first thing that happens, and all the outcomes come after that. So you've always got to define what you mean when you use the term impact. But it always has to have that "impact on what".

And in the context of what you were talking about, Duncan, I think you were saying: show your impact. What has evaluation helped you achieve? That might help with senior leader buy-in if we want more uptake of using this maturity model to build capability. Okay, well, what did it enable us to do? So some of that is documenting your story: what it helped us prioritise, then what we did and what's changed. Yeah, that might help.
Yeah, I also saw a question in there, Jade, from someone asking about examples of evaluation literacy being stitched into public sector learning. I just want to give a quick shout-out here to the ACT public service; I think I saw Raul Kramer on the call. The ACT public service is in its third year of running something called the Evidence and Evaluation Academy, which is workshops plus a workplace learning project. People come together, they learn, and then they do, with coaching along the way, and at the end they present to each other what they've learned and some of how it's changed their thinking. There are nominees from each directorate in the ACT public service, so now there are three years' worth of alumni in each of those directorates, and each directorate has a senior person who has identified themselves, or been nominated, as the evaluation champion. It's hard work to do, and sometimes it's two steps forward, one step back; you get a key person who's really enthusiastic for a while, then they leave, all that kind of stuff. But it's doable, and it's being done. You wouldn't necessarily know about it unless you happened to work in the ACT public service, but that is an example.

If anyone has any last questions, if you can throw them into the chat, that would be great. But just to start to wrap it up for people, would one of you like to tackle the question of what people should do with this model? You talked about how we developed 10 commitments that were particular to this agency, but if you have a look at those commitments, they're pretty broad, and they reflect the things in the literature, on the supply and demand and institutional infrastructure sides, that are needed to get to good quality evaluation that is useful and used. So should people use this as a jumping-off point? Would they need to adapt it? Do we need benchmarking against standards that people should be comparing themselves to? Is there a set place people should be at on this model? Anyone want to take those? Multiple questions in one; I spoke too much. Brad?

I think the evaluation community is at, well, close to the start of its journey around how we use maturity models. In some sense it feels like what we did is innovative, and in some sense, when I read over the 10 commitments again in preparation for today, a lot of it is common sense. But just in hearing what people have been saying, I think you've got to start somewhere. And we have thought a little bit about the fact that this is one case example; what's its scope of applicability to other contexts? They're all open questions at the moment. I think the fact that it was owned by the organisation helped, so I wouldn't necessarily say you can do it off the shelf. There's a process you have to go through around the consultation and the identification of the priorities and those kinds of things. But the basic skeleton of what a maturity matrix is, and the descriptors, we've given an example that might stimulate people to build from, but adapt locally, because the journey of getting towards those commitments, the process itself, is part of the key as well.

Yeah, I think there's a really good principle here: if you want ownership, you have to cede some level of authorship.
And so I would suggest, if someone else was picking this up, back it out and talk about it as "this is what they did", and then ask the question: what of this feels relevant for us? What of this feels right for where we are in our journey? And then adapt, customise, use your own language, all that kind of stuff. So white-label it and then whack your own label on it. It's much easier, because then you can cut the ribbon on it. And there are plenty of examples in the Australian public service and state agencies of people standing on the shoulders of giants and saying, look, here's our new thing; by the way, we've borrowed from DFAT's evaluation standards document, or, by the way, we've borrowed from the way the Department of Industry and Science do the evaluation-ready process, but we've taken the principles and applied them for us. So that would be my hot tip.

I like that, and just to build slightly on it: it's a tool for helping an agency or entity to decide where they're up to now on a range of dimensions, and then have a discussion about where they might want to get to. But it's not a prepackaged strategy for how you're going to improve your budgeting or your leadership's demand or any of that. It's a guide, a framework, that helps you to have the discussions and enables agencies to tailor it to their context and their starting point. Now, David, you've had a hand up for a little while.

Good, thanks very much, and thanks for the presentation. It's actually very enlightening, and it's clearly a step forward in relation to trying to put in place a stronger structure for evaluation. But the key driver for evaluation at the moment is Andrew Leigh. He's the one who's got the 10 million dollars from government, ostensibly to replace consultants in the public sector and build internal capability, but it could well be the stalking horse for the evaluator general and for many of the processes you've been talking about. My question is: what do we know, if anything, about what the AES is doing to support and influence the process that Andrew Leigh is putting in place? He's a junior minister, he's a self-confessed nerd, he's going to irritate other ministers and other senior bureaucrats. He needs support, externally, to get this initiative moving and successful. Is there any indication that the AES is doing anything in this regard?

I might tackle a first point on this, Scott, and then come to you. I know the AES relationships committee, as part of our program, has been doing the AES State of Evaluation report, and we're starting to think about how we use that to engage in conversations with government. That would include thinking about who's setting up the office of the evaluator general and where that's going. The relationships committee is advising the board on what's next and what some of the important priorities are. Some of the things that have come up so far are that emphasis on RCTs, and how the evaluation community might have a somewhat different view about what quality evaluation looks like, and thinking about the benefits of internal, external and hybrid models of delivering evaluations and putting that into the mix as well. And Scott, what were you going to say, and then Kim?

You've said pretty much everything I was going to mention, but just a couple of small additional points. I've swapped a few
emails with Bill Wallace, the CEO of the Evaluation Society. He tells me that the relationships committee and the board are very well aware of this issue and are actively chewing it over. There's some thought of having Andrew Leigh come to the next conference in Brisbane. And I myself know one of the staff members in the area, and we have informal chats. Without mentioning names, the staff member portrays what they understand Andrew Leigh's position to be and then shares their own views with me, which are along the lines of: yes, Andrew Leigh wants to see much better evidence of program impacts to inform policy decisions, but the staff member themselves realises there are a lot of constraints around what's possible or not possible in the RCT space. So I think it's still a little bit early days yet. The press announcements have talked about the evaluator general's office driving improved evaluation capacity; I have no idea how that's meant to happen, or how it relates to the APS Academy or the Department of Finance's role under the PGPA Act. All those interrelationships are very unclear to me personally.

And one of the other points of conversation might be thinking about what role there is for the AES in capability building, given the AES's own capability building program. We do know what happened in Canada some years ago, when government said, look, we're just so unhappy with the quality of evaluations that we're going to introduce accreditation, and the Canadian Evaluation Society responded by saying, well, no, you don't have to do that, we'll do it. That's how they got onto the path of accreditation: it was driven by central government's unhappiness with the quality of the products they were buying, and the CES stepped in to say they would implement it. So it's pretty much an open space at the moment. Sorry, David, I'll come to Kim quickly because she's got a hand up, and then you might have another point about how the AES is engaging.

Thanks, Jade, thanks for that. Yeah, I wanted to share that I'm co-facilitator, with Ruth Nichols, of the ACT committee for the evaluation group in the AES, and we're setting up a panel discussion, a breakfast networking event on the 2nd of June, which you might have seen if you're getting the AES notices. So if anyone's in Canberra, join us; lovely if you can. We've got three local academics who write about these issues in the Canberra Times and associated papers, Wendy Jarvie and others with very long-standing experience in the public service, so they know the kinds of themes a lot of you have been talking about, and we've got a representative from Treasury and from the Department of Finance coming along. The AES locally, as you can imagine, it's a small town, and we wanted to try and network and get together. So as part of the whole set of things the AES does, we also have this kind of arrangement where we talk to one another and try to facilitate those kinds of conversations. I think it's early days, and we're all going to be waiting for budget night next Tuesday with great interest to see a bit more detail about what's coming. It's an interesting year or two ahead of us. But I might suggest we throw back to Ruth, because I think we need to move towards wrapping up.

Just on the way to Ruth: if anyone on the call is not from Australia, and I know there are a couple of people who have joined from overseas, I've popped a link in the chat to Andrew Leigh's book, which will explain where he's
coming from. It's called Randomistas: How Radical Researchers Are Changing Our World, and that'll be a context piece if you want to get in behind it.

I believe we're going to have another paradigm war. I thought we'd moved past this; it sounds like we've had this argument many times. But I guess this goes to starting with people from where they're at. Oh, my final comment is this: the difficulty around this work relates to the status of evaluation compared to other disciplines that have the ear of decision makers, like behavioural economists and economists. So the broader context, the macro context, is the visibility, profile and status of evaluation as a discipline slash profession, and it makes things more difficult, because an economist talks to senior people and comes with assumed authority, whereas sometimes that's not the case with evaluation. In Scott's case it was understood but rejected for political reasons, but often the starting point is "what's evaluation?", whereas you don't have to do that if you're an economist or something else. So, final comment.

On that note, I think we've found a topic for our next seminar, because we could keep going on this one. Thanks for the contributions, everyone.