 respect to elders past, present and emerging and extend that respect to any Aboriginal and Torres Strait Islander colleagues who are joining us on the line today. I'm coming to you from beautiful Gadigal country of the Eora Nation, but acknowledging that wherever we are, we are on Aboriginal land, which was never ceded. It's lovely to have you all with us today and thank you for joining. Just a little bit of housekeeping before we jump into the session: I wanted to let you know that this session is being recorded. I'd ask you all to remain on mute with your video off unless you are speaking, so that you can see the presenters, who should all be appearing on your screen. If you have any questions throughout the session, please feel free to pop those into the chat box. We will have time for Q&A later in the session, and we're really excited to hear from you and to address some of those questions then. My name is Melissa Coltner. I'm an EY-based human services evaluator and I coordinate EY's Evaluation Practice Network, alongside wearing another hat for the Australian Evaluation Society on the New South Wales Organising Committee. I'm really excited to be chairing this session today, which is an AES session focused on cross-jurisdictional approaches to guiding program evaluation and features a range of great guest speakers from across Australia. If you're an evaluator, as I am, you'll be conscious of the way that evaluation frameworks shape our thinking as evaluators and guide our choices of methodologies and approaches in the way we deliver evaluation. The conversation we'll have today will review approaches and frameworks across jurisdictions and delve into some of the similarities and differences between them. We're really excited to be sharing this conversation with you all today. I'd like to introduce you to today's panel. We have Danielle Spurt. 
Danielle is Principal Economist and Evaluation Lead at the Centre for Evidence and Evaluation in New South Wales Treasury, where she works to strengthen the quality of evidence and support government decision making. Danielle has previously led the development of evaluation strategies in both New South Wales and the Northern Territory, and has developed and delivered courses at the University of Sydney and the University of New South Wales related to socio-economic development, regional and remote area development and environmental management. Welcome Danielle. We also have Todd Sandness, who's joining us from Queensland and has more than 20 years' experience working in state and Australian government departments, leading quantitative and qualitative research, program evaluation and performance management. As Assistant Government Statistician in the Queensland Government Statistician's Office in Queensland Treasury, his team is responsible for the Queensland Government program evaluation guidelines and the collection of a large range of official statistics on behalf of the Queensland Government. Welcome Todd. We also have Eleanor Williams, who has recently joined the Australian and New Zealand School of Government as Deputy Director of Research and Advisory. Prior to this, Eleanor held a large number of roles across various Victorian Government departments, including health, premier and cabinet, and education and training, and most recently led the Centre for Evaluation and Research Evidence at the Department of Health. Welcome Eleanor. We have Christabel Darcy from the Northern Territory. Christabel leads the program evaluation unit within the Northern Territory Department of Treasury and Finance and is co-convener of the Northern Territory branch of the Australian Evaluation Society. 
Christabel has a background in health research and has previously worked in science and innovation policy with the Commonwealth Government and in economic policy within the Northern Territory Government. Welcome Christabel. And finally, Narina. Narina is Canberra-based and is a director in the policy design and evaluation team, with a focus on how to embed evaluation into policy design. Narina brings a background in social policy in Commonwealth and ACT Government settings. I'd also like to acknowledge that we did have Kylie Delling joining us, but unfortunately she's been unable to make it today. Welcome to all of our panel members, and thank you so much for joining us. In terms of today's session, we're going to start by hearing from each of the panellists on the frameworks and approaches that guide program evaluation in their settings. We'll then move to a panel discussion, and we'll invite questions from all of you listening in the audience as well. To start, I'd like to invite Danielle to share the New South Wales approach. Thank you Melissa and good afternoon to all. I'm dialling in today from the lands of the Gadigal people of the Eora Nation and would like to acknowledge Elders past and present. I'm proud today to represent the Centre for Evidence and Evaluation within the Economic Strategy Division of the New South Wales Treasury. The unit has a central role in driving evidence-based decision making across the New South Wales Government. Next slide, thank you. So building evaluation across government, as you would all know, has a long history, and I'll only be representing a small part of it today. Great impetus to build evidence and performance reporting was provided by a series of audit reports for the New South Wales Government, which emphasised the need for evidence, efficiency, effectiveness and value for money in government decision making. Next slide, thanks. 
Part of the Government's response was to set up a Centre for Program Evaluation, which is now the New South Wales Treasury Centre for Evidence and Evaluation. The key role of the Centre is to develop an evidence bank for decision making, and as part of that we set the standards for agencies through a number of different guidelines. We work with agencies to build capacity to apply those standards consistently across the public sector, and we also provide review and assessment of CBAs and evaluations conducted by agencies. Next slide, thank you. So to improve standards of evidence, the Centre has provided a series of guidelines documents relating both to ex ante, or before-implementation, appraisal of initiatives and to requirements for post-implementation, or ex post, evaluation of initiatives. Next slide. We are owners of the New South Wales Government evaluation guidelines that sit within this suite of documents. The guidelines are supported by a toolkit of resources that's available online and assists government agencies to implement the different stages of the New South Wales Government program evaluation guidelines. We are currently updating these program evaluation guidelines and supporting resources. This is in line with a general refresh of documents to ensure that they are aligned with current best practice, but particularly with new developments in New South Wales Government policies related to performance reporting and evidence. Next slide. So key to this is the role of outcome budgeting, which has introduced state outcomes as clear statements of what the Government is seeking to achieve for the New South Wales community. Outcome budgeting works with the investment phase guidelines to ensure that evidence supports budget cycle decision making, and also enhances cluster capability in assessing initiatives. 
And here the whole of government approach is very important, because it enables the Government to make decisions across initiatives proposed across government. Last slide. So one of our key activities is updating the guidelines, partly in anticipation of some of the challenges we've identified in the linkages between the different processes. In terms of business case guidance, it's very important that we recognise the role of evaluation and CBA in informing initiative development, and then, as part of initiative development, identifying early the need for monitoring and evaluation planning: both so that monitoring and evaluation can be resourced, so that data can be collected early, and so that awareness of what the initiative is intended to achieve is maintained throughout its implementation. In turn, monitoring and evaluation informs the post-implementation appraisal, which feeds into decision making regarding improving the initiative, but also provides a better evidence base for future appraisals. All of this sits within the overall objectives of New South Wales Government activity, which are identified through the New South Wales state outcomes and outcome budgeting processes. We also have an investor assurance policy framework, where activities with a high risk profile, or of high priority and size, will trigger independent peer assessment at their various stages of business case, CBA, benefits realisation and evaluation. Thank you. That's New South Wales. Thank you, handing over to Queensland. Hi, can I just check you can hear me? We can hear you, Todd. Thank you. Great. Apologies for the lack of video input. I think, Todd, we may have some challenges actually. Todd, you may have just dropped out. Thanks very much. Well, there had to be some technical challenges, didn't there. 
Perhaps what we'll do, Todd, is just move over to Eleanor in Victoria, and we'll come back to Queensland when we have Todd back on the line. That's great, happy to jump in, and we can move back to Todd. Thanks for having me here today. I'll just give the briefest kind of summary, and I think New South Wales, that's just such a great summary of what a highly functional kind of system looks like. If we move to the first slide, just in terms of Victoria's setup: Victoria probably works in a little bit less of a systematic way. I know this is quite a fun photo as opposed to the sort of formal slides, but what I wanted to emphasise is that in Victoria we don't have a central agency evaluation function that guides the expectations across the Victorian Government, but we do have evaluation units in most of the departments. The largest is in the Department of Health, and this is them last year at the Christmas party. This means it works in a bit of a more loosely federated model, and it's why there's a Victorian public sector evaluation network, and they work in very networked ways, sharing materials and solutions across the public sector. But that's not to say there aren't some central agency expectations in place, so we've moved to the next slide. One of the guiding documents in place is an evidence reform strategy launched by the Department of Premier and Cabinet. 
I think that was in 2019, and broadly it sets the expectations in terms of evidence-informed policy overall. This is probably my favourite diagram from that document, which emphasises the fact that with products like government program evaluations, it's not only about ensuring the quality of their supply, it's also about making sure there's demand at the other end. A lot of these systems are designed to stimulate that demand, to make sure that decision makers are effectively requiring high quality program evaluation, and that's what leads to productive use in the middle. There's also a quote at the bottom of that slide which emphasises that these reform strategies are about harnessing evidence to support better decision making, which I think is what most people on this call are passionate about. If we go to the next slide, there are also some requirements from the Department of Treasury and Finance that specifically apply to lapsing programs. So the bulk of evaluation requirements in Victoria relate to budget funding: at the point where a project is funded through the budget process, it's required to have an evaluation plan, and it needs to have met that by the time the budget funding is lapsing. The requirements around lapsing program evaluation do set out, at a high level, how program evaluations should work and what they need to include. 
If we move to the next slide, I've just done a little excerpt from the resource management framework, and I know this will probably be too small to read on screens, but for people who get the slides later, this is the key section which outlines the specific requirements around those budget lapsing programs. They're really asking for slightly different requirements: slightly higher expectations where total funding is $20 million or more, and lower expectations where it's less than $20 million over that four-year period. This is all that sits in place in Victoria in terms of an absolute requirement around program evaluation, which means everything else happens as part of the broader network of activities and the sharing of good practice. I haven't brought them in today, but most of the departments do have good examples of evaluation guidelines and templates and tools in place; there's just not something centrally mandated on that front. I think, do I have one more slide? Oh yes, I was just emphasising, as I've mentioned a couple of times, that there is this Victorian public sector evaluation network, and we're also closely connected to the Australian public sector evaluation network. That really operates to deliver a regular program of events, share good practice and provide that sort of forum; a lot of questions get asked within that network. Say somebody is looking to run a program evaluation and they want peer review or some input, the network operates to provide that sharing of expertise and knowledge across departments. I think that's everything I was going to cover for Victoria. Thanks so much, Eleanor, that was great, and love the photos too. We'll just move on now. Todd, I'll just throw to you and see if your connection will hold. I think that's a no. Let's continue on. 
Christabel, we'd love to hear from you about the Northern Territory. Thank you. Rob, if we can move to the next slide please. I'm talking to you today from Darwin, which is on Larrakia land, and I'd like to acknowledge Elders past, present and emerging. Next slide please. We've only recently set up our whole of government approach to evaluation in the Northern Territory; our framework was released in May last year. I'd like to explain why we felt that we needed a whole of government evaluation approach. In 2018, an interim report by the fiscal strategy panel found that the Northern Territory Government was in the unsustainable position of borrowing to pay for recurrent activities, including interest expenses. The 2019 final report had a range of recommendations to fix the budget, including improving government's approach to program evaluation. The report noted that a whole of government approach to evaluation was required to embed a culture of evidence-based policy across the Territory Government. Government accepted all the evaluation-related recommendations, and the Department of Treasury and Finance is responsible for implementing them. Next slide please. Luckily for us, we weren't starting from scratch. We borrowed a lot from other jurisdictions, especially New South Wales, the ACT, WA and the Commonwealth Department of Industry, Science, Energy and Resources. We contacted our counterparts in other jurisdictions and were delighted at how generous they were in sharing the lessons they had learned along the way. We also established a Northern Territory Government program evaluation community of practice, and this group was very active in reviewing and commenting on drafts of the framework, toolkit and templates. Next slide please. So we now have a team of three people, and we sit within the budget development team. We report to the Senior Director of Budget Development and Evaluation within Treasury. Our role is to coordinate, support and encourage the use of evaluations across government. 
The responsibility for evaluation, or commissioning evaluations, sits with line agencies. Next slide, thanks. As our program evaluation framework was developed in the context of budget repair, we needed to integrate evaluation into the budget development process, and I'd like to finish by summarising how we've been doing this. The Cabinet submission template now includes program evaluation and sunset clause requirements. We use evaluations to inform Treasury comments on Cabinet submissions, and this is important in terms of making sure that we're using the evidence base. A whole of government program master list is developed each year as part of the budget development process, which links current programs to government priorities and strategies, previous evaluations and planned evaluations. A risk-based ranking process is used to prioritise evaluations, and this is important because we need to make sure we're using our evaluation resources wisely. The annual schedule of evaluations is endorsed by the Budget Review Subcommittee of Cabinet each year, along with an update on previous evaluations to close the loop. And finally, a repository of completed evaluations is building an evidence base of what works in the Territory. Next slide, thank you. If you'd like to know more, there's more information on our website. Thank you. Thanks so much, Christabel. That's fascinating, particularly the repository, and I'm looking forward to hearing a little bit more about that as we go along. Narina, can I invite you to share from your perspective? Sure, thank you. I'd first like to start by acknowledging the Ngunnawal people, the traditional custodians of the land from where I'm speaking to you today. I wish to acknowledge and respect their continuing culture and the contribution they make to the life of Canberra and the region. I'd like to start with the first slide by putting the ACT's evaluation policy in context. 
The ACT is a small jurisdiction with only one level of government, and these characteristics shape our approach to evaluation policy. For example, the ACT has a one-service model, which places a strong emphasis on working collaboratively across directorates. This means there's been more of an emphasis on collaboration, capability and guidance rather than formal mandated requirements for evaluation. We've had an evaluation policy since 2010, and during this time our whole of government approach has evolved. Ten years ago, when the evaluation policy was first launched, it sat very much within the performance and accountability framework: it was designed to improve performance and accountability, the main requirement was for directorates to produce an annual evaluation plan, and the role of the central agency was primarily one of coordinating that reporting process. In terms of our impetus for taking more of a whole of government approach, I think what we found was that while the process of developing agency plans had increased awareness of evaluation, and there had been some increased training in some agencies, we needed a more deliberate strategy to develop capability. In our evaluation policy we have a maturity framework, and I think it was recognised that we needed a bit more of a whole of government approach to building that capability. The other driver was an increasing recognition of the need to be able to evaluate priorities that impact multiple portfolios. So that's the context to how we became established in 2019: the policy design and evaluation team was established as a whole of government function, funded in the 2018-19 budget, and set up to complement and support the directorates' own evaluation capabilities and their own evaluation work. 
So we don't have an oversight role or a compliance role; we're intended as a whole of government resource, and along those lines our initial focus has been on capability building. Partly this is because we recognise that this is going to be a key building block if we're looking at how to undertake cross-government evaluations in the future. I just want to briefly talk about the ACT evidence and evaluation academy, because that's one of the key initiatives in our capability strategy and I think it illustrates our whole of government approach. It was launched in April this year and will run until August. It's been designed and delivered for the ACT by Dr George Argyrous from UTS with Dr Duncan Rintoul from Rooftop Social. The key features of the academy are that it is needs-based and tailored to the ACT: participants completed a needs assessment at the beginning, and during the program we also do an assessment of organisational culture from each directorate. The other aspect is that it's aimed at embedding evaluation practice and building a cohort across the ACT public service. In the past we've trialled short courses, and we wanted to take the next step beyond individual learning to developing more of a whole of government approach. So participants bring a work project to the academy, and the idea is that they then transfer the skills and knowledge back to their team, so it's part of this ongoing building of evaluation practice. There have been six workshops, and outside of the workshops participants meet in groups, which includes coaching and peer support. The peer support was a key part of the program, so part of it has been providing participants with the tools to be effective peer supporters. And partly this is because we have an expectation that this will be an ongoing cohort that continues to be engaged beyond the formal program. We see them having a key role in building evaluation practice and doing that collaboratively. 
So ultimately we want them to be a community of influencers. The participants are also supported by evaluation executive champions at the SES level, and this recognises that we want capability to be more than individual skills development: we want to develop that whole of government evaluation culture. And finally, that takes me on to the next point about what distinguishes the ACT's approach to evaluation, and I think it's the ACT's wellbeing framework, which was launched last year, with the dashboard released this year. When we set up the academy, one of the key aspects was that we wanted to apply a wellbeing lens: we wanted participants to have the confidence to evaluate policy and programs using a wellbeing lens. The wellbeing framework provides a shared outcomes framework, so that participants can start to make the connections across programs and start to think in a whole of government way about evaluation. This will become increasingly important as we move to the next phase of embedding wellbeing into policy and budget processes in the ACT, where the whole of government approach to evaluation will become a key part of that wellbeing work going forward. So that's it for the ACT. Fantastic, thanks Narina. I'm really interested to talk a little bit more about the wellbeing framework as we go along too. Now I think Todd may have rejoined us. Todd, did you want to try to speak, and we'll see if we can hear you this time? Okay, can you hear me? We can. So we'll just flip back to the Queensland slides. Technology. It's holding up for now. So thank you for rejoining us, Todd. All right, thank you, and hopefully you can continue to hear me as I speak. So my name is Todd Sandness and I work in the Queensland Government Statistician's Office in Treasury. Next slide. Just a brief orientation of the structure: the stats office's main role is to support the evidence base for policy and decision making. 
And this feeds fairly well into supporting the budget process, economic analysis, policy development, performance reporting and investment decision making. We've got a small team in the evaluation and performance function within QGSO. We provide advice to Treasury and other agencies, undertake evaluation, and have oversight of the Queensland Government program evaluation guidelines. Next slide. Just by way of some context, the guidelines represent the main component of a whole of government approach, if you like. They came about in 2014, following a Queensland Audit Office audit recommendation to clarify expectations on the evaluation of public sector programs. We updated those guidelines in 2020 to simplify the language and support the application of contemporary evaluation practice within the context of government priorities. The previous edition had a fairly heavy focus on economic evaluation; we wanted to broaden that to social and other policy environments particularly. The guidelines aren't mandatory, but they provide a good common starting place for agencies looking for good practice guidelines. Next slide. The main purpose of the guidelines is to support users to understand evaluative concepts. There's content there on building capabilities, so we've got quite a few short information sheets on specific evaluation topics to complement the guidelines themselves. There's a real theme of embedding evaluative thinking early and often throughout the policy cycle: how to actually undertake the evaluation of government-funded programs, and then again a focus on aligning expenditure to government priorities. There's a range of other complementary resources, including the Financial Accountability Act, which has a section requiring accountable officers to achieve value for money by ensuring their operations are carried out efficiently, effectively and economically. 
So the guidelines help to support that function and accountability, along with the performance management framework and the budget process. Cabinet decisions, particularly those relating to significant government strategies and programs, have increasingly required an evaluation framework to be established, with reporting on outcomes. Next slide, which might be the last one. So that's just where our free resources are available, including the information sheets, which we're adding to as we go. Thank you. Thank you, Todd, and thank you all for joining us today. As I mentioned, we will have some questions from all of you on the floor as well, so feel free to pop those in the chat as we go. I'll start off, though, with a question to those of you on the panel who have whole of government evaluation approaches like the one we've just heard about from Todd. What was the catalyst for establishing a whole of government approach, and what are some of the strengths, weaknesses and challenges that you've faced in implementing it in practice? Christabel, did you want to go first? Yeah, sure. So I mentioned that our framework was developed in the context of the fiscal strategy panel's report, and at the time it was simply up to individual agencies to evaluate programs. The panel noted that there was an ad hoc approach to evaluation and that the standard of evaluation was inconsistent across agencies, so the idea was that having a whole of government approach would bring some consistency across government. And for those of you on the panel who are using whole of government approaches, what have you found is working well? Todd, did you want to jump in with some of your experiences from Queensland? Yeah, and just to clarify one of the earlier comments: we're not moving away from economic evaluation, we just want to broaden the scope so that the principles we've included can be applicable across a range of policy environments. 
So just to clarify that point, we're certainly not moving away from economic evaluation; that's a key component of a lot of the high profile, high risk, large scale evaluations that we undertake. What's worked well for us? I suppose providing a level of consistency from a starting point, getting good principles in place across any evaluation regardless of its size. And I think there's also been an increasing acknowledgement of thinking about evaluation at the outset, at the program design and conception phase, not just as an afterthought but early in those discussions, so that's probably worked fairly well from a whole of government perspective. And Narina, I'm interested in your thoughts too. Based on your experience, what's worked well in your framework and how have you found the uptake? I'm particularly interested in the wellbeing approach that you've integrated. What challenges have you faced in practice in having that uptake be embraced by the community generally? Yeah, so in terms of some of the challenges, I think when we were established as a policy design and evaluation team there was probably an assumption that we would do more of those strategic whole of government evaluations of priorities, since we also have a hands-on role in terms of evaluation. I think one of the lessons for us was that that's probably a later stage of maturity, and that some investment in capability building, and in building networks and practice, is needed before you can get into that stage. We're still very much in the early days of the wellbeing work in terms of how it gets embedded into policy and practice, but I think what we're seeing is a real interest in being able to link what people are doing in terms of their programs and activities. 
So the domains are quite high level in terms of the indicators, but there's quite a bit of work now happening within areas to understand what the linkages and pathways are, and then what that might mean in terms of future evidence and the ways they can design to capture that wellbeing evidence, as well as looking at how they might evaluate that contribution towards outcomes. So I think it's still fairly early, but there's a real willingness to embrace the wellbeing framework in terms of those shared outcomes. Yeah, that's really exciting, Narina. And Eleanor, for you, what have you found is working really well in Victoria, and where have been some of the challenges that you've been navigating? Yes, I think we've had that long-established approach around lapsing program evaluations, which means it's a really well established part of the budgeting process, and I think a lot of times when people talk about good evaluation systems, partly it's about how integrated they are into decision making and how much they are part of the fabric of the process. I think what we're seeing is really good practice happening at the individual organisation level, but I can really see the benefits where you do have those more systematic structures in place. On the other hand, what you see is different departments finding systems that work for them in their particular context with their particular content. You know, we're talking about that trade-off between economic evaluations and other kinds, and that works differently in different policy portfolios, so there is some benefit to diversity, but equally I can see there's a lot of attraction in trying to set up a good whole of government process as well. 
There's a very interesting question that's just come through in the chat around the evidence base for whole of government evaluation frameworks, and I think it's an interesting one for all of us as evaluators: we think about evidence all the time when informing practice, but what about our own practice? To the panel, and maybe Danielle to start, I'm interested in your thoughts: what's your view on the evidence base behind having a whole of government approach, and what have you learned from the New South Wales experience and the evaluation approaches that you've undertaken here? Okay, so I think that ties into the earlier conversation too, and as Christabel raised, and as was echoed by many, it's quite important to have that consistency in approach, because the government is making decisions across a range of very different but also very important projects, and having a consistent approach to evidence enables that larger scale decision making. Within the New South Wales Government, all of our clusters have a significant degree of evaluation expertise, although the evaluation function looks different within each cluster. The whole of government approach provides that overarching framework while recognising that clusters do have their own expertise, and certainly where clusters are still developing that maturity we can provide greater guidance, whereas other clusters may be better developed there. I did also want to pick up a point in the chat that was just raised on economic evaluation, and I would like to point out that one of the key things from the audit reports was the importance of recognising efficiency, value for money and transparency. 
The New South Wales Guide to CBA emphasises the importance of net social benefits, so we don't think of it as a narrowly economic evaluation approach. The guidance on CBA is about recognising the breadth and extent of impacts, both qualitatively and quantitatively identified, and using benefit-cost ratios as a final assessment to assist in identifying what that net social benefit may look like. Certainly we see it as quite a broad social benefit assessment tool, and it does provide for that consistency in approach to evidence across a range of different activities, because the benefits it picks up can be social, economic, environmental or cultural.

Thanks, Danielle. And to the panel more broadly, on the evidence base behind your decision making in adopting, or not adopting, a whole-of-government approach, I'm really interested to hear from you: what do you see as some of the strengths of the approach, what do you see as some of the weaknesses, and what are the ways to look at strengthening it over time?
If I can jump in, Mel, I noticed that the Thodey review of the APS had a big section on evaluation and what evaluation looks like in the future of government, and they spent quite a bit of time going through the evidence on the different approaches to evaluation in government. You can have a centralised evaluation approach, where you have a central agency that not only sets up the guidelines but also does the evaluation, so you've got a central agency coming in and evaluating the programs in other agencies, and then you've got the other end of the spectrum, where agencies are all taking different approaches to evaluations. They suggested that the best way forward was what they called a hub-and-spoke model, where you've got a hub in the central agency which coordinates evaluation but takes a step back and leaves the actual commissioning or conduct of the evaluations to the agencies, which allows them to bring in their subject matter expertise. So for people who are interested, there's quite a bit of detail in that report.

That's really interesting, Christabel. Other members of the panel, I do have a question, if no one else wants to jump in, which is related to a number of the questions that have come through as well. It's around thinking about evaluation frameworks and the way that they shape and guide our thinking as evaluators. How do your frameworks, in your view, foster emerging trends in best practice in evaluation? For instance, those that have been articulated in the Productivity Commission's Indigenous Evaluation Strategy: things like collaborative design, empowerment principles and the elevation of lived experience voices.
I'm really interested to hear from those of you on the panel how you've integrated some of that, and I think, Narina, your wellbeing example may be one that we could start with: how you've integrated some of those emerging views on best practice to ensure that we as an evaluation community are not stagnating in the methods that we are using and applying in practice.

So I might start there. Our policy is obviously a 2010 one, which we will be updating in the context of the wellbeing policy, and I think we'll be picking up on some of the key themes around that. Certainly lived experience is quite a key focus in a lot of policy areas, and I think that will have a stronger focus in a fresh evaluation policy. There was also an extensive consultation process for the wellbeing framework, so the indicators and the measures were identified through an extensive engagement process with the ACT community about what was important and what wellbeing meant for them. That process actually included people who may not normally participate in government community engagement; the wellbeing team put a lot of work into reaching out to different voices as part of that process.

What about Eleanor, from your setting?

Sorry, could you just repeat the question, Melissa? I was very taken by what Narina was saying.

Thinking about your framework and approach, how does it support the novel approaches to evaluation, or best practice approaches to evaluation, that are emerging over time? I gave the example of the Indigenous Evaluation Strategy produced by the Productivity Commission, but more broadly, thinking about methodologies like empowerment methodologies and lived experience voices.
How do you integrate that into a model and ensure that it's responsive and able to really encompass some of those novel methodologies?

I think, in Victoria, we've got a looser whole-of-government strategy, so if you look at the evidence reform strategy, that's a very big nod to valuing different kinds of evidence. A lot of what it's saying is that we need to stop thinking about hierarchies of evidence and recognise that different forms of evidence are appropriate in different situations, and that's actually a pretty big disruption when you've also got Treasury guidelines that say we'd expect pretty rigorous analysis of causation when we're talking about big investments. So there's that tension of those two things pulling against each other, but because neither of them is especially prescriptive, it actually still leaves the door open for departments to make decisions on their own. I do think at this stage it's emerging a bit more organically through the Victorian public service which way they go on those questions of method and how prescriptive to be, so each department has its own set of guidelines, and broadly they all say that evaluation methods need to be fit for purpose. There's a lot of content around client voice, and there's a lot of content about Indigenous-led approaches as well, so making sure that's all fit for purpose.
So I think there are pros and cons of having a loose model. It does allow for that innovation and tailoring, but obviously the con is inconsistency. We've had questions about how you compare apples and oranges, and how you cost things like social impacts; they're all very big questions that I think are open for debate in Victoria under the system we have. But the more prescriptive you get, the more you try to nail a whole-of-government approach, the more you have to make decisions on those things; you have to make a call one way or the other about whether you accept innovation or whether you think it's going to create too much inconsistency.

Good point, Eleanor. And Todd, I guess this goes to your point earlier as well around balancing different methodologies and different elements of evaluation. I'm interested in your perspective in Queensland.

Yeah, I'd second the point that the design, the methods and the approaches should be fit for purpose, and somewhat dependent on available resources, time and relationships: do we have sufficient time to allow the right kind of relationships to form with communities, to let them lead some of the evaluations and design approaches? One of our information sheets touches on the branches of evidence: research evidence, contextual evidence, experiential evidence and financial evidence are just a few that we tap into, recognising that there are different types of evidence that are fit for purpose and appropriate. Just one other point I'll make, in terms of how we keep up with the rest of contemporary practice: we stay plugged into AES content, BetterEvaluation and various other sources, and we stay connected with and across different networks, so we're always learning.

We may have just lost you again, Todd, but I think we got the gist: evaluation is a learning field and profession, if
you like. Thank you. Yes, we've got you back now, Todd. Great answer. And Christabel, there's a question here that I'm going to throw to you first and then to the rest of the panel, which is a similar sort of question, but around whether whole-of-government approaches actually help achieve value for government money, in your perspective, and whether the evidence consistently changes expenditure decisions. I'm interested in your thoughts, given that fairly recently you determined that you wanted to go down a whole-of-government path.

Look, it's still really early days for us. Our framework was only released last year, and we'll certainly be reviewing our approach over time, but I do think there is the potential for it to improve value for money. I think one of the really important points of the whole-of-government approach is the coordination that we can bring. As I mentioned, as part of our budget process we now ask agencies to complete the program master list, and for us that is the first time that we have had a whole-of-government list of all of our programs, how they link to government priorities, and an indication of the evidence base that sits behind each of those programs and whether they have ever been evaluated before. That is really useful. As part of that, we also ask agencies to indicate when they're planning to evaluate, and that can be really important where programs have a shared outcome: we can start to encourage agencies to think about where it makes sense to coordinate evaluations, so you haven't got siloed evaluations happening where we're going out to communities and asking the same questions and collecting data independently. Where it makes sense, evaluate programs together. So I think there is potential, but it's something that we'll have to review over time.

Yeah, absolutely. It's interesting, actually, that idea about linking the evaluations so that you're not oversampling; a really important point and something to consider. There
is a question that's specific to you, around the risk-based ranking that you've mentioned. Claire has asked if you could explain a little bit more about the uptake and influence of the evaluation unit, and how you influence the quality and the number of evaluations. You've described a bit about how you prioritise there.

Yeah, the risk-based approach that we use is adapted from the New South Wales approach and their tiering. In our initial discussions with agencies, it was interesting: I think the starting point for prioritising evaluations was simply the funding size of the program, so how much funding the program had was used as the main driver of how important it was to evaluate it. While that is definitely a component, it's not the only thing that should be taken into account, so we've been getting people to think more broadly about what the risk actually looks like. One important element is the risk to government and community, how much the program aligns to government priorities, and what the evidence base is. If something is a really new approach, if we're trialling something innovative, then even if it has a relatively small budget, it probably is something that should be prioritised for evaluation. In terms of the second part of the question, how much influence we've had, it's still really early days. I think it's been really important for us to bring together the people who are already involved in evaluation across the Northern Territory Government; our community of practice now has over a hundred people across all agencies, and finding ways for them to share information and lessons learned with each other will, I think, help over time to improve the quality of evaluations. But still early days.

Yeah, absolutely, Christabel. And given that you did borrow some of those ideas from New South Wales, Danielle, I'm interested to hear from you how the tier system is working in New
South Wales.

I would very much like to also weigh in on the earlier discussion about the whole-of-government approach and its effectiveness.

Please do.

From our perspective, it's not a top-down process. Our work in building an evidence bank and in updating, say, the evaluation guidance is very much a collaborative process, so just as important are all those conversations that are happening across clusters about what's feasible, what's best practice, what we want to prioritise in terms of requirements, and getting an understanding across different clusters of where our skills are, how we approach things and what the business models look like. So in that sense, as I'm emphasising, the whole-of-government approach is collaborative, and that's the key aspect of its value: we're developing this and learning together.

On the point about tiering and risk: as many of us have recognised, evaluation and performance reporting are still building in maturity, and we're not at a stage where the entire portfolio of activities is being evaluated on the regular basis we may be aiming for, so certainly there's a need to identify where we're going to prioritise evaluations. The gateway process is an effective way of saying these are particularly high-profile or high-risk projects, and we need to ensure an independent review of the appraisals and evaluations related to those initiatives. However, the framework itself doesn't determine what should be evaluated and when; that's an internal agency decision, often made in communication with Treasury. Certainly there's value in a consistent evaluation schedule that considers performance across your initiatives when budget decisions are being made and when information is needed to inform decision making, whether it's further investment or reframing an initiative. And there's a lot of value in evaluating smaller programs when there's potential
to demonstrate lessons learned or best practice across an area, where the learnings can be applied elsewhere. So the tiering is important to ensure that those big, important initiatives are well monitored, but certainly evaluation plays an important role across an entire portfolio of initiatives.

And Danielle, you touched on a point there that I wanted to explore a little bit more, around independence more broadly. I'm interested in this, and Christabel, you spoke about it earlier too: thinking about the internal conduct of evaluation, the balance that you are all striking within your individual frameworks between independent evaluation work and internal program-led evaluation, or perhaps independent government-led evaluation. I wondered if each of you wanted to talk a little bit about the way your frameworks drive or support consideration of which evaluations are conducted externally and commissioned versus those that you keep in house. Eleanor, did you want to go first?

Well, this in general is one of my favourite topics: the ethics and appropriateness of internal versus external evaluation, having led an internal evaluation unit for a long period. The frameworks in Victoria really don't specify, except for the fact that for lapsing program evaluations you can't have somebody who designed the program evaluating the program. That's the full extent of the limits in Victoria, which really leaves the door open for someone even just next door to a program to be able to evaluate it. And I actually think that's fine, because as long as the method and the approach are appropriate and validated, and the evaluation is well designed and well executed, it doesn't matter so much who does it. Obviously there's a school of thought that says internal evaluation units inside departments aren't appropriate because you're evaluating
yourself. My position, obviously, is that I disagree with that. I think you can put all the appropriate checks and balances in place, and actually the commercial relationship, when you have to externally procure an evaluation, effectively creates the same conflict. So at some point we have to trust that if these things are set up right and executed well, there's no reason the entity delivering the evaluation has to play such a critical part. I do think it can be helpful to have frameworks which say how close or how far away you need to be in order to evaluate, but I don't think the crucial question about the quality of what you're going to get delivered is about how close it is to the program itself; it's really about how the thing is conducted.

Todd, if you're there, in your view from Queensland, what sort of emphasis do you put on external evaluation versus internal evaluation activities, if there's any guidance in your framework itself?

Can you hear me okay?

We can, yeah.

Fairly similar. We don't have that kind of tiered, risk-based approach either. Sometimes there may be a cabinet decision that requires external evaluation, and often that relates to the risk, scale, resource intensiveness or profile. We don't have a massive breadth of evaluation and research functions across the departments in general. There are some small-scale evaluations conducted in-house, and where capability building can support that kind of in-house approach, leveraging off the guidelines, that's okay too. So it often comes down to that risk profile and the capacity within the agency to deliver.

And Narina, from your perspective?

So we've had a focus on building internal capability, and it's an interesting question, because I think people often ask this question about independence and we
probably have a few examples of hybrid models, where we're supporting internal areas to do the evaluation but there may be a data assurance process, or there may be an economic analysis that's done externally. When we're working with different parts of the public service and they raise this question, we encourage the full gamut, which includes looking at some of those hybrid models. There are advantages sometimes in doing it internally, in terms of how that leads to continuous improvement in the program, but you also want to have some of those assurance processes. So we have done a couple of examples of that hybrid model, recommending things like peer review, considering whether there are components that can be done independently, and also thinking about how you set up your reference groups and what degree of external representation you have on those. I guess it's an open question, but we also encourage people to think about that hybrid model.

You touched on a couple of other points that I wanted to delve into, including the governance arrangements, so we might go there first. When we think about evaluation governance, having those stakeholders who have either lived experience or who are close to programs and service delivery is really important to help shape and guide evaluation approaches. I'm interested in whether any of your frameworks actually encompass that sort of direction around governance. Danielle, did you want to speak first, from New South Wales?

Yes, so certainly our 2016 guidelines have advice on governance, and also on when it's appropriate to bring in an external consultant and where activities can be undertaken internally. I think those questions overlap, because regardless, you need an appropriate governance structure that is drawing upon the internal requirements for the evaluation, so
you can be clear in terms of who the audience is, how the information is going to be used and who should be feeding into the process. I think that relates again to the question of the use of internal and external resources: while external resources can be brought in at various points, perhaps for assurance, but more often because of immediate resourcing issues or particular expertise that is needed, you never move away from the importance of internal evaluative thinking. It's very difficult to bring in an external consultant late in the piece if that evaluative thinking hasn't been set up at the beginning of the initiative and if monitoring isn't in place. So I think we've always got to be careful about suggesting that evaluation is something that can be outsourced, and that someone can come in late in the piece and just pick it up. In that sense, the governance is very important at the beginning of the project: thinking about who's going to be responsible for establishing the monitoring, and who's going to be responsible for ensuring that the evaluation is undertaken at appropriate points as a basis for decision making.

It's a very good point, Danielle. I guess for me as an evaluator, the most exciting point to be brought in is at the start, when a program is being established. Sitting externally to government and watching tenders come out, we have seen a transition to bringing in evaluators early, which is really exciting to see. But I'm interested in how much your frameworks drive that too. Christabel, maybe from your perspective in establishing your framework, was that something you were conscious of really promoting: evaluation early and partnership early?

Yeah, planning for evaluation early is really important as part of our framework. The internal versus external discussion is interesting; we haven't got any hard and fast rules, it simply depends on the context. There are
pros and cons to both sides. We certainly like the idea of agencies being really involved in evaluation planning. They might have external help as part of that, but I don't think we should ever think that we can outsource the entire evaluation planning; that really should be part of the policy design process. We also like the idea of agencies being involved in some of the simpler, smaller evaluations, especially process or implementation evaluations, where it's simply looking at how a program was implemented, with the idea of building capability over time. There's always going to be a need for independent evaluations, not because we can't get the method right internally, I think we can do good evaluations internally, but sometimes we also need that perception of independence, and I think that's an important one for us. In terms of governance, we have an evaluation work plan template which we ask agencies to fill out within six months of their program being approved, and that steps them through who will be part of the steering committee, when the evaluations will happen and what sort of methods they will use. That's something we review here at Treasury, so it becomes a discussion point for us in terms of whether they've got the methods right and whether they've got the right people involved.

Great. And Eleanor, from the Victorian perspective, one of the other things I'm interested in more broadly, and we've touched on it a couple of times, is capacity building. I'm interested in how you've integrated capacity building for evaluation into your model and into your work, driving that capacity across agencies.

Well, it's such a good question. In Victoria it's still happening agency by agency. The Department of Health runs a pretty big program of capacity building, across evaluation 101, literature reviews, program logic and data visualisation, so
it's got a whole program of capacity-building activities, and they have been offered out to the VPS evaluation network to participate in. But broadly it's actually targeted towards one big department, and now two departments: the Department of Health and the Department of Families, Fairness and Housing, which used to be DHHS in Victoria. So there isn't a whole-of-government approach, but the work that comes out of the big team in the Department of Health is sort of spreading back out. There's clearly some very cool work happening in the ACT, I guess, around that leadership academy approach as well, so I feel like other jurisdictions have probably taken a more cross-government approach on that one.

So we might throw to Narina then. Did you want to talk a little bit about the ACT model?

I think size is a factor, and being a city-state is a factor, in why a whole-of-government approach to capability makes sense for us. We learned from earlier on, when we just did tailored training, that one of the things people really enjoyed when they participated was making connections across different areas and with peers in other directorates. That's been a big part of the academy approach, and we're really keen to not just make it the five months of the program but also to keep building on it as a cohort, so it has a bit of an accelerating impact. I think we'll start to learn some lessons around how we do selection for the academy. We did it on individual work programs, but I think we might start looking more at how we make it a bit more team-based, and at some of that sustainability: how do you address things when individuals move on or change, and how do you start to embed it more into the
organisational structure? So I think those will be some of the issues that we'll be thinking about going forward, but it certainly works for us as a model, and I think that's partly size but also feedback from participants about where they get a lot of the value out of it.

Yeah, fantastic. Panel members, did anyone else have comments they wanted to make? Yep, Christabel's got her hand up.

I just wanted to say, with the capability building for us in the Northern Territory, one of our challenges has been our remoteness, and so capability building was often something that involved flying someone up to have a workshop in Darwin, for example. In that context, online training has been really valuable, especially the online workshops that we've had through the Australian Evaluation Society. Being able to share those opportunities has meant that suddenly workshops are available much more frequently than they have been for us in the Northern Territory previously, so that has been really useful. I also just wanted to say that there are lots of different ways of building capability. There's the formal stuff, but I love the informal stuff: sharing lessons learned, I think, is a great way of building capability, and that's something we're trying to do through our community of practice as well.

Yeah, and it's a really interesting point that you've touched on there, more broadly, around the impact of COVID, which we've spoken about, and I know we've held some previous panel discussions with some of you on the panel here around the impact on evaluation more generally. But it has enabled us all, as an evaluation community more broadly, to connect and to share learnings, so there have been those benefits too. Danielle, I'm interested in your perspectives on building capacity in New South Wales, given that you're a fairly large jurisdiction.
Yes, so as I pointed out previously, we approach this as a collaborative process, and I think it works both ways: we work with agencies as they identify their best practices and their challenges, and we can also share those with them through our collaborative groups. I did want to point out, in relation to some of the earlier discussion, that, say, with New South Wales Aboriginal Affairs and the OCHRE plan, those ongoing developments feed into what we do, as does, say, the work of the Department of Customer Service in terms of recognising customers and stakeholders. In terms of more general capability building, there are a number of communities of practice that clusters run themselves, and we have the opportunity to sit in on those. When we finalise the evaluation guidelines, we will also take them out to the clusters and utilise those communities of practice to help run sessions on the new guidelines and how they're applied. New South Wales has a number of cities and a lot of population in regional locations, so some agencies are catching up with others who've already had good use of technology in place, such as the Department of Regional NSW, to connect different groups. So yes, some of them are already well established in coping with online communications, but certainly it's provided new forums.

Yeah, absolutely. I'm just going to open quickly to the floor: were there any other questions from the floor? Anyone want to come off mute and ask something? It's been such a fascinating conversation, everyone. Thank you so much for your participation.