I'm just so thrilled to see you all here today, and thank you so much for letting us know where you're sitting as you join us. It's a really common practice now for us to acknowledge country, and I want to make sure we take a moment and do it justice. When I think about evaluation I often think about learning from those who have come before, building on the wisdom that exists, and so when I acknowledge the elders of the country I am dialling in from, the Boonwurrung country down in St Kilda in Melbourne, it really gives me pause and a deep sense of wishing to connect with and honour the work that has come before, the people and the leaders who have come before, the Aboriginal people who are here with us now from Australia, and the other First Nations people from other countries, and to think about how we build on what we know so far by learning from those cultures. One of the challenges that we continue to grapple with as evaluators is how we do evaluation better in accordance with self-determination principles and practices. It's an area I still feel entirely novice in, but sitting here and paying acknowledgement to the country, I am trying to do better, and I think we as a profession are trying to do better, and I really wish to pay my acknowledgements, and acknowledgements from the network, for how fortunate we are to be able to build on the work and the history of the people who have come before.

Just a little touch on housekeeping: we are recording today, and the recording will be made available by the AES as soon as they can manage it. I'm sure it will happen pretty quickly. We will be taking notes, and we're taking questions in the chat, so if you have questions as we're going through, please feel free to put them in.
We, as always, really value your thoughtful contributions, so if there are questions that are beyond the scope of this conversation, please let us know what they are, because it helps Christabel and me and the other members of the steering committee think through how we might better take the network forward. The format for today: I'm going to give you a brief update on the work of the Australian Public Sector Evaluation Network. Then we are really delighted to have two colleagues with us from the ANU, who have done a lot of really fantastic work in this space, to throw out some ideas for our consideration about evaluation in the public sector, and then there will be a facilitated conversation at the end.

So allow me to introduce my colleagues. Dr Christabel Darcy is the Assistant Director for Program Evaluation in the Department of Treasury and Finance for the Northern Territory. Dr Rob Bray PSM is here with us from the ANU. He joined the ANU in 2010 after a long career in the Australian Public Service and has been seminal in a number of really key and complex evaluations, which I'm sure he'll touch on as he speaks. Professor Matthew Gray is also here with us. He's Director of the ANU Centre for Social Research and Methods and has previously held important positions such as Director of the Centre for Aboriginal Economic Policy Research, Director of Research for the College of Arts and Social Sciences, and Deputy Director of the Australian Institute of Family Studies. So we've got people with really deep lived experience within the public sector who have also spent their careers doing really deep thinking about this tricky topic of evaluation in the public sector. In a moment I'll hand over to Christabel to introduce the meaty part of the session, but I did want to give a little moment to the Australian Public Sector Evaluation Network, or APSEN as we're known.
We launched at last year's festival, so we're still pretty new, and what has been really fantastic to see is that there is such a huge appetite among enthusiastic, engaged, skilled people in the public service and in the public purpose sector who want to change the world through evaluation. We now have nearly 500 members in APSEN. We aim to provide a dedicated informal network to connect people working in evaluation in the public sector, to share information and build capability across the membership. We have a tremendous steering committee; Christabel is a key member, and we have another eight steering committee colleagues. We've been working really hard behind the scenes to push ourselves on how we can best turn that aim into something useful, practical and doable within the context of a pandemic, and the thing we're most proud of is the SharePoint site. APSEN has its own SharePoint site, available to all members, with a raft of really helpful tools, templates and artefacts that can help you as you're grappling with your evaluation challenges. So, a plug: I'll put a note in the chat later, but if you haven't already joined us, please do. We'd love to see you as part of the membership. You can email us at apsen.asn.au. I could talk about APSEN all day, but I'm actually more interested to hear what Christabel, Rob and Matt have to say, so I'm going to mute myself and hand over to Dr Christabel Darcy.

Thanks Jo. We just wanted to give a little bit of context before Rob starts the presentation formally, because as evaluators we should be interested in the evidence base of our own approaches, including our evaluation systems.
When Rob and Matt's research was first published, it was just at the time we in the Northern Territory were setting up our whole-of-government approach to evaluation. We were moving from a very decentralized approach to a coordinated central approach, and it was a major reform that involved changes to our budget development process and our Cabinet submission templates. We knew that simply writing a guidance document wasn't going to be enough; we also had to drive some cultural change in the way people perceived and approached evaluations, and we had to explain why we needed this change. This research, Evaluation and Learning from Failure and Success, makes a strong case for a centrally coordinated approach to evaluation in which responsibility for evaluations continues to sit with line agencies, and my understanding of the evidence is that this approach keeps the close link between evaluations and the program area while avoiding the downsides of a decentralized approach. In the NT we are hoping that our centralized approach will improve coordination of evaluation across government, support a consistent standard of evaluation, help prioritize and identify gaps in evaluation, and build a centralized repository of evaluations. It was just so valuable to have an authoritative resource supported by thorough research and a superstar reference panel, which included people like Patricia Rogers, Tom Calma and Nicholas Gruen. Now, I don't know whether the way we have integrated evaluation into our policy and budget development process has been a change for the better, but we will be evaluating the reform, and I can say that we will do our best to learn from the evaluation findings and add to the evidence base on how to improve evaluation functions in government.
As this research influenced the way we established our whole-of-government evaluation approach in the Northern Territory, I'm so pleased that Rob and Matt have agreed to present these findings today as part of the festival, and I hope you find it as thought-provoking as I did. Over to you, Rob.

Thanks. First of all, I'm talking from Ngunnawal country, and I acknowledge that we're a settler society that lives on lands which were forcibly taken from the Indigenous people of Australia. I'd also, as an introduction, thank the AES for the invitation to talk today. I'll just get the slides up and hopefully get into it. Yes, I hope that's on everyone's screen now. I'll talk today drawing on two papers. The first is one we prepared for the Australia and New Zealand School of Government as an input to the Thodey review of the Australian Public Service; that was done in association with another ANU researcher, Paul 't Hart. The second is a journal article in the Asia Pacific Journal of Public Administration, which we did with David Stanton, who once again brings a very long history of evaluation and public administration to it. Matthew and I are both from the ANU. My background, as Christabel mentioned, was in the public service before I moved over to the ANU, and Matt has had a similar mix of experiences. Some of the major evaluations we've been involved in have been the evaluation of New Income Management in the Northern Territory and, most recently, the evaluation of the child care reforms, which we hope will be out in public fairly soon. The focus today, however, will be on some of the material coming from the ANZSOG paper. As I mentioned, it was developed to feed into the Thodey review, so why not start with the conclusions.
The Thodey review recommended that a culture of evaluation be embedded in the public service: specifically, that the Department of Finance develop an APS-wide approach to build evaluation capability and ensure systematic evaluation of programs and policies; that a central enabling function be established in Finance, with departments establishing their own evaluation functions and publishing annual plans; that all evaluations should be published unless exempted by Cabinet; and that Cabinet establish a systematic approach to the formal evaluation of all programs and policies. The government agreed in part. It spoke of establishing a small team within the Department of Finance, it decided that publication should be "where appropriate", and it rejected the systematic approach of embedding evaluation within the Cabinet process. It did, though, note the creation of a specialist evaluation profession within the public service.

Moving to the paper: it was essentially written around six questions we were asked by ANZSOG, but at the same time we included an initial discussion of what we really saw as the tensions that come out. These are the four questions I'll deal with in more detail as I talk: what is accountability, the link with learning, the problem of the immediate, and the balance between centralization and decentralization. I'll also refer to big data, a topic we picked up in the Asia Pacific paper. But before I get into the problems, it's most probably worthwhile looking at what's not a problem. The first is that we echoed Shand in his finding that the major issues in evaluation are managerial rather than methodological. Essentially, our view is that the public service has the capability and the skills to undertake and to manage evaluation. That's not to say these can't be enhanced, but rather that the base skills are there.
The second is the old debate about whether evaluation should be done internally within the public service or externally through the use of consultants and others. We did not see that as a really big divide; there are merits in both approaches. External evaluation takes you outside the immediate, allows a different perspective to be brought to the problems, and allows multidisciplinary teams, large fieldwork and a lot of other things to be organized, and sometimes that's appropriate. Internal evaluation allows people to use a lot more of that detailed expertise and knowledge of programs on the ground, and it allows that phenomenally important use of contacts within organizations to obtain information and to understand why things have been done. So it's really a balance.

In terms of the public service and evaluation, we did though feel there was a big need for the public service to think a little more about the management of evaluations, and a couple of points come up; some of these are of course influenced by the fact that we're now working as external evaluators. The first is not to attempt to micromanage external evaluators. If you've taken the risk to go down that path, you stay with that risk and you allow the evaluators to work; especially if it's a long-term evaluation running over a year or two, weekly reporting and the like does not facilitate that process. The second is to avoid task fragmentation. At times the public service will get someone to do one bit of an evaluation, someone else to do another bit, and a third group to collect the data, and that makes it very difficult to do a cohesive evaluation. If you're relying upon data someone else has collected and the particular questions they posed, or if bits of the evaluation are done separately, you do not get that whole picture.
Access to data is phenomenally important, and we've experienced a number of cases where, while the evaluation people are committed, the rest of the organization is not, and so the individuals who are custodians of the data are not necessarily willing to provide it. Only two more problems. Timing: we get the surge of RFQs that come in in April, when everyone has realized they've got some money left in their budgets. The bottom line is these evaluations are never delivered within the original timescale, so just think about that please. And the final one is to think about how you manage external evaluators over the longer term. Did they do a good job? Who are the evaluators you actually want to keep on using? Who are the evaluators you don't want to use? Do you share with your colleagues across the public service who was a successful evaluator and who not to touch? So often everyone seems to go back to square one, and that sort of shared corporate knowledge doesn't seem to be used in the process. So there are those management issues that most probably should be looked at.

Moving on to the question of accountability, the first of those big issues I raised, and this comes very much to how we pitched what we recommended in that paper for the Thodey inquiry. We accept that the Australian government does follow the Westminster principle, that traditional responsibility of departments to their ministers and ministers to Parliament, although we notice that these days the accountability tends to be much more ministers to Cabinet or ministers to the Prime Minister, with accountability decisions being taken on that basis, often in a politically expedient way, rather than full accountability through the Parliament.
So I'll come then to the Evaluator-General approach, which, for those who don't know, was a proposal particularly strongly backed by Nicholas Gruen but also by a number of others, suggesting that an Evaluator-General, as an equivalent to the Auditor-General, be established that would undertake and manage public sector evaluations and would report directly to Cabinet. We looked at this proposal in quite some detail, and it was really a big question of balance: what we saw were lots of positives around the potential of this type of approach, and negatives around the reality of its being able to be properly instituted in Australia at this time. The real problem is that it requires very long-term bipartisan political commitment, and that includes the dollars. As we know, even with the Auditor-General there are questions about whether sufficient resources are being given to fully do the function, and that is an incredibly well-established institution within Australia's parliamentary democracy. So there were questions about whether a second mechanism would actually be able to garner that amount of support. The other issue is that within government, departments manage their contact with the Auditor-General very carefully, and this was not really what we saw as the best environment for really good evaluation: departments have always tried to keep their distance from the Auditor-General and to manage what information flows, so the second problem was that we could see that being replicated.

Coming to two other questions of accountability: the scope of evaluations, and objectives. Looking at those traditionally, and this is a quote from Volcker, who talks of evaluation having three lines: efficiency, how well it's been implemented; effectiveness; and then appropriateness.
Now, across the literature this issue of appropriateness is quite contested. If you look at the UK's main evaluation book, I can never remember which colour book it is, it's not mentioned at all, and in a lot of the US material it's not mentioned either. But this third one, around objectives, is: is it consistent with the needs of the client group and with wider government policy considerations? Firstly, of course, these two objectives can often be in conflict with each other: the interests of the client group for a program are not necessarily the same as the government's policy considerations. Now, Thodey had spoken in some early material about the rising expectation of citizens for more transparency and accountability, and for that reason we actually see this third objective as a very important one in public sector evaluation, but it's one which does have to be managed quite cautiously by those undertaking the evaluation.

The second point is that we tend to evaluate against program objectives, and these next two slides talk about the program objectives of Newstart, that's income support for the unemployed, I think it's now JobSeeker. In 1997-98 the DSS annual report gave a really clear objective: to ensure that unemployed people received adequate levels of income to support themselves. It then had a stack of indicators, including adequacy, whether the payment enabled financial independence, and take-up, because it was seen as important that the program provide benefits not only to those who applied for it but also to those who were eligible and did not apply; it was obviously a program effectiveness issue if people were not taking it up. Then there were measures of efficiency, customer service and protection of human rights. Now, the same program 20 years later is described in the DSS Portfolio Budget Statements as being to assist people who are temporarily unable to support themselves, and the performance criterion is that there's an agreement in place with the Department of Human Services and that the payments are made in accordance with relevant legislation, policy and guidelines. The concept of adequacy is not mentioned there at all. In fact, even further, DSS was responsible for the whole-of-government submission to the 2019 Senate inquiry into the adequacy of Newstart, and the word "adequacy" in that submission appeared only once, and that was in the title reference. So from the perspective of evaluators there's this question of evaluating against the objectives: what are the objectives, what are the real objectives, what are the unstated objectives that so often sit behind government policies, and, if you're thinking about embedding long-term evaluation, what happens when the objectives are reframed to this degree?

The second subject is accountability and learning, and this is a challenge for both organizations and evaluators. If one goes to the classic statement of the functions of evaluation, and I'm quoting here Mark Bovens, there is accountability, making sure that public institutions and their staff are held accountable for their performance as well as for allocation, and then learning. The problem is, if we use evaluation in this accountability sense, as a matter of judgement of the performance of people, how do we also build in learning, and how can we build good learning organizations? Now, these points here, there are six of them, come out of work by, in particular, Paul 't Hart. One important one, most probably in terms of the network, is the third: look widely and compare performance. Building a learning organization is not just thinking about your own performance and your own experience, but asking what are the parallel experiences, what is the international experience, and looking at all of that and drawing upon it to help build your organizational focus. Sitting above all of these I see two quite strong, important issues. The first is corporate memory. Building a corporate memory is absolutely essential to
having a learning organization, because when you come to the last bit, lesson-drawing from what's happened in the past and sustaining it, you actually have to look both backwards and forwards. Looking widely also involves looking at your past history, so corporate memory is very important, and so is how you preserve corporate memory through evaluation reports and the like. The second is building organizations where it's safe to be self-critical. I've got no simple answers on how one does that, but we should at least recognize that it is a challenge.

The next focus is the problem of the immediate, with governments increasingly focusing on the short term. We all know the 24-hour news cycle, and that really poses a big problem, especially since it's a news cycle that is into "gotcha", and it comes back to that question about accountability and learning. Unfortunately, within the public service it also means we really do reward the fixers, those who can play with that 24-hour cycle and come up with the instant answers, not those who are looking at the long term. The problem is that evaluation takes time, and there are a couple of sub-elements to that. The first is that if evaluation is being done, you have to quarantine the resources, and the worst of it is that you're most probably pulling in some of your brightest people to do the evaluation, the same people who could contribute elsewhere in your organization, and that has to be managed. The second is that if you're doing evaluation, knowing the pressures of the short term, how can you make timely findings, by giving feedback as the process goes through, without jeopardizing your big evaluation project and those balanced judgements you often only make right at the end? Investing in data is a fairly important step towards achieving some of this, because if you've already got good data that can be drawn upon, it does cut down the amount of time evaluation takes. But it leaves all of you with that real challenge of building learning and reflective organizations while acknowledging those pressures of the short term.

Sorry, what's happened is the chat comments have just come up onto my screen. The fourth issue is the question of centralization and decentralization. I've already mentioned one part of that, to do with the Evaluator-General, but more broadly, to my mind it is a perpetual challenge, and not just in the field of evaluation: it's the classic challenge of policy and program separation, and that concept and practice of having the theory and the practice together. It's most probably one we at times agonize over too much, because I think it's fairly unlikely that there's a perfect outcome. Rather, what we tend to do is swing one way in an organization and then swing back the other way, because there are always benefits in having them together and benefits in having them apart, and we just have to accept this transition from one to the other. The real challenge is leadership, because with good leadership you can operate under both of those circumstances: when the function is centralized, good leadership can force people to think outwardly; when it's decentralized, good leadership helps build the bridges. So leadership is really critical. Our preferred approach was one which really reflected that balance: a unit in a central agency to give leadership across the public service, to provide overall oversight, and to ensure a fairly comprehensive approach. It wouldn't be doing all of the work, but it would have this constant role and constant focus on leadership and comprehensiveness across government departments. Having a centralized function in a central agency can also be used to build and enhance capacity, and to enhance mobility, because there's a lot to be said for having experienced evaluators moving across functions within departments.
We'd argue for centralization very much to give critical mass, because no single evaluator has all of the range of skills you need to conduct a good evaluation. You need people who think in both quantitative and qualitative senses, people who are very good at working with people, people who are very good at working with data, and for all of these things you want a critical mass, while still keeping that mass close to the program, which we consider one achieves within departments. So those were the key points for Thodey.

The final one is big data, and I'll only mention this briefly because I see time is marching on. Big data has many positives. It utilizes existing data, which, as I said earlier, is really important, because if you've got the data there already it really helps cut down the evaluation task. It allows us to look at small population groups, something we so often don't do, but where we know that, because programs have heterogeneous impacts, we have to be aware of what happens to small groups as well as the average effects. It gives us lots of covariates, and big data derived from program sources gives us a good linkage to program interventions, because we know who has been treated. At the same time it has negatives. At times it's limited to program specifics and the concepts underlying them: in social security, the concept of income is very specific, so what's recorded as income is not necessarily the income you want to use for other purposes. There are enormous issues around privacy and social licence; we've seen that with My Health Record and the number of people who withdrew from it, and with some of the challenges around COVID reporting. Social licence is something that has to be built through trust, and unfortunately in public administration we've not always taken the actions which build trust, but have rather built mistrust. The other thing we should never forget is those at the margin, because while big data enables us to get to some small population groups, it's just as likely that those who are at the margin of our society are also at the margin of big data. They're the people who will not necessarily use a health service, the people who may not necessarily apply for a benefit, and so to simply rely upon big data, thinking of it as comprehensive, will just lead to further neglect of groups who are already neglected in our society. The challenges are fairly simple: there has to be a commitment to building and maintaining data, because we've had in the past some excellent data sets that have just disappeared and were never maintained, and we need good access for those who need to use the data. But in the end, most probably, big data still promises more than it will deliver; Matt will most probably disagree with me and tell me that he can deliver a lot with big data.

So finally, the lessons. Culture is critical, and the words leadership and stewardship come through. It really needs to be embedded structurally, through formal units and embedded in processes. It has to cover both evaluation and how we learn from it, which means that both as evaluators and as organizations we have to be reflective and wide-looking. We have to invest, of course, in skills and data. But the remaining challenges are there, and a lot of them are to do with the nature of government and public administration in Australia. So, thanks.

Thank you, Rob, that was just fantastic. Matt, I'll welcome you into the discussion, and perhaps I'll give you the first opportunity to correct anything that you think Rob said that was incorrect, or anything that he might have missed. No, I'd agree with everything Rob said, and I think that the emphasis on the challenges of accountability, and the commitment that's required, is really the fundamental issue. Look, there's been some great discussion in the chat. Rob, you probably haven't had a chance to see it, but it's really good, and it certainly looks
like you have sparked some thinking there, and it's really good to see that coming through. One of the questions I wanted to kick off with was just in terms of how things have changed. In particular, the ANZSOG research that you did was two years ago now, and we brought in the big data at the end, but is there anything else that you think, if you were to do similar research now to feed into a similar review, you would add or change compared to what you did in 2019? I'll let you go, Matt.

Yeah, I mean, at one level the response from the Commonwealth Government is enlightening, which is partial acceptance of the recommendations, the ones that suited them. And I was very heartened to hear your discussion about the Northern Territory and that it was useful for your work. From my point of view, I haven't changed my view about what needs to happen. I think there's the interesting question of the extent to which it should be internal to government versus a role for an Evaluator-General, and we came down just on the side of a more central function within government structures, because we felt that would embed the learning and the culture change more firmly within government and hopefully lead to a greater culture of learning. The question is whether that's actually going to happen, and if it doesn't, then maybe something like an Evaluator-General becomes a more attractive proposition.

Actually, on that, I have just a quick question around the APS evaluation profession. What are your thoughts on having this professional stream, and how important is it also to have basic evaluation skills within all policy and program offices?

Look, I think the trouble in the public service is that specialists do not necessarily move through organizations well, and that's one slight danger with having an evaluation profession: that they get sidetracked into only doing evaluation. While there is an awful lot you have to learn about how to do an evaluation, equally the skills you need as an evaluator are very frequently the skills you need as a program or policy developer or as a program analyst, and so I don't necessarily see a division between evaluation skills and that broader set of skills. To that degree, a lot of it is to do with mindset: in those other areas you're so often focused on seeking the answers, while in evaluation you're a fraction more reflective, in that you're both seeking answers about what has actually occurred and also answering a lot more of the question of why it has occurred, and so there is a dual function you have to perform within evaluation. Matt, what are your thoughts?

I agree with what Rob said. I think that whether evaluation is being done within government or contracted out, to get the real value from it the people involved in outsourcing and contracting evaluation need to understand something about evaluation, the technical side of it, and how to interpret it, and that is one of the real challenges: how you get the benefit from external, outsourced evaluation. People will sometimes say that to be independent it needs to be outsourced, and in my experience you can have fiercely independent, high-quality evaluation done from within government or outsourced, and you can get evaluation that's designed to produce a certain outcome whether it be done within government or outsourced. Even with universities, where the academics might claim they're not affected by commercial incentives, they are; many researchers simply require the money to fund staff and so on, and so there are those sorts of
pressures that can be brought to bear. Also, certain people have known views on certain topics, and if you choose to go to them you're likely to get an answer that you can reasonably predict before you engage them. So there is that interesting question about the ability to effectively manage, to ensure that you get high-quality evaluation, and to be able to use the findings and learn from the experience.

I think that last one is a great point, and although I'm very biased, I think evaluation skills make us better public servants. Even a basic understanding of data analysis and statistics, and being able to read an evaluation report and see its flaws, is a useful skill, but obviously I would say that. More broadly, as public servants there are some things we can change and some things we can't. For people who would like to change the way evaluation is perceived within the public sector, what are some of the things you would say individual public servants can do? Rob in particular, you've been in the public service before and you know how tricky it can be. What can individuals do?

Right, well, I think having an understanding of evaluation in the first instance is important. If you think in terms of how programs are set up, what's actually stated in the objectives suddenly becomes really important when you come to evaluation. So when people start thinking about how to express a program, think about expressing outcomes in terms of what is actually, truly able to be evaluated. That's also really good because it often gets around some very mushy thinking: you have programs designed these days to make the world better, and you need to get in there and say, okay then, which
betterness are you after? So that's one set of contributions. Another one goes back to that point about thinking widely and about corporate knowledge. I think evaluators do have a role in corporate knowledge, and one important function is continually building that knowledge back into the organisation. To give an example, government departments produce a range of papers and submissions to inquiries and the like, and so often these focus only on the immediate project. Providing some context, some background, a little chapter on the history of where things have come from, what lessons have been learned in the past, what was successful and what failed, drawing upon those evaluations: giving that context is the sort of small work you can do and fit into other processes. It's not the sort of thing that people caught up in the immediate thrust of policy will think about; they'll tend to write as though everything starts from now. To the extent that you can get people reflecting back, and give them those hooks because you have control of the evaluation material and have been able to draw upon it, that's a good way of bringing them in.

Matt, anything to add?

I think a lot sometimes depends upon the views of the very senior people in the department. There are circumstances in which senior people are not open to evidence and data, and in that case I think it's extremely difficult. But if there are senior people who are open to the potential value, can see it, and are not overly concerned about the risks they perceive, then I think individuals can make a very big difference. In part it depends upon the environment within which people are working, and it's important to be realistic about that. There are times when you can fight the good fight but you're actually on a hiding to nothing; the chances of bringing about the change are slim. But I do think that when people persist over the
longer term, it can actually really change things, and you do see that in agencies, though it's not always a comfortable place to be.

Yes, exactly. Joe, I'd be very grateful if you could look through the chat to see if there are questions we urgently need to address, and while you do that I wanted to ask one more question. When we talk about things that are uncomfortable, one of them is evaluating an existing program, especially if it hasn't been designed for evaluation; working back through the program logic and so on can be an uncomfortable process. What are your thoughts on this? You did mention in your paper the importance of evaluating existing programs, especially given that a high proportion of government funding is ongoing. Have you seen examples from other jurisdictions of the best way to approach this challenge? How do we gradually pull in and evaluate our high-risk existing programs?

Do you want to go first, Matt?

Yeah, look, I think there are some things that can be done to support longer-term evaluation. For some programs it's very difficult, because decisions weren't made 10, 15 or 20 years ago to bed down the evidence base. In part, as we discuss in the paper, some of this is about investment in long-term data assets, and one thing Rob referred to, in either his presentation or an answer, was that there are major, valuable data assets that exist and are then simply not maintained. I noted in the chat that there have been some questions about ethics and consent, which are very important: obtaining the necessary ethics approvals and the necessary consent from people to recontact them or to link their data is important. And this is where I would slightly diverge from Rob: I'm more optimistic about the value of linked large-scale data
in helping here. I think that potential has not been fully realised yet, but it's starting to be. In a similar vein to Rob's comment about those on the margins, these types of data sets generally only include people who are currently receiving a program, or part of a program, so if you're talking about a major change that might dramatically expand the coverage of a policy or program, you may well find that the administrative data doesn't tell you much, because the people you're interested in are not in it. So I think those investments in data assets, and investment in long-term research capability, interest and expertise on a topic, can really help. But yeah, there are a few things. Rob?

Yeah, look, I don't think we've seen that much really good experience that can be drawn on. There are some cases, and some of the Auditor-General's work almost touches on that longer term. But I think the essential elements are, first, actually identifying what those programs do, and so the data side is enormously important. Before we can answer the question of how we evaluate a program, we have to start answering the question of what that program actually does: how big it is, how many people it impacts, and so on, because we have almost stopped thinking of some of these embedded programs as being programs at all. So straight program reporting, a step back from evaluation, with KPIs and other program measures, and embedding that, is a really good first step. The second step is that as a culture develops where evaluation is seen as the norm for new programs, that eventually starts building up pressure for looking at the old programs. But it's long term; these are programs that may have been in place for 50-odd years, and you don't expect to crank up and get them looked at in the next two years. And things such as tax expenditures are not even recognised as
programs, though with apologies to the few people in taxation who are actually trying to get them identified as programs.

Oh gosh, Rob and Matt, you have given me so much to think about. In some ways I'm pretty rapt about where the APSEN steering committee has been going, because many of the things you're talking about are things we're working really hard behind the scenes to do: connecting people more broadly so that we can look more widely, and building better evaluation capability among our public sector evaluation colleagues. There is certainly a piece for me about how we get governance right, because with good governance, and good sign-off on objectives early for program areas, I think that gives us a pathway going forward. I think I could sit here for several more hours and continue this conversation, but that is not what the AES had in mind for us. So for those of you who put really thoughtful, insightful questions in the chat that we didn't get to, Christabel and I might work with Rob and Matt in the background to see if we can bring you back for a second bite at some of those more complex questions. But please, would everybody join me in thanking Rob and Matt for their wonderful contribution to the festival, and for their fantastic contribution through their careers to evaluation in the public sector. It's such a privilege to hear from you, and thank you, Christabel, for your fantastic facilitation. I think we stay muted, but we clap you.

Thank you, Rob and Matthew, we really appreciate your time today. Thanks for having us. Brilliant. All right, thank you everybody, bye.