Before we begin, I would like to acknowledge the traditional owners of the lands from which we are all dialling in. We pay respect to Elders past, present and future, and acknowledge the importance of Indigenous knowledge in the academy and the profession. As a community of evaluators, we are privileged to work and learn every day with Indigenous colleagues and partners. Now, throughout this presentation, we want to encourage people to pop their comments and questions into the chat box. Our colleague John will be moderating these questions, and we'll have a chance to discuss them at the end of the presentation. So feel free to use the chat box if any questions come up along the way. Today we'll be discussing research on evaluation, and we'll be hearing from a panel of speakers whom I have the privilege of introducing. Firstly, we'll hear from a speaker who is a familiar face to many of us, Dr Ruth Aston, hailing from New Zealand and now a senior lecturer at the Assessment and Evaluation Research Centre here at the University of Melbourne. Ruth has deep expertise in the areas of public health and health services research, educational evaluation, and health promotion in schools, with a specialisation in program evaluation. She's currently working on several evaluations of educational initiatives in Victoria and the Northern Territory, and on the evaluation of the VicHealth local government partnerships with over 30 local councils in Victoria. She holds an honorary fellow position at the Centre for Adolescent Health at the Murdoch Children's Research Institute. We'll then hear from three brilliant emerging evaluators who come from varied backgrounds but are all keen scholars of evaluation, and who are in fact now completing their Master of Evaluation with us. Firstly, we'll hear from Kat Franks, who has a background in business operations, developing grant programs, managing research centres and client service. She has experience working in government-funded organisations and non-government sectors, including academic research, consultancy and design, and she's currently working in evaluation at a government agency. Then we'll hear from Hannah Morgan, who is a qualified social worker with experience in mental health, disability and LGBTIQ+ health. Most recently, she has worked in project management, coordinating national projects focused on LGBTIQ+ health and palliative care research, and she's currently working as an evaluator at the Black Dog Institute, Australia's only medical research institute investigating mental health across the lifespan. And last, but definitely not least, we'll hear from Stephanie White, who comes to evaluation after years working in education research, policy and practice in the government and non-government sectors. She's led projects in early childhood education, Aboriginal and Torres Strait Islander education, student engagement, school improvement and, most recently, student wellbeing. Stephanie is a senior evaluation officer at the Victorian Department of Education. So without much further ado, please welcome Dr Ruth Aston. Thank you, Katina. And yes, I'm just thrilled to have the opportunity to present to our AES members, and as part of this incredible team. Just the next slide please, Katina.
So what we wanted to do was start with a little bit of context about the review project, before you hear from each of our speakers about their particular areas of interest within this review. We are conducting this review because the last known published review of research on evaluation was published in 2017 but included research that was essentially available only up to 2014. So it had been some time since that review was conducted. What Coryn et al. (this is Chris Coryn from Western Michigan University) found with his colleagues is that over the decade of 2005 to 2014, there were 257 research on evaluation studies published. And what Chris and his colleagues were most concerned about was the limited research available at that time on valuing in evaluation, ethics, and the use of evaluation. They, and we, reflected that since that review was conducted, there has been growing interest in many areas of evaluation, including but not limited to participatory evaluation and co-design, and the relationship between policy and program design and evaluation practice. So we felt that synthesising research on evaluation is really important, to make sure that all available evidence is being collated, summarised and ideally appraised on a fairly regular basis. It also allows us to see where we are at in terms of the knowledge we have about our field of evaluation and evaluation practice, and importantly to identify where there are gaps and what priorities we have for future research on evaluation. Next slide please. So before we go any further, we just wanted to clarify how we have defined research on evaluation for the purposes of a systematic review, and in large part we've inherited this definition. The definition you can see on the slide was coined by Chris Coryn and his colleagues: research on evaluation is any purposeful, systematic and empirical inquiry intended to create a strong evidence base and infrastructure for the applied practice of evaluation. At this point I want to note that while empirical inquiry was the focus of our review, we recognise that there are many other types of inquiry that are just as important in adding to our evidence base and infrastructure for the applied practice of evaluation. So just noting that the field is a lot broader than empirical inquiry, but that was the purpose of this review. Next slide please. Thank you. So holding that definition and the broad context in mind, the objectives of this systematic review were to replicate Coryn et al.'s original review, with some modifications, focusing on all research on evaluation published from 2015 to 2019. Our purpose was to understand what research has been conducted and where the gaps are, particularly whether the gaps that Coryn identified still exist, so that we can develop an agenda for future research on evaluation: what are our priorities, and what do we need to be doing research on, based on advancements in practice and the profession. At this point it's worth acknowledging, and certainly important to recognise, that this review is being led by Dr Dana Wanzer, an assistant professor at the University of Wisconsin-Stout. Although she was very keen to hear that we were presenting to you, she didn't quite want to get up at 3 a.m. to join us.
So she's here in spirit, but we do acknowledge that this work has been led by her, and we at the University of Melbourne are collaborating with her on this project. As part of that, our part of the pie, if you like, was to look at three evaluation journals, with a total of 782 journal articles included in our portion of the review. So what did we do? We screened all 782 articles based on formal systematic review procedures, with some minor amendments. The articles were all published in that publication period I mentioned, all in English, and published in one of the 14 journals on evaluation covered by the review, including the Evaluation Journal of Australasia. To be included in the review, the articles needed to demonstrate that they were purposeful, systematic and empirical, and were generating knowledge about evaluation, essentially meeting that definition of research on evaluation. They also needed to include enough information for us to code them as part of the review. While we were really interested in the systematic review procedures, we weren't sure how much interest they would hold for you all, so I'll be brief here and touch on a few key points. As you can see, there's a team of us, and, as we learned as we went through this process (although we had some foresight that this might occur), there was quite a lot of disagreement between us as to whether an article actually met that definition of research on evaluation. So we undertook quite extensive calibration processes, and we found that the US team were doing the same thing, which was a bit of a relief. But we did reach an acceptable level of consensus on title and abstract for all 782 articles as to whether they met the definition of ROE. We were looking at articles in Evaluation and Program Planning, the Evaluation Journal of Australasia, as I mentioned, and New Directions for Evaluation. Once the articles had met that initial inclusion criterion, the definition of ROE that I mentioned before, we then went through a coding process using Mark's coding framework, which is essentially a tool to classify research on evaluation in terms of what the subject of the inquiry is and what the mode of inquiry is, as you can see on the screen. Again, this step was quite challenging to reach consensus on, so we had to lower our benchmark there, and we reached 66% agreement, which again was relatively similar to what the US team was able to reach. You can see a little bit about our process there. But just to clarify: once an article met that initial inclusion criterion, i.e. it could be classified as research on evaluation, we then identified what the subject of the inquiry was and what the mode of inquiry was. And the US team are doing the exact same thing with the remaining eleven journals.
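To make the calibration step just described more concrete, here is a minimal sketch, in Python, of the kind of percent-agreement check two screeners might run. The coder decisions and the 0.66 benchmark below are illustrative, drawn loosely from the talk; this is not the team's actual tooling or data.

```python
# Minimal sketch of an inter-rater agreement check for screening calibration.
# All coder decisions below are hypothetical, for illustration only.

def percent_agreement(coder_a, coder_b):
    """Proportion of articles on which two coders made the same decision."""
    assert len(coder_a) == len(coder_b), "Coders must rate the same articles"
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# True = "meets the definition of research on evaluation" at title/abstract.
coder_a = [True, False, True, True, False, True]
coder_b = [True, False, False, True, False, True]

agreement = percent_agreement(coder_a, coder_b)
print(f"Agreement: {agreement:.0%}")  # -> Agreement: 83%

BENCHMARK = 0.66  # the lowered consensus benchmark mentioned in the talk
if agreement < BENCHMARK:
    print("Below benchmark: discuss disagreements and recalibrate before coding on.")
```

In practice, review teams often supplement raw percent agreement with a chance-corrected statistic such as Cohen's kappa, since raw agreement can flatter coders when the categories are unbalanced.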
Okay, now, I apologise, you might need to zoom in, but this is essentially the results of our review, before I hand over to Kat. We're in the red box: the first three journals there, Evaluation and Program Planning, New Directions for Evaluation and EJA. There are a couple of observations I wanted to offer before I hand over to the next speaker. Across the three journals that we reviewed, between 22 and 38% of the 782 articles we screened actually met the inclusion criteria for ROE. If we look at EJA, it's about one in three, which is quite amazing. I was quite happy to see that the Australasian journal is proportionately contributing quite a high degree of research on evaluation to the evidence base. Unsurprisingly, this is where we essentially reaffirmed the original review's findings: the majority of ROE studies were about evaluation practice, that is, the activities of doing evaluation. A lot of it was about methods, synthesis procedures and so forth. This was closely followed by evaluation consequences and domain-specific ROE, which could be things like looking at ideal evaluation capacity building programs for certain groups of students, or an organisation-specific evaluation framework, or a tool for a discipline, and so forth. In terms of mode of inquiry, again, the patterns we see here are almost identical to what Coryn et al.'s review found: the overwhelming majority of ROE studies are descriptive, followed closely by the development of tools and models. In evaluation that's often the development of rubrics and frameworks, so articles that talk specifically about developing evaluation frameworks and the processes of doing so. If we come back to the 782 articles that we reviewed, 190 of them were identified as research on evaluation. That's approximately 25%, so about a quarter of all the studies we reviewed were research on evaluation. And in terms of EJA in particular, this is a huge increase since the previous review. Thinking about when the previous review ended, in 2014, to when this review began, covering 2015 to 2019: Coryn et al.'s review found only six ROE studies in EJA over the decade they covered, and we identified 13 in this five-year period alone. So there is a substantial increase, and some of the other journals show similar growth, which we think is really positive. And while our colleagues in the US are still continuing to screen their articles, you can see some numbers in the table above that reflect what they are finding. On a brief glance, it's a relatively similar trend: the subjects of inquiry and modes of inquiry are relatively consistent across the journals. And that's quite reaffirming: what Coryn et al. found for that decade we're seeing again, but there's more research on evaluation being generated relative to the previous review. So I'm now going to hand over to Kat, who's going to share her findings on the use of evaluation. Great, thanks, Ruth. So my research focuses on empirical ROE literature from 2005 to 2019 on the use of evaluation findings within government. I explored ROE studies through the evaluation use framework of instrumental, conceptual and symbolic use. For those unfamiliar with these terms: instrumental use is the direct use of evaluation recommendations or findings to inform decision-making for action or change. Conceptual use is when findings do not lead to action or change, but result in users having a better understanding of the program or policy, improving their knowledge of or attitude towards the program or policy.
Symbolic use, also known as political use, is when findings are used to legitimise an existing position or support a decision already made, or when evaluation findings are used for political self-interest. Outside the scope of this review is process use, which is seen when behavioural changes occur in those directly involved in the evaluation as a result of being part of the evaluation process. Research in this area is mainly focused on evaluation capacity building, which could have been a whole capstone project in itself. The second part of my research examined whether the literature addressed influencing factors of findings use, and the alignment to the Johnson et al. framework shown here, which is based on an ROE literature review of evaluation use articles from 1986 to 2005. They found 41 articles, and their framework builds on Cousins and Leithwood's 1986 framework of 12 factors. The framework categorises factors under evaluation implementation, decision or policy setting, and stakeholder involvement. This was the search methodology, which I won't go into in detail; I will note that I incorporated the ROE articles from Coryn's review, which is why my timeframe starts at 2005. After completing this process, my sample was 14 papers. I will also quickly note the limitations: the review was conducted within the parameters of the ROE systematic review, so it doesn't include publications outside the evaluation journals, and it excludes studies from 2020 onwards. I hope to continue building on this review and capture additional publications on this topic in the future. The next two slides provide a brief overview of the findings. The icons show whether a study investigated a findings-use category and whether it addressed use factors. As you can see, a range of findings use was studied, with many addressing instrumental, conceptual and symbolic use. There were also other use categories, such as strategic use, which is when high-level decision makers use evaluation findings, for example in a spending review. The settings for the studies varied from federal and state government departments and agencies to parliaments and legislative offices to the European Commission. The majority of findings-use studies investigated instrumental use, the most common form of use at a programmatic level. The studies found evidence of use that informed decision-making to change a policy or program, which is positive. However, the studies were mostly conducted in European nations and North America. Influencing factors were examined in almost all of the articles, with eight articles studying influencing factors aligned to the Johnson et al. framework. The most dominant category studied was evaluation implementation, followed by the decision and policy setting category. Two notable factors explored were evaluation policy and organisational capacity to use evaluation. Stakeholder involvement was the category least examined: in an Australian study by Maloney, stakeholder involvement was a frequently mentioned influencing factor for use, and VanLandingham examined stakeholder strategies used by internal legislative evaluators across two articles, but otherwise it was a generally unexplored category. What I took away from this was that the factors that enable the use of evaluation findings are complex, as I'm sure some of you will know. The most common factors explored include evaluation quality, the findings themselves, the information needs of the evaluation audience, and the political climate.
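As a rough illustration of the kind of coding exercise just described, here is a small Python sketch that tallies studies against the use categories and the three Johnson et al. factor categories. The study labels and codings are invented for illustration; they do not reproduce the actual sample of 14 papers.

```python
# Hypothetical coding of studies against use types and Johnson et al.
# factor categories. Illustrative only; not the review's real data.
from collections import Counter

USE_TYPES = {"instrumental", "conceptual", "symbolic", "strategic"}
FACTOR_CATEGORIES = {
    "evaluation implementation",
    "decision or policy setting",
    "stakeholder involvement",
}

# Each entry: (study label, use types investigated, factor categories addressed).
coded_studies = [
    ("Study A (federal agency)", {"instrumental", "conceptual"},
     {"evaluation implementation"}),
    ("Study B (legislative office)", {"symbolic"},
     {"decision or policy setting"}),
    ("Study C (spending review)", {"strategic", "instrumental"},
     {"evaluation implementation", "decision or policy setting"}),
]

# Guard against codes that fall outside the framework.
for label, uses, factors in coded_studies:
    assert uses <= USE_TYPES and factors <= FACTOR_CATEGORIES, label

use_counts = Counter(u for _, uses, _ in coded_studies for u in uses)
factor_counts = Counter(f for _, _, factors in coded_studies for f in factors)

print("Use types studied:", dict(use_counts))
print("Factor categories addressed:", dict(factor_counts))
```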
I'll briefly take you through one of the studies that addresses these factors, with the researcher, Ledermann, taking a different approach: calling for a move away from treating the findings as the most important factor for use. This study looked at the context-specific necessary conditions required for users of external evaluation findings to make a decision to change a program. A qualitative comparative analysis of 11 program and project evaluations within the Swiss Agency for Development and Cooperation was performed. The relationship between the context conditions (the pressure for change and the level of conflict) and the actor conditions (the novelty value of the findings and the evaluation quality) was examined to test four hypotheses: whether the evaluation acts as a conciliator, an awakener, a referee or a trigger for a change decision. The 11 cases were scattered across the context and actor conditions, with instrumental use evident in six cases. So, the awakener: in a low-pressure and low-conflict environment, it is expected that an evaluation can cause a change decision by awakening people, as long as the findings reveal something new and are of good quality. This assumption was generally accepted. The trigger: where there is pressure for change and low conflict, it is assumed the evaluation triggers change if it is of good quality, with novelty less of an issue. There was evidence that supported this assumption. The referee: in a high-pressure environment where stakeholders are in conflict, neither evaluation quality nor novelty is necessary for utility, with findings accepted by only one set of stakeholders. This hypothesis was not accepted and was revised to an 'endorser', where evaluation quality and novelty are independent factors that alone are not necessary. The conciliator: during conflicts where there is a lack of pressure for change, substantial decisions are made when evaluations are high quality and novel, but this is rare; change decisions were not made in the two cases displaying these conditions, so the research was unable to test this. Studies such as this increased my awareness of the particular context-related conditions, conditions which are outside the control of an evaluator conducting an evaluation within a government setting.
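To make Ledermann's condition-to-role logic easier to follow, here is a small sketch, in Python, mapping the two context conditions to the four hypothesised roles as they were summarised in the talk. It is a deliberate simplification: the full QCA also tested the actor conditions (novelty and quality) within each context, which are noted here only as comments.

```python
# Sketch of the hypothesised roles an evaluation plays in a change decision,
# keyed on the two context conditions as summarised in the talk.

def hypothesised_role(pressure_for_change: bool, conflict: bool) -> str:
    if not pressure_for_change and not conflict:
        # Use expected if findings are novel AND of good quality.
        return "awakener"
    if pressure_for_change and not conflict:
        # Quality matters; novelty is less of an issue.
        return "trigger"
    if pressure_for_change and conflict:
        # Hypothesis not accepted in the study; revised to 'endorser'.
        return "referee"
    # Conflict without pressure for change; no change cases, so untested.
    return "conciliator"

for pressure in (False, True):
    for conflict in (False, True):
        print(f"pressure={pressure!s:5} conflict={conflict!s:5} -> "
              f"{hypothesised_role(pressure, conflict)}")
```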
Then there were studies conducted in Canada by Bourgeois. These looked at the usability of evaluation reports, to know whether the precursors for evaluation use, report credibility and quality, were present, in addition to examining whether program evaluation findings were strategically utilised in spending reviews within two federal government agencies. They also looked at the application of findings to ongoing program design and delivery improvements. They found that the evaluation reports were credible, with evaluation quality demonstrated through clear evaluation questions, the integration of stakeholders throughout the evaluation process, and sound methods used to produce the findings. The reports provided useful information, with relevant, appropriate and actionable recommendations which could lead to implementation and instrumental use of findings. While these factors should support utility, the evaluation findings were not used to make decisions in spending reviews, so no strategic use. This is likely due to the requirements of Canada's 2009 Policy on Evaluation, with the evaluations not meeting the information needs of high-level decision makers because of the timeliness of the reports and their program-level focus. However, this level of focus did result in mostly instrumental use of findings for program improvement, with some instances of conceptual use. This study made me consider the different uses of evaluation findings within government, and how an evaluation policy can enable but also impede use. I was also unaware of Canada's evaluation system in government, which includes requirements to make evaluation reports publicly available, along with management responses and action plans that address evaluation recommendations. Canada certainly looks to be more advanced than Australia in relation to the level of evaluations conducted, the use of findings, and transparency. However, for all the evaluation reports published, there were still very few ROE studies in this area, so it's hard to say what evaluation use looks like in government agencies and departments in Australia through the ROE in this sample, as I only found one Australian study, published by Jade Maloney in 2017 in the Evaluation Journal of Australasia (apologies if I missed any other Australian studies). This article focuses on AES members' perceptions of the levels of evaluation use, the factors affecting utility, and how well evaluators overcome barriers to use in practice. Respondents in this study rated the non-use of evaluation findings by government agencies in Australia as a considerable problem; however, when asked to recount their most recent case of evaluation use, many had participated in an evaluation that resulted in the findings being used, mostly for accountability purposes. Many of the demand-side factors, especially agency leadership commitment and individual openness to evaluation, and supply-side factors, mainly stakeholder involvement in establishing the real purpose of the evaluation and effective communication of findings, were viewed as important factors to support use. If you haven't already read this article, I highly recommend it. As I said, I hope to build on this review and potentially expand beyond evaluation journals, as I'm interested to know whether there is research out there that looks at federal and state government policymakers' and program managers' perspectives on the use of findings within government agencies and departments. And if anyone's interested in being part of a study that looks at this, please reach out. Thank you; I'll now hand over to Hannah. Thanks, Kat. Give me one moment. Okay. I'm zooming in from Gadigal land and really pleased to be here to hopefully consolidate, in 10 minutes, what was quite a journey of deep diving into research on evaluation that met the criteria for values inquiry. Next slide please. I wanted to focus on this area because, as you heard earlier in this presentation, this was one of the under-researched areas in Coryn and colleagues' review: only 3.5% of papers on research on evaluation were coded as values inquiry. And as a new evaluator, I thought it would be really interesting to see what I could learn from the various opinions of stakeholders and evaluators for my own professional practice. So first of all, I wanted to define what we mean by values inquiry, using Mark's definition: values inquiry is identifying the value positions of stakeholders and the public via evaluation methods used to probe the values embedded in and related to a program. So essentially I was looking for research on evaluation that included the opinions, attitudes, beliefs and values of stakeholder groups and evaluators. Next slide please.
So what values inquiry research on evaluation was completed between 2015 and 2019? Well, I can say that there were 20 articles in total. Interestingly, our team seemed to code more articles as values inquiry, and I don't know if there were some differences in approach across the teams. But I think it's also interesting to note that there were four journals that didn't include any values inquiry research on evaluation. Next slide please. Thank you so much. So I completed a thematic analysis of the literature, and while values inquiry was diverse in subject matter, I did find (and maybe it was my social work background) that there was an overarching, repeating theme around power. I found it really interesting, because there were lots of opinions and perspectives around how power is distributed or not distributed, and the consequences this can have on evaluation framing, processes, outcomes and use. Power was explored across different contexts: the cultural context, mostly around Australian and New Zealand evaluation; the political context; and program evaluation, specifically participatory approaches. As well as that, there were articles about the evaluation profession. Next slide please. So I wanted to give you a short summary of some of what I found, as well as the implications for practice. I know for myself I'm always thinking, 'so what does this mean for me in the field?' I'll give you a little bit of a summary across those different areas. Four articles discussed evaluation in cultural context, highlighting the ways in which evaluations mainly reflect Western understandings of the world and Western perspectives of merit and worth. In terms of implications for practice, there's a need to understand the benefits of practising in a culturally appropriate way: being reflexive as an evaluator, having awareness of cultural bias, and challenging the theories that underpin program designs. It means really thinking about our position in relation to the evaluation considering the history of colonisation, acknowledging Indigenous knowledge systems, and thinking about how we're doing this in practice and whether we are being culturally appropriate in the way that we work. Next slide please. I found this next area really interesting, and it follows on a little from Kat and some of the discussions raised. There were discussions around the political context, with multiple papers discussing the power that government and administrators have in influencing the way evaluations are conducted, how outcomes are determined, and evaluation use. Evans et al. reported a high reliance on funding bodies in determining outcomes, sometimes with less reliance on evidence-based sources such as literature reviews, and a small survey of 63 Australian evaluators found that evaluators employed in medium and small organisations said funding bodies were influential in the choice of evaluation design. So there's this discussion around how evaluation can feel politicised and constrained, and I'm sure we've all experienced the idea of walking into an evaluation with predetermined outcomes. I think we can relate to that.
So in terms of practice implications, Evans proposed a range of practical actions that include adopting a values-based agenda, asking stakeholders what outcomes are most important to them, and taking action to defend the evaluation of outcomes that go beyond the range of short-term outcomes prioritised in fiscal and social policy. Next slide please. At the program level, and I was really pleased to see this because I've done quite a lot of co-design in my project management work, I'm really interested in participatory evaluation, and I appreciate that there are often power struggles in those spaces. Studies highlighted a range of different considerations, including how there can be different agendas between program implementers, those people who are engaged to work in a participatory way, and those who are essentially meant to benefit from the program. They spoke about how there's often a political interest that influences the extent to which participatory evaluation can happen. There was one single-factor experiment, by Chronic and Roman, and this was so interesting because it showed that, in this case, evaluation use is higher when you don't use a participatory approach at all than when you use a participatory approach but do it in an inauthentic way. I think that would ring true for those of us here who have participated in participatory engagements where it hasn't felt like our contribution has really been taken into consideration. So I think this could be a really interesting area of future research. Next slide please. A little bit separate, but still connected to all of this, were discussions around the evaluation profession, mostly about the progression of the profession and getting evaluators' views on how we should progress, whether through certification, credentialing or looking at competencies. While there wasn't a clear consensus on what could be pursued, there were concerns raised about the potential obstacles that could be created through the professionalisation of the evaluation field: discussions around who this would benefit, why, and in what conditions, and who would be best served by decisions that might be made around professionalisation. And I guess for me, it made me think about how values inquiry views and findings from the research are probably quite important when thinking about the progression of evaluation as a discipline. Next slide please. So the research has shown me that there's a need to think about the way that power manifests across all levels of evaluation, and I think values inquiry research could assist in this thinking. And while these findings can't necessarily be neatly synthesised, because there was diverse subject matter covered in the articles and there were articles that weren't included within this broader theme, I think there could certainly be a need to interrogate some of these areas further with more research. And I think there could be a link between values inquiry research and how we might capture that information, synthesise it, and integrate it into conversations about the evaluation profession. Next slide please. So what were some of my key takeaways? They're still happening as I reflect on this topic. Certainly it was a challenge synthesising these articles, because the content was so diverse.
But I think values inquiry can tell us useful information about how we do evaluation, who we want to be as a profession, and where we want to go. It helps illuminate and give voice to some of the complexity we face when working in the field. There were so many times I read these research articles and felt really validated and seen. But essentially, this is not the only space where we can find that validation and that really important content: there's also non-empirical research that provides a significant contribution to these discussions. And it was challenging for me sometimes to not include research papers that I thought were making really important points around values and values inquiry in evaluation. And that's all from me. Thanks. I'll hand over to Steph. Thanks so much, Hannah. And I'll try not to rehash too many similar reflections over the next 10 minutes. So my nested literature review was on professional ethics in evaluation, and I drew on three common definitions of ethics for this: the principles of morality, the rules for a profession, and the study of ideal human behaviour. As an emerging evaluator, like Kat and Hannah, I was really keen to explore how research on evaluation is building our understanding of evaluation ethics, and what this means for practitioners. When I've been thinking about evaluation and practising it, there are lots of components that on their own feel fairly straightforward. But when you bring it all together, there's the context, stakeholders, values, politics, et cetera, and all of that messiness can sometimes lead to ethical dilemmas. And despite an acknowledgement of the ethical dilemmas that evaluators face, and ethics being considered important for the field of evaluation, the research on it over the years has been fairly patchy. So I'll briefly talk through the nature of my review and the gaps and implications I've identified, and finish off with a brief, hopefully not too rehashed, reflection on patterns. Thanks, Kat. So my intention through the literature review was to focus on literature that was really explicitly framed in terms of evaluation ethics, so it was quite narrow in its scope. I had three focuses through the review. I looked at how the studies framed ethics in terms of those three definitions I mentioned previously. I then looked at how the research sought to progress our understanding through contributing, testing or generating new knowledge. And then, using the AES guidelines as a proxy for the needs of practitioners, I considered what the implications were for practitioners at different stages of the evaluation process. So what did I find? Thanks, Kat. Honestly, not much within the inclusion criteria I set for my review, but at the same time I found quite a lot. This is a map of the findings from the five articles that were identified. In terms of the framing of evaluation ethics, there was some mention of the study of ideal human behaviour, but it generally referred to previous studies rather than being the focus of the paper itself. So ethics was generally framed in terms of the principles that underpin the rules of the profession. For example, following mandated ethics review processes is important from the perspective of not doing harm to participants.
In terms of how the research contributed to our understanding of evaluation ethics, the articles exclusively contributed to existing knowledge: they built on previous research findings. This is common in research, of course; it just means that pushing the limits of our understanding of evaluation ethics, in terms of generating new knowledge and testing hypotheses, was less of a feature. But in contributing to the existing knowledge base, they did so by exploring the ideas in localised and contextualised ways. One example of that was Pleger and colleagues, who took four previous studies about pressure on evaluators in different countries and elevated that to a country comparison of pressure on evaluators. In terms of the needs of practitioners, again, there was some reference to previous studies that looked at the entry and contracting stage of evaluation, but these articles really sharply focused on the conduct and reporting stages of the evaluation. For example, Nathan and colleagues looked at the development of a capacity-to-consent protocol for a youth mental health evaluation, where parental consent was the norm in that jurisdiction but was a challenge for their evaluation context. Interestingly, though, these articles with a conduct and reporting focus did tend to mention the entry and contracting stage, which we know through other literature to be a really important stage in the evaluation, really alluding to the opportunity for further research there. Looking at this little visual, it's pretty clear that there are some gaps, but it is only five articles, and as I said, my literature review was quite narrow in scope, so there were some articles that didn't meet my inclusion criteria that absolutely have implications for the ethical practice of evaluations. So, for example, Wehipeihana and McKegg (I think Hannah also has this listed in her slides) have a paper on evaluative thinking that talks about the ethical imperative to uphold various cultural ways of deliberating during an evaluation. I'm kind of heading into implications here, but I think the other thing I found quite interesting was the recent writing that falls outside of research on evaluation, or even the literature that we reviewed, that is calling on research on evaluation to do a bit more in the space of professional ethics. One key writer is Thomas Schwandt. Schwandt and Gates, in Evaluating and Valuing in Social Research (holding it up is not going to work on the video), call on ROE to address topics such as the efficacy of deliberations in evaluation and other types of emerging practices in evaluation. And there's another, more recent publication, I think Practical Wisdom for an Ethical Evaluation Practice, where Schwandt has a chapter in which he calls on ROE to look at things like practical wisdom as an organising framework for our professional practice, and perhaps ROE being able to look at enablers of or barriers to that type of work in the field.
Also, as has been mentioned already, there's current work going on. This review ended at 2019, which was the last of the literature we were looking at, and just looking at the most recent issue of the Evaluation Journal of Australasia, there are definitely some articles in there about ethics. Even Kylie Kingston's book review of A Research Agenda for Evaluation talks about professional ethics, so there's certainly a lot going on. Thanks, Kat. So I've probably covered quite a few of these implications already, but in terms of professional ethics there was what I found through the literature review, and there are also all of these other topics being written on that absolutely have implications for our practice. Through reading the literature and taking the lens of ethics to the papers, I found there's a lot to learn when talking about methods, unintended consequences, evaluative thinking, et cetera. But as I mentioned, Schwandt and Gates are also calling on ROE to address some new topics as well. And as Hannah mentioned, empirical inquiry is just one type of inquiry. I think ROE has its role to play, but there are other types of inquiry that are well suited to complement ROE in building the evidence base for ethical professional practice in evaluation. On a more personal note, this process of engaging with the literature has really helped me to sit with my own ethics and sit with the messiness of evaluation. Being able to see that the experiences we have are reflected in others' experiences through the literature has certainly given me the opportunity to reflect and, I guess, be cognisant of the values that I'm bringing to my practice, and sometimes, if I feel a bit of tension, or there's tension in the room, or however it manifests, to be able to sit with that and unpack it. And just as there are many topics still to investigate in ethics in evaluation practice, I have so many more questions. I'm an emerging evaluator, and also really new to the research on evaluation literature, so loads of questions. But that is me: ethics in evaluation practice. There's some stuff out there, and there's still a lot to do. And thank you, Katina. Thank you, Ruth, Kat, Hannah and Steph, for that tour de force of making this challenging, complex and extremely important topic so interesting and relevant for us as practitioners. As many of you might have heard, the team spent considerable time reflecting, discussing, debating and trying to reach consensus on this topic of ROE, and in reflecting on this body of work, it seems to me that research on evaluation should not be the purview of only a select few. If we as practitioners care about the profession and care about furthering the knowledge base on which we draw, then I think we need to pay careful attention to what gets added to this body of knowledge. Who is setting the agenda, for example, and what is getting prioritised? And the other important question to ask ourselves is: are we willing to invest time into this important work?
Now, I promised that we would set aside some time for questions, answers and discussion, but I just wanted to make this a call to action today. We're very keen as a team, and we're just a team of people interested in this topic who came together. Of course, part of it is fulfilling the capstone requirement, but I think it's become so much more than that for all of us, and we've said that we're very keen to continue to work together. And we want to make this a call to action for you as well today: we want to invite more participation and more involvement from you, and this is an opportunity, I think, for us as a community of practitioners to come together. So in the chat box I've put a very quick two-minute form for you to stay in touch with us if you're interested in having more conversations. We can think of three ways for you to continue to engage with us. If you'd just like to be kept in the loop on the future developments of this project, where it's heading and what some of the findings are, and you want to stay in touch that way, then that's great. The other initiative that we're embarking on soon is around trying to get a better sense, through a needs assessment in the Australasian context, of what topics will be relevant for us as practitioners in this ROE space and what the priorities need to be, so that's agenda setting. If you'd like to be a participant in that, do let us know, and even better, if you want to be part of it and collaborate with us, you'd be very welcome and we'd be very keen to hear from you. So pop your name into the form and we'll be happy to stay in touch. I think there are also opportunities beyond that: we see the collaboration with the University of Wisconsin-Stout as absolutely a strength of this work, and we are talking about opportunities in the future to make this a resource for us as practitioners, an open-access repository where people can access some of these works. So yes, please: we want this conversation to be ongoing, and we definitely want to have more conversations. We have about eight minutes now for questions and answers, so thank you again, John, for facilitating, and I might pass over to you to field a few questions for us. Great, thank you, Katina, and thank you to our speakers. We do have a few questions in the chat, but before I get to those, I'll pose one question to all of the speakers, not to answer straight away, but to think about and perhaps answer towards the end. I think it was Hannah who said that she likes to reflect on 'so what does this mean?', and Steph who got into the implications. So my question is: what are the immediate implications for the AES as a professional body? I'll ask you in a few minutes to come back with this: if there was one standout implication for the AES as a professional organisation, what might that be? But while you're thinking about that, I'll now go to the chat. I know Hannah has already responded to this first question, but I will ask it anyway.
And that's from Karen, who noted that the EJA has had two special issues on the particular topic of ROE. The question is: did you see a similar concentration on the values topics in other journals? I know, Hannah, you responded and said, in short, no, but do you want to elaborate on any differences across the journals that you saw in terms of the focus on values? Yeah, so with that question: there was the one special issue, but I didn't see other special issues. And in terms of the different journals and their focus points, I haven't really done an analysis of whether the content they discussed was similar or different across the journals, so I'd probably have to have another look to answer that question with confidence. But yeah, thank you. So the second part of Karen's question is along similar lines, but rather than comparing across the journals, it's a little bit more about the time horizon. It may not have been a specific area of study, but did you notice, or do you have any observations around, whether there's been a shift across time in the journals' focus or content regarding values? I didn't do an analysis of the Coryn et al. articles that they coded as values inquiry, and I didn't really look at shifts over time. But when I went back and had a little scan of the time period we were covering, there wasn't anything that stood out. The articles were spread across all time periods, covering different content, so you might get something in 2015 talking about participatory approaches, but then it also comes up again in 2019. So I didn't see shifts in our time period, but I think it would be really interesting to go back and have a look at that, because I imagine things like participatory evaluation and the cultural context perhaps wouldn't have come up as much in the previous review. Thank you. The next question is to Hannah and Steph, and it's about interchangeability: have people used the terms around ethics and values interchangeably? And I'll add to that question: what's your take on those two, what differentiates those two terms, and where do they overlap? So first of all, did you see in the literature any confusion or interchangeability, and what are both of your perspectives on the use of those two terms? I don't know who would like to go first; perhaps Hannah, if you can go, then I'll ask Steph to follow up. Yeah, such a great question. So rarely, if ever, did I see ethics talked about explicitly. But when I think about the themes that I've raised today that came out of that collection of research articles, I think there are many ethical questions and implications in some of the discussions that were had. So yeah, it's a tricky one. Steph and I spoke about this throughout our process, and I think that drawing of the line was lacking. I was thinking particularly about the one study with the participatory situation, where you can think about the harms that could be caused if we don't do it in a way that is authentic, and for me, that brings up ethical questions. Steph, did you want to pick up on anything else?
Thanks, Hannah. I can only echo that. I will admit that I came into this really naively; I think I said to someone, 'I don't want to think about values, I don't want to talk about ethics.' So this is definitely still a learning journey for me. I didn't read anything that alluded to the terms being used interchangeably, but I think the interrelationship between the two is real, and I'm still trying to unpick it myself. So to that second question, I'm not sure that I have an answer that is articulate at all. That's okay. Katina, how much time do we have left? I know there are a few questions I might cherry-pick from. Right. Okay, I think the speakers are happy to stay on for another 10 minutes, but let's take one more question, and then those who are happy to stay on can; we'll just wrap up after the next question. Okay, not a problem. I might go to the question of gaps. I'm just scrolling back up now. Okay, this is from Santi Owen: given that you've all undertaken this review of the different aspects of ROE, what do you see as the main gaps that you would like to see addressed? Who'd like to go first, Kat, Steph or Hannah? From your perspective, what do you see to be the main gaps? Kat, do you want to go? Oh, sorry, I'm having technical issues. Apologies if you saw my screen there, I didn't realise. Sorry, my bad. Yeah, I think for me the tricky part was that because we were looking at empirical research, we had to exclude a lot of articles that I think are really important, and potentially, for me, in terms of the values space, some empirical research on values theory could be an area worth exploring further. Okay. And Steph and Kat, do you want to add to that? Yeah, I'm just looking at my Schwandt and Gates book, which I've been reading over and over, and I think that idea that a lot of the research on ethics thus far kind of fits within a particular way of evaluating is important. I think there are opportunities, even though the research is patchy anyway, to push our understanding a bit further and even elevate it to the level of the profession, rather than just individual professional ethics. That was my takeaway from it. Okay, thank you. So we don't run completely out of time, and I know we are out of time now, but Katina said the speakers are happy to stay on for a little while longer: from the work you've done, do you see any immediate implications for what the AES should be doing as a body? Yes, and just drawing attention to the reflections on that in the chat as well. I think one of the biggest challenges, and why I was so grateful that Steph, Kat and Hannah were willing to do this as part of their study, is getting funding to support work like this. I know there are many people on the call who are publishing ROE, but the reality, I think, for most of us, practitioner or academic, is that it's sort of an add-on, or 'I'll do it when I can', and you can almost see that play out when we look at the patterns of publications.
So that's the big one. The other one that I would personally love the AES to consider: the way the US team has really facilitated a lot of this is by having a topical interest group, or in our language an interest group, as part of their association that is dedicated to research on evaluation. So that could perhaps be a more actionable implication for the association to consider. And maybe that's something we can all be a part of; those of us who are on the call are clearly interested. Just having a structure so that we can actually be talking to each other, and sharing not necessarily the load but the work of doing research in a more collaborative way, I think that would be great. I think the annual conference would also be an opportunity to shed some more light on this whole area. Yeah, absolutely. I'm not sure everyone else is as into it as we are, and we're biased, but yes, we definitely see opportunities at the conferences as well, for sure. So thank you, everyone, and thank you so much for coming along. I think we've run over time, which is a testament to how rich a discussion we've been having. I can already see people responding to the survey. Thanks, John, for so deftly guiding us and shepherding us through the Q&A. Thank you, everyone, for your time this evening. It's been a pleasure to share our work with you, and as I said, we're very keen to keep this as an ongoing discussion. A few of us are probably going to stay on for a bit longer, so if you'd like to have a chat with us, you'd be very welcome; otherwise, have a great evening, and thank you again for your time.