So in terms of why the distinction even matters, in our experience it matters because it drives the questions that you ask when you're commissioning an evaluation, and it also drives the kinds of answers that you provide when you're conducting an evaluation. So ultimately it can affect how useful your evaluation is. After Ray and I looked into this, we developed this presentation, which just represents our view, and we're really hoping it can act as a springboard for some great conversation today. So I'll just get Ray to head off to the next slide. I just have to fix something here, Nat, because something's not working, so please continue and I'll do that while you're going. OK, no problem.

OK, so essentially, the agenda for today. We'll do a bit of a presentation at the start, and then we'll have more of a group discussion just to hear different views. We'll look at evaluation versus research in terms of the purpose, process and end result, then some take-home messages for both commissioners and evaluators, and then we'll head into discussion and some breakout rooms.

So I wanted to start with two polls, two particular questions, which, Flo, if you're able to assist with that. The first is essentially around what your background is: whether you have a background in evaluation, research or both. And the second is whether you commission evaluations, conduct evaluations or both. So if you're able to start putting in your answers. The reason we wanted to do this is that we realised the context in which people come into this discussion can affect their views on the topic, and also we're just genuinely quite interested to see the backgrounds of people here today. So we'll wait for people to put in their answers and see what the spread is.

Yeah, so it's looking like we have quite a few people who have a background in both, which is quite interesting. And Flo, are we able to see the answers to the next question, around whether people commission or conduct evaluations? Yes, am I sharing the right screen? Yes, yeah, we can see that. OK, so a few more that conduct evaluations, but also quite a few that do both, so we have a nice mix there. And then we have a couple that commission evaluations. OK, so this is, yeah, an interesting, nice mix of people here today at the presentation. People can keep putting in those answers, but I might get Ray to reshare the presentation, and we'll get stuck into it. Just trying again. Yeah, OK, thanks, Ray. You can see the slides, right? Yeah, OK. Thanks, Ray, continue.

So essentially, we've structured our presentation according to three topics. We'll be looking at the similarities and differences between evaluation and research according to the purpose, the process and the end result. On the next slide, we have a really useful diagram from an article written by Dana Wanzer on the different relationships between evaluation and research. I think this is quite an interesting one, because it shows all the different ways of thinking about how research and evaluation intersect, and that there are quite a few different views out there. So again, Flo, if you're able to share the poll there, we wouldn't mind getting people's insights on what they think the relationship is between research and evaluation. We have six different options there, and option F is something else.
So if your view is not actually listed there, then please feel free to mention that as well. And we also recognise that views may change throughout the discussion as you talk with your peers, particularly in the breakout rooms; potentially even after this presentation, your view may shift.

OK, yeah, so most are in that category D, where there are differences and similarities, with a bit of overlap. I was kind of hoping there was going to be something in F, something else, so that I could hear people's unique views. OK, yeah, that's really great to see. Oh, Martina could go with F. OK, that's great to see. Maybe, Ray, if you could reshare the screen. Thanks, Flo, for getting those polls up.

OK, so we'll get into the presentation, firstly looking at the purpose. For evaluation, an often-cited definition comes from the theorist Michael Scriven, who defined evaluation as the process of determining the merit, worth or value of something. So it's about providing specific and applied information to better understand and improve the effectiveness of a course of action. It's often very specific to a particular program and about determining its effectiveness. The questions guiding the evaluation are often developed by the primary intended users of the evaluation findings, so in essence, those who are commissioning the evaluation. This can be compared with research, where the ultimate purpose is theory testing and producing new knowledge. It often involves more generalised findings, its merit is judged by other researchers in that field, and the questions are often developed by scholars in the field. Ray, if we can move on to the next slide. Thanks.

So given the differences in purpose, evaluation and research questions are often framed in different ways. We have up here on the screen that evaluation questions tend to make more of a judgment about how good the program was or how well it was done. The examples we have here are: how well was the physical activity program implemented? How effective was it in improving students' mental health? And to what extent did the program provide value for money? Research questions, however, tend to be a bit more neutral and general. They often draw conclusions about how things work in the world rather than being focused on a specific program. So the example we have here is: what is the relationship between physical activity levels and mental health in Australian high school students?

So now moving on to the process. In our view, the greatest overlap between research and evaluation is in the process of collecting and analysing data. With the example from the previous slide relating to physical activity for young students, similar methods might be used to recruit students, measure their physical activity levels and assess their mental health. But looking at the differences: in evaluation, the time and resources for data collection are often set by the commissioner and tend to be a bit more constrained. If the program budget is modest, then usually the evaluation budget is also modest, though this is not always the case. And timeframes may be driven by fixed deadlines for re-funding decisions or because the evaluation is feeding into a broader review. The timeframe and resources for an evaluation are generally determined by the program owner rather than the evaluator, and this may also narrow the choice of feasible methods.
Comparing that to research, the time and resources for research are usually more in the researcher's control, though we appreciate this is only to a degree; you know, funding is competitive and all of that. Ray, if you're able to move on to the next slide. Thank you.

So now looking at the end result. The end result looks slightly different between research and evaluation, and it's often used in different ways. Evaluation findings can be used to improve how a program is working, to decide about future funding or expansion, or to provide accountability for funding that has already been spent. The end result is usually a findings report to the program owner or the evaluation commissioner, and the report may also be published. Looking at research, publication is the key driver for research projects, since that is the main way that new knowledge is communicated. So research findings may be shared in journal articles, industry reports, conferences or other channels.

And just looking at some final take-homes, both for commissioners and for evaluators. First, some take-homes for commissioners. Essentially, don't be scared to ask value-laden questions, for example: did we do a good job? How good is good enough? When planning an evaluation, part of the job is to make explicit with program owners and stakeholders what is considered good, and clearly articulating the parameters of how worth will be measured is often actually harder than it sounds.

Some take-homes for evaluators. Firstly, don't be scared to make an evaluative judgment when answering key evaluation questions. Remember, the definition of evaluation that we looked at earlier is about determining the merit or worth of something, so it's important to actually take that step and draw a conclusion based on all the data. Of course, that judgment should be based on careful interpretation of evidence that has been systematically collected and rigorously analysed, and the evidence behind your judgment should be clearly explained so that readers can firstly understand how you came to that judgment and then decide whether they agree. It's really important for evaluators not to just present all the data and leave the people who commissioned the evaluation to decide what to do with it all; that is essentially why they engaged the evaluator, you know, to make a sound judgment about what it all means.

The second take-home is, when you're prioritising the use of evaluation resources, to think about the decisions that need to be made based on the evaluation results. The evidence that you collect will need to be robust enough to make those decisions with confidence. And this is really important, because misunderstanding the difference between research and evaluation may create unrealistic expectations for both commissioners and practitioners around methodology: both what can be achieved within the available time and budget, and what's necessary for the particular program and the decisions that need to be made.

So we have a few discussion questions, and this is where we might go out into breakout rooms. The three that we've put here are: why does this distinction matter? Then one specifically for commissioners: does the distinction between research and evaluation cross your mind, and how does it impact how you write an evaluation brief?
And then for evaluators or researchers: has there been a time when you felt research and evaluation were confused, and what was the impact on the project? So I'll just quickly check in before we do go into breakout rooms. Ray, was there anything further you wanted to mention before we have a bit of a discussion? I thought it might be good just to ask if anyone has any questions or comments before we go into breakout rooms. There have been a couple of comments in the chat. It might be good to hear from you, Martina, about your F, something else.

Sure, I can share my F. I mean, I voted D, but when you said it would be good to hear an F, I thought, no, I could come up with an F. I think we could view research and evaluation as separate activities, not the same thing at all. Research being the set of methods that we use to build our understanding of the world, collect data, understand that data, that sort of thing. And then evaluation being the logic of evaluation and how we actually make those judgments. That logic of evaluation can use the evidence generated through research to make evaluative judgments, but it could also use something else; the logic of evaluation doesn't necessarily have to be applied to research. So, yeah, maybe they could be separate things where one can be used for the other. That's one idea anyway. What does everyone else think, that they could be different things? Any comments on that one?

Julie, did you want to speak to the comment you made in the chat about domain two of the AES Evaluators' Professional Learning Competency Framework? Yeah, I can talk to that briefly. It's really about evaluative reasoning and what the logic of evaluation involves in terms of identifying criteria of merit, setting performance standards or thresholds, choosing measures, and then synthesising the information you have to arrive at some sort of evaluative judgment. And there are evaluative actions, coming from Scriven, that involve things like grading, rating, scoring, ranking, comparing and attributing. These are really essential things for evaluation.

That's a really nice way to set out the logic of evaluation; I really like that. I think as well, those adjectives that Julie just used, or those verbs, sorry, verb is better than adjective for that word, those verbs are not necessarily part of the research process. They're an add-on, something you can use research to do, but doing those things isn't necessarily research; yet they are fundamental to evaluation, a core part of that activity.

Would you agree that evaluation, by necessity, involves collecting some kind of evidence? You have to have something to inform your judgment. But I mean, we definitely have people who make judgments without evidence. Are they following the logic of evaluation in that? Are they? Maybe their evidence is gathered through methods that are not research, perhaps, I don't know, thinking out loud here. Part of it depends on how you define research, I think. Yeah, I think that's true.

I think it's neatly contained in the little statement "fully described, fully judged". You do need research methods to come to a description of what the situation is and whether the change being thought about happened or not, but you also need to fully judge. So "fully describe" and "fully judge" are a neat little combination, I think. Where does that come from? Robert Stake, I believe, in the 1990s. Fully described, fully judged.
Yeah, that's nice. Chloe, you had your hand up. Yeah, I just wanted to refer to my own experience of when the difference between research and evaluation became very evident, particularly in the health sector. Overall, in the approach, especially when you're talking about action research, there are lots of similarities. Where I realised the difference is around the questions, and you had a slide around the questions, the difference between research questions and evaluation questions. Evaluation questions are really instrumental, and there's also the primacy of the questions: the questions are what drive the nature of the inquiry, the evaluative inquiry. They come first. Whereas with a research lens, you may start with a conceptual framework, like the Consolidated Framework for Implementation Research, other types of framework, behaviour change frameworks, and this is your starting point. Then you will use constructs that you will test, acceptability, desirability, feasibility, all these kinds of things. But your starting point is the conceptual framework; the questions come second. Evaluation is the other way around: the questions come first, and then you pick from different conceptual frameworks or methodology toolkits to support that, with a view to informing a decision. I think that's my experience of the difference between the two, which, at the end of the day, is, you know, a huge overlap.

Kate Williams. Oh, hi. Thank you for the presentation, it's been really good. I think the way I think about it is that I agree with Flo that you have a different starting point and you're trying to find out different things. The questions you're asking are quite different. Research is really trying to understand how the world works, you know, what works, how it works, does it work. And then evaluation is further down the track. When we know, say, that a particular intervention has been demonstrated to have beneficial effects for a particular population, we want to know: does it work, can it work, in the real world under these circumstances, not in ideal conditions? So I think we do have quite a different starting point, maybe a bit further down the track. We draw on the findings of research and the conceptual frameworks, but we have a different, much more practical starting point, I think. Yeah, I tend to think of it in that way too: research is how does the world work, and evaluation is how does this program work in the world? Yeah.

Yeah, Chloe. So I find this conversation interesting, and I work specifically in the domestic, family and sexual violence evaluation space, which is probably a particular context where transferability is very, very low. There's also very limited evidence on a lot of what works; I think no one knows what works. So what evaluation often is in this space is trying to formally present and understand what practitioners have developed, which has never really entered that kind of formal research. So as I said, there's very little research informing our evaluation space; a lot of it is practitioner knowledge, and it's about translating practitioner knowledge. And it's largely because transferability in this space is very low and evidence for what works is close to non-existent around perpetration, for example. So it's very pragmatic. When you say transferability, Chloe, do you mean transferring from one context to another? Yes. Yep.
So men's behaviour change programs are all about context: the communities they're from, their background, what the legal system is like over there, whether participation is voluntary. So there's just very little that transfers, and there's little belief that something that would work in one place would work in another. Even if you did some sort of decent cultural adaptation, it would essentially be a different program in the end anyway. So what you're trying to do is take a kind of systematic approach at the local level and build on that. Yeah. And often it's just understanding context and trying to document what's being done and what the service models are. Even before we think about outcomes, it's a lot of process evaluation, a lot of developmental evaluation; outcome and impact evaluation is very far down the line for us. We're just trying to improve readiness, document what's being done, and consolidate that knowledge of perpetration from practitioners in different spaces. We're just trying to bring knowledge together.