So one of the key takeaways from the blog was: get to the point. When I look at a report, I go to the start and try to find which page the first finding appears on, after the executive summary. I want it to be page four or five, but it's often in the tens, and can even be in the twenties, because people use that introduction-methods-evaluation-questions structure. It's a long, long time before you get to the findings. You can tell people they don't have to read the first 20 pages, but a lot of them won't take that advice; they'll start at the beginning and stop when they get bored. So if you don't get to the findings early, I think you're going to have a problem. There are a lot of things we feel need to be in reports, but my feeling is that if most readers don't need to read them, it would be better to put them in an appendix or a technical report. That might include context not needed by most readers, and even results that are not central to what you are trying to say.

All right. The next point is related: report length. We always talk about keeping reports short, 20 to 30 pages. It's very hard to do a lot of the time, but it's a good goal and I continue to strive towards it. A really short executive summary can be useful for people who need to write a brief on the basis of the report, for example; that can be a page or less. And that doesn't rule out a slightly longer summary, which can sit at the beginning or the end. The 1:3:25 model is often mentioned as something people like to use.
That's one page of a very short summary, which could be something visual like an infographic that catches the eye; then three pages of a traditional executive summary; then 25 pages of actual content, with your appendices at the end. That's one way of structuring a report. The thing I like about that one page is that within government it is often the case that somebody is writing a one-pager on your report, so it may as well be you, or the people who were involved in producing the report.

Okay, developing the best structure. The blog set out three possibilities, but part of me thinks that I've read many evaluation reports which don't do any of those three things, so what we're really talking about is who gets gold, silver and bronze among ways to structure a report. Like Mary, I wouldn't give this one gold; I'd probably give it bronze, but it could work. One thing I wouldn't do, though, and I have seen it, is listing the evaluation questions and then not answering them. It's so frustrating as a reader to have these questions, which are often quite finely honed, and then not be able to put your finger on the answer. So if you go this way, I would drop the evaluation questions out of the main report.

Following on from that point about evaluation questions and structuring the report: this one is really about making sure evaluation questions are evaluative. If we're going to use them to structure a report, it's important they are evaluative. By that we mean "so what", not "what's so". Sometimes you come across evaluation questions like "how many people did this program reach?" that aren't really evaluative; they don't give us a sense of how good this thing was, how beneficial it was, or whether it was worth doing, which is the magic of evaluation.
Another reminder: there are some example questions there, often covering process, outcome and economic questions, and it is important to make sure they're evaluative in the true sense of the word. Great. And if you're using the OECD DAC criteria, my advice would be to answer the questions that sit under each of those headings, not just to talk around the criteria. I can think of examples where somebody says, "What I want to know is whether this program is effective," and they're told, "Turn to the effectiveness section, pages 54 to 58," and they'll say, "I've read them; I still want to know whether the program is effective." That's perhaps why I agree that evaluation questions which prompt evaluative answers are the best way to go. If you do go this way, there are six good questions there; they might not be perfect, but they're perfectly good questions.

Okay. The second point of the article is to report strong findings. Hard to disagree with that; the question is how you do it. One thing I would recommend is to road test contentious elements, whether through a results workshop where you're doing hypothesis development and testing, or through informal meetings. The other thing this finding prompted for me is the idea that you've got to make sure you're pushing hard enough. People who write reports can be keen to satisfy their clients, and I think satisfying them in the first instance could be letting them down. You might be better off being a bit more courageous and then having some difficult discussions. I can think of reports where I've been asked to tone it down, and I toned it down.
But it was good to have that discussion, to know how far I could go and when I really did have to stop. The other thing that's important to remember is that you've got a range of important readers, and it's important not to focus only on those readers you're in frequent contact with. They are the ones who want the detail, and who can and will read a long report. But there are lots of other readers making important decisions based on your report whom you may not be in frequent contact with; indeed, you may never meet them. You still need to be thinking of those users and writing for them. Often the people managing an evaluation are below the director or assistant secretary level at the national government level, so think of those other users, even though you may never know them.

The related point is scope. As I mentioned earlier, try to limit how many sub-questions you've got so you're not overwhelmed with a million questions you need to answer. Keep the scope appropriate for the size of the project and what you are collecting. People are tempted to throw in more questions and not necessarily remove any, but for report structure and reporting, it makes your life a lot easier when you have a succinct, limited number of questions to answer.

Great. And then our last reflection from Francis and me on the blog and what it made us think of. I love this quote from Woodrow Wilson: "If it's a ten-minute speech, it takes me all of two weeks to prepare it. If it is a half-hour speech, it takes me a week. If I can talk as long as I want to, it requires no preparation at all. I'm ready now."
I just love that idea that it's hard, and does take a lot of work, to make things short and work out what the key messages are. Some advice I would offer, and I have been on both sides of this: there are lots of times where you submit a draft report that's not really as evaluative as you would like because you're still thinking things through, and you're almost asking for feedback via the draft. If the plan is that there will only be one draft report and then a final report, that's not the way to go; you need to submit a good draft report. Where I'm working now, when we contract out evaluation reports, we require that the executive summary be in the draft report. It's not a complete report unless it has the executive summary. You do occasionally have situations where people say, "We'll do that once we're all agreed on the content." I can understand why people might want to do that, but it's not the right solution. The other piece of advice I would give is to do everything you can not to write a long report which you then shorten. It's really inefficient, and it's so hard to get rid of words and ideas you have crafted. You're much better off starting with a very short report, leaving things out, and then considering whether to add them in.

Okay, so those are some reflections we had on the basis of the blog that we wanted to share with you. What we're going to do now is move into quick small groups focusing on two questions. What could we do, and indeed what could you do, to increase the effectiveness of evaluation reporting? And is there anything else you would add to the major points we've been discussing? There's one thing we want to add, but is there anything else you would like to add?
So we'll have quite a short group session, probably about five minutes. I'll set up breakout rooms; have you got the number of participants? Yeah, that's it. I'm happy with that. Cool. Okay. Actually, why don't you leave those questions up, Francis, just to help us. So, does anyone want to volunteer a summary of some of the interesting points from their breakout group?

I'll just say briefly that I think infographics are a really great way of summarising report findings. You can include images, quotes and key results in an infographic, so I think the 1:3:25 model is pretty appropriate for writing up evaluation reports, and it reminds us to try to develop something eye-catching to accompany the main evaluation report. Yep, good one. Thanks, James.

I'll pop my hand up. We had a big discussion about how a report is only effective if people read it, and the thought that maybe 80% of readers only read 20% of the report. On readability, we talked about diagrams, dot points, good referencing, all those desktop-publishing things that are critical to making a report effective and readable, and particularly making sure it's easy to follow, with a clear structure. Yeah, good one. Thanks, Duncan.

Yes, I can think of a report I read recently where the executive summary didn't mention that the report included recommendations. When I got to page 75 and saw recommendations, I was amazed. If I'd been reading it as an executive, I might have read the whole report and not noticed there were recommendations, unless I'd looked at the table of contents. Yeah. Laila?
Well, in addition to the suggestions that were given, our group also added video: short videos as part of the report, such as engagements with some of the participants, if it's okay for them to be identifiable. Short videos as part of the report could be very good, or other aspects of accessibility, for example an audio file to go with the report. Infographics and images were already mentioned. Yep. Great, thank you. Fiona?

Thanks, Ben. Our group talked about starting with understanding who the audiences are, so that the questions get answered appropriately for the different audiences, recognising that you can have a provider, a funder, and government agencies and ministers: multiple levels of people who are interested in the report and might want to read it, but who might have different levels of involvement in it. The other thing we talked about was creating a situation where people felt able to talk about the things that perhaps didn't work so well, so that things can be changed and improved for the future. Saying everything is going well is very nice, but there are no learnings in it. So creating a culture where that is possible is part of the process. Thank you.

Okay. Anyone else want to throw anything in before Francis and I give you the one thing we would add to the main messages from the blog? Can I add something? Yes. At the end, our group heard from one of the participants, who works as an internal evaluator in an Aboriginal-controlled organisation, and she was talking about different ways of reporting that are not a physical 60-page report.
And actually, if we're talking about the heart of the matter being the use and utilisation of the findings, what we haven't touched on in great depth here, and it would be lovely to one day, is what you can do that's not a report to get the messages out. Yeah. Get the messages out, but also send them back to the people who participated in the evaluation, so they know their voice has been heard and this is what we heard. How do we actually do that? And do we promise it and then not deliver it? Because I think we occasionally do. Every time. Yeah. Exactly. In terms of getting back to the end users, that's a great point. Okay. Thank you.

So I guess we were very happy, and surprised, to see recommendations in the middle of our word cloud, because we wanted to have a bit of a discussion about how and whether to include recommendations in reporting. What we would add is that it's important to decide early whether a report is going to include recommendations, which are answers to "now what" questions. Either option is possible, but whatever you choose, and we'll come to this on the next slide, we would advise: do not develop recommendations alone. You will struggle with the budgetary and political implications, and even if you get all of that right, you cannot deliver the psychological benefits of jointly determining action. I'm saying this in part because I think it's what works best, and, moving to the next slide, Francis, it's also what my favourite evaluation theorist argues.
The idea there is that we know from participatory and empowerment approaches that there's something incredibly powerful about being involved in choosing a course of action: it gives people ownership of a decision, a personal stake in it, and a commitment to see it through. So even if you feel you know what the recommendations are going to be, there are reasons to allow, and facilitate, a process where people get together and are given the opportunity to develop them, even if you turn out to be right. If you are developing recommendations, we would recommend building in a process to develop them jointly with client and stakeholder input. That can either be the final step towards a final report, an action-planning workshop, or an action-planning process following the final report, in which case it could be run by the evaluators or by other experts in facilitation and action planning. The last thing we would say, and it does make things more difficult when you do it in workshops, is to make sure that as evaluators we are speaking for the data and the findings. The action-planning workshop is not an opportunity to wipe the slate clean, blue-sky it on a fresh piece of butcher's paper and start again; recommendations can only be made as they are linked to findings.

Okay, so I'd like to thank you all for coming, and I hope you've found the blog, and our discussion of it, interesting. A couple of other resources I would recommend: Kylie Hutchinson has written a short book called A Short Primer on Innovative Evaluation Reporting, which goes into a little detail about the alternatives to a written report. I'd also recommend Stephanie Evergreen's website. And the AES often runs workshops on reporting.
I haven't been to all of them, but I do know that the one Anne Markiewicz runs is excellent, and I would recommend it to anyone. Lastly, I just wanted to let you know that we meet on a Thursday afternoon; I think it's the last Thursday of the month, but it's definitely the 24th of August next month. That is a joint AES seminar event, which is always one of the highlights of our year and brings in a range of people from different organisations with slightly different mindsets. That one is going to be on place-based evaluation. So thanks all for coming, and I hope to see you on the 24th of August.