So we're live on YouTube now. Good morning, everyone, or good afternoon, or good evening, wherever you are in the world. My name is Matt Grainger, and we've got a tale of two Matts today: Matthew Page is joining us for this workshop on reporting guidelines to ensure transparency of evidence synthesis, and when and how to use them. The session is being live streamed, and if you have any questions for Matt, you can ask them via the ESMARConf Twitter account by commenting on the tweet about the session, and also in the Slack. We'd like to draw your attention to our code of conduct, which is available at the ESMARConf website, www.esmarconf.org, so have a look at that if you have any concerns. So, without any further talking from me, let's get on with our interesting workshop on reporting guidelines. Thanks, Matt.

Great, thanks so much, Matt. So welcome, everyone, to this workshop on reporting guidelines to ensure transparency of evidence synthesis. I'm based in Melbourne at Monash University, so it's late afternoon, early evening I should say, for us here, and welcome to everyone around the world, wherever you may be. I just want to declare that I'm currently supported by a Monash University future leader postdoctoral fellowship. I'm going to largely focus on one of the reporting guidelines available for evidence synthesis, the PRISMA 2020 statement, and I need to declare that I co-led the development of that statement, but I have no commercial interest in the use of this reporting guideline.

So for this workshop, I'm going to start off by introducing attendees to the different reporting guidelines for evidence synthesis that are available, with a particular focus on the PRISMA 2020 statement. I'll then give you all an opportunity to apply the PRISMA 2020 checklist to a published systematic review; there'll be a 20-minute activity that you'll each be able to do individually. Then I'll bring everyone back and we can compare my assessment of that paper with what you all found. And I'll finish off with a couple of slides outlining some future developments planned to better implement reporting guidelines like PRISMA into practice. It might also be good to get some ideas you might have, or hear of any challenges you've had with using reporting guidelines, if you've already become familiar with them.

So why the focus on good reporting of systematic reviews? Essentially, systematic reviews need to be well reported because the people who end up using our reviews, be they patients, clinicians, guideline developers or other researchers, need to know what we did and what we found as clearly as possible. If we have clear reporting of our review, this should enable the use of our review in evidence-based decision-making: people can trust the findings of our review, or decide not to trust what we found because they determine that we used methods that were not optimal. Clear reporting also enables others who might be interested in replicating our methods to see if they find the same result. And I think one of the main causes of the reproducibility crisis, not just in healthcare but in all types of research, is the fact that it's hard to know what people actually did, in terms of what databases they searched, what outcomes they were interested in extracting from the papers, and things like that.
The other thing about clear reporting is that it enables us to assess the trustworthiness of the review findings and the applicability of the findings. If readers know that our findings only extend to a particular population, then they're better able to use those findings appropriately to guide their practice and policy.

So I'll go to the next slide. Another issue about reporting is that we have pretty good evidence from many studies conducted over time, including this recent one done by one of my colleagues, Phoebe Nguyen at Monash, that have evaluated how well systematic reviews are currently reported. Down the left-hand side we have some items recommended for reporting in a systematic review, and a comparison of the reporting of systematic reviews in 2020 versus an earlier sample from 2014. As you can see, across the different items there's quite a bit of variability in what gets reported. Some things tend to be reported quite frequently: because of journal requirements, for instance, we see that a lot of authors are prompted to report their conflicts of interest. But things like whether they worked from a protocol, or how they prepared the data for meta-analysis, are reported much less frequently. And things haven't really improved that much over time; many of the items that were poorly reported in 2014 haven't seen a large increase in more recent years.

So a way to help prompt better reporting of systematic reviews has been to develop reporting guidelines for authors and other people in the publication process. Reporting guidelines are designed to provide guidance to authors on what should appear in a systematic review manuscript; they're designed to improve the accuracy, completeness and transparency of manuscripts. Most reporting guidelines are presented in the format of a checklist of recommended items. Several include a template flow diagram to help authors display the flow of records and reports throughout their study selection process, as well as some explanatory text and exemplars of high-quality reporting.

Now, what might surprise some people is that there's quite a large number of reporting guidelines available for evidence synthesis. At least in medicine, PRISMA is probably the most well-known guideline, but other disciplines have made their own guidelines for their area of research. So we have ROSES, developed for reviews in environmental and conservation research; the Cochrane Collaboration has the MECIR standards; the American Psychological Association has the Meta-Analysis Reporting Standards (MARS); and MAER-Net has a reporting guideline for evidence synthesis conducted in economics. I published a paper a couple of years ago in Research Synthesis Methods which has a table, current as of around 2020 or 2021, illustrating the different reporting guidelines for different disciplines, so it's worth looking at that. In addition, a good library of reporting guidelines, not only for systematic reviews but for other types of research, is the EQUATOR Network library, which includes a database of about 500 reporting guidelines for various study designs in total. That figure of 30 reporting guidelines for evidence synthesis is essentially where I got that information from, and there are more being developed as we speak.

And so what about PRISMA?
For those who haven't heard it, I really like this joke from Andrew Booth. I think it resulted from the fact that, as a way to visually represent their systematic review on Twitter, rather than presenting, say, a forest plot from a meta-analysis, more often than not people present the flow diagram showing the flow of studies through the study selection process. That led to the observation that this is what PRISMA might really stand for: people letting readers know that "Please Realize I Screened Many Articles", and that's why I want to show you this as the key figure for my systematic review. But no, PRISMA does not stand for that. What it actually stands for is the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement.

It was originally published in 2009, and I think it was one of the earliest reporting guidelines for systematic reviews; it was very influential over the subsequent decade. My colleagues and I convened a group to update it to PRISMA 2020 over a period of a few years. PRISMA 2020 is essentially an update of the original PRISMA statement, which we published in 2021, and since that time it's already gone on to be cited more than 23,000 times, which I think reflects that a lot of systematic reviews are being conducted. Because we know, from some of that previous work looking at the quality of reporting of systematic reviews, that not all review authors actually cite a reporting guideline; maybe it's only about 70% who do. So there's still a large group of review authors out there who don't consult, or at least don't acknowledge consulting, a reporting guideline.

PRISMA 2020 was designed primarily for authors who are doing a systematic review of studies that evaluate the effects of health interventions, irrespective of the design of the included studies. So it could be a systematic review of randomized trials or of non-randomized studies. But we do know that a lot of the items that appear in PRISMA 2020 are applicable to other types of systematic reviews. In the development process for PRISMA 2020 we did a mapping exercise where we looked at all the items across existing reporting guidelines and noticed there was quite a lot of similarity. So if there's no suitable reporting guideline developed specifically for your area, then of course I'm going to be biased and say consider consulting PRISMA 2020, but definitely make use of the ones available for your discipline first.

So who should use PRISMA 2020, and for what purpose? I think there are several parties who might find it useful to access a reporting guideline like PRISMA 2020. First of all, authors, when they're actually drafting their review manuscript, in that it should prompt them on what should appear in their manuscript as they go along writing it. In addition, peer reviewers who are assessing a submitted manuscript might find it helpful to fill out a checklist so that they can convey to the authors what's missing from their report.
We know a lot of methodologists have taken up a checklist like PRISMA 2020 to assess the reporting completeness of published reviews, so that we can evaluate the lay of the land when it comes to systematic reviews: which areas are people struggling to report, and which methods are reported consistently and well. What we would not recommend is using PRISMA 2020 when you're interested in assessing how well a systematic review was conducted. The reason is that, for the most part, PRISMA 2020 at least aspires to be agnostic towards the particular methods used. It can't be fully agnostic: including a recommendation to report, for example, the methods used to assess the quality or risk of bias of your included studies effectively implies that you should have assessed the risk of bias, or conduct, of the included studies. But it doesn't say how many authors should have conducted that assessment, or what tool you should have used to assess risk of bias. So it's essentially a prompt to indicate what methods were used under these broad categories of steps in a systematic review, rather than a list of which particular methods you should use. Other tools have been designed to indicate what methods their authors believe everyone should use in order to conduct a high-quality review; that sits outside the scope of PRISMA.

And how to use PRISMA 2020? What we see is that, for most people, their first exposure to it is at the point of submitting their article to a journal. Some journals will ask you to fill out a checklist and submit it along with your manuscript files. The problem with that is that if that's your first exposure to it, you've probably got to the point where you've worked on your paper for hours and hours and days and days with all your collaborators, you've made the decision that your paper is ready for submission, so you probably already think it's perfect and you're probably resistant to making any changes just to comply with what PRISMA recommends. Accessing it early in the writing process means it won't come as a shock at the end, when you come to submit your paper and suddenly realize you were meant to have documented X, Y and Z. And it's probably useful to consult it even earlier than the writing process, because some items recommended in PRISMA assume that you collected information on certain steps of your review process at the time it was being conducted. When it recommends that you report the number of records you screened, hopefully you will have had that information available to you before you sit down and start writing. So reading it as early as possible is the way to go.

I'll just give you a bit of an overview of the content and structure of PRISMA. It's divided into seven sections and contains 27 items, though some items have sub-items, where we broke certain steps down into their component parts as a way to help increase the clarity of the guideline.
It effectively fits across two pages, and it essentially provides recommendations on all the components you would expect to see in a systematic review manuscript, following the introduction, methods, results, discussion format. The checklist itself provides space for you to indicate where each item is reported in the paper, if that's something the journal wants you to produce for them. So that's the first page, then the second page covering the results and discussion, and then some general administrative or other sections at the end.

You can access the PRISMA checklist at the PRISMA statement website. Some of you might be aware that it was down for a couple of weeks last month, but it's now back and available. We have PDF and Word versions of the PRISMA checklists that you can freely access and download. Luke McGuinness, a colleague of ours, helpfully created a Shiny app for those who don't want to fill out the checklist as a Word document; you can fill in your PRISMA checklist online, available at the link down here at the bottom. And another helpful group, Neal Haddaway, Chris Pritchard and Luke McGuinness, have created a Shiny app for you to produce PRISMA 2020 flow diagrams. We've also got a link down here to a paper published in Campbell Systematic Reviews which outlines how to use the app and what some of the functionality is. I'll also point out that I will put all these slides up on the Open Science Framework at the end of the session, and hopefully they'll be linked underneath the YouTube video of this as well.

Now, there are two main papers for PRISMA 2020: a checklist paper, which is just a simple outline of the content of PRISMA, and a much larger document called the Explanation and Elaboration document, which goes into great detail about what needs to be reported in a paper and gives some exemplars picked out from the published literature. For each of the 27 items in PRISMA 2020, the Explanation and Elaboration document has the item itself (so item 4 asks people to provide an explicit statement of the objectives or questions that the review addresses), an explanation of why we think people should report that, and then a list of what we call elements, which break down in a bit more detail what the reporting recommendations are. Some have a couple of bullet points and some have several more, especially when we get down to the meta-analysis component, for which there are several options for what could be reported. And finally, we have an exemplar, an example of good reporting, which might help people consolidate their understanding.
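As a side note for the R users here: as far as I'm aware, that flow diagram Shiny app is built on top of a PRISMA2020 R package, so you can also script your diagram rather than use the web interface. Here's a minimal sketch, assuming the package's CSV-template workflow; treat the file and function details as indicative rather than gospel, and see the Campbell Systematic Reviews paper for the authoritative walkthrough.

```r
# Minimal sketch of scripting a PRISMA 2020 flow diagram in R,
# assuming the PRISMA2020 package's template-based interface.
# install.packages("PRISMA2020")
library(PRISMA2020)

# The package ships a CSV template with one row per box in the diagram;
# you fill in your own counts (records identified, screened, excluded, etc.).
csv <- read.csv(system.file("extdata", "PRISMA.csv", package = "PRISMA2020"))

data <- PRISMA_data(csv)                      # convert the template into plot-ready data
PRISMA_flowdiagram(data, interactive = FALSE) # static diagram for a manuscript
```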
All right, so I might just stop there and see if there are any questions people have. I think you can raise your hand, or otherwise unmute yourself. No, I think everyone's good.

All right, so what I want to do now is turn to the practical exercise. I've put things up onto this website here. Oops, I see something in the chat. All very clear, thanks Wolfgang. So on the Open Science Framework I've put up two files: a PDF of the article that we're going to have a go at assessing, and a Word document which includes the PRISMA 2020 checklist, with options for you to fill out whether each item in this systematic review, on the effects of wine on blood pressure, has been completely or partially reported, or not, and to give some support for that judgment. I'll put the link in the chat in a second, but essentially what I want you to do is see whether you think the authors have reported their review in accordance with the items recommended in the checklist. You would select "completely" if all components of the item were reported, "partially" if only some of the components of the item were reported, or "no" if you think the item hasn't been reported at all. Then in that final column you can indicate the page number and maybe include direct quotes to support your judgment for that item.

I'll stop sharing just to show you the website. So here's what it should look like; I'm going to put the link in the chat. Essentially, if you go down to the workshop material, you can download the Word template for the checklist as well as the PDF. I'll stop sharing and go back, and the link to the files I'll put in the chat here, to everyone. So that's the link in the chat.

What I'm going to do is ask you to do this at your own pace, by yourself. I know some people are following this stream on YouTube as well. We'll come back in, let's say, 20 minutes, and then we'll go through what we found. So yeah, have a go at that. If you have any questions, I'll stay on for a little bit, but then I'll mute myself for a while too. Any questions for now? I'll go through the answers with everyone, and I'll put all the documents online at the end as well. But it'd be good to dig in and have your own go at seeing how well you think this systematic review has been reported, and then see what we all found. I'll leave the instructions up as well, just so you can see them.

Yeah, I'll just give a couple more minutes; no matter where you got up to, don't worry, and then I'll start going through some of the answers from my assessment. Just a few more minutes.

Okay, I'm back now to go through answers. First of all, I just want to check you can hear me. Matt, can you let me know that I'm actually broadcasting? Yep, you're still going. Fantastic. I came back to my computer and found a message saying Zoom had unexpectedly crashed, apparently three times, during the time I stepped away from the computer. So I did jump back in. So yes, good.

All right, then, I'll start going through some of the answers for this activity. First things first, though, I want to point out a couple of things. This review that I've asked you to take a look at was published before PRISMA 2020 was released, and there are several items in PRISMA 2020 that weren't in the original PRISMA statement. So it's not as if I expected the authors to have reported everything recommended in PRISMA 2020, given it didn't exist at the time the review was conducted. However, there were some items that have been around and recommended for many years that I was surprised to see missing from this report. And one other thing I should note about this report, something I've started seeing more commonly in the last few years:
Authors sometimes put things like "a PRISMA-compliant review" in the title of their review. I'm not sure who recommended that they do that, but I strongly advise against it, because, as you'll see in this example, a review that claims to be PRISMA-compliant can, on many elements, unfortunately not be compliant even with the original PRISMA. So I don't think we need to be putting that label in the titles of our reviews. But some people see that others are doing it and think they should start doing it too; it has increased slightly over the last few years, and I hope it stops.

So I'll go through some of the items. I've got answers to all of them, and as I said, I'll put my completed assessment, or at least an assessment completed by myself, onto the Open Science Framework at the end. But it would be good to hear your thoughts as well on certain items. I don't know if we'll be able to speak about every single one given the time, but I'm very happy to hear any counterpoints to my assessment, and any questions of clarification about the checklist items themselves; some of you might have found some more difficult to assess than others.

I'll start with the first two items. I didn't ask you to assess the abstract, but at least I'll show you my assessment. For whether they indicated that this is a systematic review in the title: they actually didn't; they called it a meta-analysis. We made a stricter recommendation in PRISMA 2020, encouraging people to use the term "systematic review" rather than "meta-analysis" if they have in fact conducted a systematic review, given that we see a meta-analysis as one component of a systematic review, namely the statistical combination of studies. So we're maybe harsher there than the original PRISMA statement was, and in that sense I didn't say that they had reported the term systematic review in the title.

And on the abstract, which I didn't ask you to assess because I didn't include the checklist for abstracts: there were a few things that weren't covered in their abstract, which I know is sometimes hard when journals have word limits, but certain things we wanted to see were an indication of whether risk of bias was assessed, what the actual direction of effect was, any limitations of the methods, and the funding or registration of the review. We know that a lot of people throughout the world don't necessarily have access to the full review, and that's why we want to see as much information in the abstract as possible. Alternatively, people could start posting preprints of their systematic reviews, which, ideally, anyone in the world should be able to access freely.

I see some things coming in the chat. Yep, just a question: I will be sharing all my slides as well; I'll put the slides and the answers up on the Open Science Framework. So those were the first two items. I'll probably just stop at a few of the items to open them up for discussion as we go.
Whether the rationale and objectives of the review were reported: I thought they were completely reported, having read all the introduction text. When I say "reported completely", I'm not necessarily saying they're reported the way I would have reported them, but I'm comfortable that the information provided is enough for certain items. I see Wolfgang's raised his hand.

Yeah, I'm just going to ask my question directly. So, okay, this meta-analysis is pretty succinct, right, pretty brief? The introduction is pretty short, and I was struggling with the rationale: is what they're providing sufficient? I did say completely, actually, so I was trying to be generous here, but one could also argue, well, no, you need to embed this much more broadly, much more detail is needed. So when it comes to assessing whether somebody has followed these guidelines, would you recommend that two people actually do this independently and compare their results?

Absolutely. In most of these studies where we're trying to document the reporting quality of reviews, we do tend to recommend two people do it, because there is an element of subjectivity in all of this. So yeah, I strongly recommend having two people do it, then discussing their discrepancies and trying to resolve them. I will also say that what I've asked you to do is just look at the standard PRISMA checklist. We have a lot more guidance in the Explanation and Elaboration document, which puts in bullet points some more detailed recommendations on what a rationale section should include. There's also an expanded checklist available on the PRISMA website, but just for time reasons I didn't get people to assess with that. It does include things like indicating the theoretical rationale for why this intervention might be effective, and why a review needs to be done on this topic at all. So there's a lot more they probably could have included; but, as you'll see, I'm somewhat inconsistent across this workshop in whether I'm generous or harsh. We'll see as we go along.

Okay. In terms of the objective, I thought it was complete enough: they say their objective is to carry out a meta-analysis of the effects of wine intake on blood pressure. At least in medicine we use the PICO framework: participants, intervention, comparator and outcome. They've effectively specified their participants, people with type 2 diabetes, the intervention, wine, and its effects on certain outcomes. So I was relatively okay with that.

Now, the next item, on eligibility criteria, has essentially two components, the latter of which was introduced in PRISMA 2020. It asks people to consider eligibility criteria for the review as a whole, but then also to say up front how they would be grouping studies if they were to do some synthesis down the track, which is effectively defining eligibility criteria for the syntheses themselves. I thought the first component, what sort of studies they would want to include as a whole, was okay, but there wasn't really any indication of how studies would be grouped.
So, if you were to do a meta-analysis of the effect of wine: does it have to be all red wine, or over a certain duration, and things like that. Saying up front what decision rules you have for combining studies, essentially defining the syntheses you plan to conduct, is something we think would be quite useful. Wolfgang, another question?

Sorry, just a general question. People are supposed to, and this comes a bit later, but just to give an example, indicate how they assessed risk of bias and what tool was used. That's supposed to be in the methods section. Now let's say there's nothing about it there, but then in the results section they explain: we used the Cochrane risk of bias tool, blah, blah, blah, here are the results. How would you score that? Would you still say that in the methods section they failed to report what they would do, even though it's fully explained in the results section?

No, I would not say they failed to report it. The thing is, I know the way the checklist is designed looks like it's suggesting that one must follow this structure to the letter. That's not the case. We do say in the PRISMA statement papers that as long as the information is reported somewhere, then we're happy with that. We know that not everyone can include all this information in the main manuscript file, and sometimes the information has to go into appendices or into repositories. So if I feel the information was reported elsewhere, not necessarily in the methods section, I would still say yes, it has been reported. Any other questions so far? You can raise your hands so I can see, or just start speaking; I think you can unmute yourselves. Nope. All right.

Okay. Information sources: this item is effectively asking what databases were consulted, and when. For this one, I considered it partially reported. The authors have pretty much just briefly indicated that they used PubMed, Embase and Scopus, and specified that they searched for articles published up to November 2018. What we strongly advise people to do now is to specify the actual date the search was run, given it could be any date within that period, and in terms of being able to reproduce the search, it would be much better if the actual dates were provided. There's also an issue where some authors say they've done their search for articles published up to a certain date, but don't tell you when they actually ran that search. It could have been a month or two later, and during that time more records can be added to those databases. So being more explicit about those dates is useful.

Now, the search strategy itself. I'll go to the audience: how did people rate this particular item? Does anyone want to have a go?

I said completely, but I was trying to be a bit generous here. I mean, you can always say there should be much more detail, right? This was a nice example of a PDF that you could read in 20 minutes; if it was a lot longer, we would need more time. So with any of these I felt like, well, there should be more detail provided, but I said okay. I mean, they just specified the databases.
They mentioned that they screened the references. Filters and limits used: I don't think they said that explicitly, but I feel like you also sometimes have to read between the lines. Do you have to explicitly say "we didn't use any filters or limits", or not, every time, for these items? To what extent do you have to read between the lines, which of course you always have to do when you're assessing the articles themselves? To what extent do you do this when you're checking whether people complied with these guidelines? It's tricky generally, right?

It is, and it depends. Being explicit, writing it down explicitly, is always better; don't make people guess. That's the ideal, and there are those of us who would expect that explicit information, just so we can indicate whether it appeared or not. But I do realize there are challenges with reporting in general. With this one, my problem was more about the fact that what I'm expecting, or encouraging, is that people give the full line-by-line search strategy exactly as run. What I found problematic here was that they gave you some MeSH and entry terms, but not exactly how they were combined and run in the databases. The original PRISMA was saying at least provide one of those line-by-line search strategies. But nowadays we think that, given you can export your whole search strategy after you've run it and include it as a text file on the Open Science Framework, people can actually take it, and if they want to update your review they can rerun it exactly as run, or look through the search strategy to see if there are any errors in how it was constructed that are likely to have led to certain papers being missed. So I definitely wanted a bit more information for this one. Yeah, Matt?

On this one, do you recommend having, like, an appendix with the search in, or, as you say, linking to it somewhere else? The ideal thing being to have your search somewhere findable and usable?

Yeah, absolutely, I think that's the ideal scenario. Neal Haddaway has been working with some librarians, like Melissa Rethlefsen; I think they've created a special repository of search strategies, where people can upload the search strategies they used in their review so that they're findable and reusable. I think that's a good way forward, and it's similar to the encouragement to share data for studies: this is an underlying material that likely played a significant role in the identification of your studies. So yeah, that's something we'd want to see more of.

All right, now the next item, about the selection process. I didn't really see anything about the selection process in this review, as in how many authors screened articles, or whether they used a two-stage approach, screening titles and abstracts and then full texts. I might be wrong, so if people disagree, you can point to the text; but when I looked through it, I didn't come across it, I thought. If you found something, then definitely, I'm happy to be proven wrong on that.
But I was expecting to see how many people, and who, were involved in this process, and didn't. On the other hand, there was a lot more information about the data extraction process, which I see as a separate step. I know you could assume that description applies to all the steps, but at least in this instance, this is where I'm being kind of harsh, or, I don't want to say demanding, expecting more explicit information. Sometimes it can just come down to saying something like: across all these steps, including risk of bias assessment, data collection and screening, two people were involved, and this is how the process happened.

I've got a question in the chat; did you want to ask that question in person instead? You can unmute yourself. I'll take that as a no and read it out: "Does one human plus machine learning count as two for you, for screening?" Yeah, it's not so much the number, it's who was involved in the screening. So if they indicated that one human and a machine were used to rule out clearly ineligible articles, then that would be fine. That's providing the information, so I don't need to know the actual number; I guess you could call them two entities, human and machine. So yes, as long as it's clear how that was done, then that's good.

Yeah, I've been super harsh here, and I'll probably get rid of that "not clear whether data were sought from study authors". Going back to Wolfgang's question about whether you have to explicitly say you didn't do something: I think there are certain elements where, if it just never occurred to you because you didn't need to do it, then do you really need to say it? Probably not. So, just going through: if you were applying your very harsh lens and wanted to know, then that sort of information was not there for this item.

Then the data items, or specifically the outcomes we're interested in. For this one I said, well, they've given us some information, such as what the outcome domains of interest were, like fasting glucose, insulin and HbA1c. But we'd encourage authors to provide a bit more information, such as what the eligible time points were at which the data were extracted. We know that people in certain fields extract every single result available for an outcome, and they might do meta-analyses using models that account for the dependency between them. But in other fields, at least in medicine, a lot more people tend to do meta-analyses where they include a study only once in the analysis. So by default they have to select only one time point, or one measurement instrument if, say, depression is measured using three different scales. And the hardest part is knowing, when they do that, why they're selecting those time points and analyses. Is it because of some clinical rationale, or is it because those measures and time points had the most favorable results? That's the type of thing we're encouraging people to provide a bit more information about: any decision rules they used to select the results to extract from their studies prior to doing their analyses.

And then in terms of other variables, again, they provided a summary of certain variables they collected information on, such as publication details, location and age. It was unclear if any assumptions about missing or unclear data items were made.
Again, I think that's another one where it could potentially be assumed that, given they didn't say anything, they did not make any assumptions about, say, missing age values or missing sex and gender and things like that. But if we want to follow the guideline to the letter, this is where we'd want to see a bit more information.

In terms of the risk of bias assessment, this one also required a bit of reading between the lines. They said that disagreements were addressed by consensus between reviewers, but it was still a bit unclear to me whether the assessment was done independently or whether they were just sitting side by side and doing it together. So some parts of it were a bit unclear for the risk of bias assessment. As I say, a blanket, more encompassing statement pointing out which methods or processes were the same across all stages of the review could have accounted for that. We're not saying everyone has to say "for screening, two authors independently did X, Y and Z, and then for data extraction, two authors independently did that". You can do that if you want, but it can sometimes be easier to use text that covers multiple steps of the process at once.

Now, in terms of effect measures used in their analyses, I think this was more straightforward: they said they were using a mean difference for their continuous meta-analyses, and that's essentially all I need to see for that particular item.

Then some of the synthesis methods. Again, for the grouping of studies, I think it was still a bit of a black box whether they were happy to include everything in one analysis or whether they were going to subdivide in certain ways, and whether that was pre-specified or done post hoc after taking a look at the interventions and the outcomes that were measured. So I wanted a bit more information there. In terms of whether they needed to prepare the data for synthesis in certain ways, again, let me see what people thought about this one. Any thoughts on the data preparation item?

So, I assumed, since the tables were showing means and standard deviations, that these were reported directly, and so there wasn't a need to convert data from, say, medians and so on; sometimes you can convert those. So, as always, I would have liked to see more details, but in general I thought, okay, it's pretty okay. So "partially", actually, or "completely". Yep. Any other counterpoints to that?

I was a bit more unsure, because, well, maybe I've worked too long in this area of reporting methods, but it's more that, having worked on many reviews myself, I know how often I come across having to do all these data conversions, at least in my field, and working with so many authors as well. It happens to me so often that I assume it will be a common occurrence that people have to impute missing standard deviations or convert medians to means (a minimal sketch of that kind of conversion follows below). In that sense, that's why I wanted to see that information, or even an indication that "this is the data; we didn't need to do anything, we just used what we had".
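Just to make concrete the kind of data preparation I mean, here's a minimal sketch of one commonly used set of approximations, from Wan and colleagues (2014), for turning a reported median and interquartile range into an approximate mean and standard deviation. This is a generic illustration with hypothetical numbers, not something the authors of the wine review describe doing.

```r
# Sketch: approximate a mean and SD from a reported median and interquartile
# range, using the large-sample approximations of Wan et al. (2014).
# Illustrative only; check the assumptions behind these formulas before use.
median_iqr_to_mean_sd <- function(q1, median, q3) {
  mean_est <- (q1 + median + q3) / 3  # mean approximated by the average of the quartiles
  sd_est   <- (q3 - q1) / 1.35        # SD approximated from the width of the IQR
  c(mean = mean_est, sd = sd_est)
}

# Hypothetical example: a study reports systolic blood pressure as
# median 128 mmHg with an IQR of 120 to 138 mmHg
median_iqr_to_mean_sd(q1 = 120, median = 128, q3 = 138)
#>      mean        sd
#> 128.66667  13.33333
```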
So having that information reported can be useful if you want to backtrack and see whether the data that have been entered were imputed in some way, and whether you would have done something different. So, yeah.

I'll just make sure the next time I submit my meta-analysis, I at least have you excluded as a reviewer. But I'm here to help, Wolfgang, I'm here to help improve the manuscript.

In terms of tabulation and visualization methods: I've said no, on the basis that they haven't really specified how they structured the information, but I might backtrack on that. I'll think more about that one, about the way things were prepared. But I'll go to the next one, and I want to know if people thought the meta-analysis methods were okay. Come on, Wolfgang, I can see you desperately trying to say something there.

Well, I'm just not a great fan of this data-dependent selection of models: if there's a certain amount of I-squared, then we use a random-effects model. I'm just not a big fan of that.

Yeah, I mean, I agree, I'm not a fan of that either. But coming back to that distinction between whether it's completely reported versus whether it's done well or appropriately: the former is the thing we need to focus on with PRISMA, at least.

I totally agree, and in fact I said completely, actually, because we can debate methods and approaches and strategies, but what's crucial is the completeness of the reporting, which makes it possible even to have a discussion about differences in results depending on how you do things. So I'm not critiquing the reporting here; I said this is fine the way they did it.

Yeah, so completely reported, but not necessarily done well. And that's the case for various items. When people indicate that only one author extracted data, you wouldn't necessarily agree that that's an optimal method, but at least you know what they've done, and that enables you to assess how much trust you might place in those findings.

Now, I think we're still on time, hopefully. I didn't really see anything in this paper about methods used to explore heterogeneity, whereas they did indicate some type of sensitivity analysis, taking out each study one by one. They could have done other analyses; it's not the be-all and end-all, but at least they've indicated that, so I assessed that as complete. I wasn't so harsh on that one.

Now, in terms of reporting bias assessment, following on from the webinar we had earlier today: they've indicated what statistical methods they used. Personally, I wouldn't say they're the greatest things they could have done, and they could have used some other approaches, but I indicated that was complete. In terms of certainty assessment, this is maybe more specific to health and medical reviews, where we recommend people use something like the GRADE approach to assess how confident or certain they are in the body of evidence. That's expected more in medical reviews, and I didn't see any indication of it in this paper, nor any reference to considering factors that indicated how confident or certain they were in the evidence. So that's all of the methods items.
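Before we get to the results items: purely as an illustration of the analysis steps we've just been debating, here's a minimal metafor sketch with made-up numbers, computing mean differences, fitting both an equal-effects and a random-effects model (so that I-squared is something you report, rather than something that picks your model for you), and running a leave-one-out sensitivity analysis of the kind the authors describe. Everything here, including the data, is hypothetical.

```r
# Sketch of the analysis steps just discussed, using the metafor package.
# All numbers are made up for illustration.
library(metafor)

dat <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),
  # change in blood pressure in the wine group (mean, SD, n)
  m1i = c(-4.2, -2.1, -5.0, -1.5), sd1i = c(6.1, 5.8, 7.2, 6.5), n1i = c(30, 25, 40, 28),
  # change in blood pressure in the control group (mean, SD, n)
  m2i = c(-1.0, -0.5, -2.2,  0.3), sd2i = c(5.9, 6.0, 7.0, 6.3), n2i = c(31, 24, 38, 27)
)

# Mean difference (the effect measure the review reports) and its variance
dat <- escalc(measure = "MD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)

re <- rma(yi, vi, data = dat, method = "REML")  # random-effects model
ee <- rma(yi, vi, data = dat, method = "EE")    # equal- (common-) effects model
re$I2         # I-squared: report it, but prespecify the model rather than let it decide

leave1out(re) # leave-one-out sensitivity analysis: refit omitting each study in turn
forest(re)    # forest plot of the study results and pooled estimate
```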
In terms of the flow diagram: did we see a flow diagram in this review? Yep, there was a flow diagram indicating the flow of studies through this particular systematic review.

The next item, though: what's different about PRISMA 2020 compared to previous iterations is that now, rather than just indicating the reasons for exclusion in a flow diagram, we think it's helpful to cite some of these near-miss excluded studies, so readers can actually follow up and see whether they agree that those studies were ineligible. I didn't see any excluded studies cited in this particular review. It's not as if you have to cite every paper you excluded; just think of the ones where a reader who's knowledgeable in this area might come after you if they noted that you didn't include their study in your review, so that you can back yourself up and provide a justification.

In terms of study characteristics and risk of bias, I thought that was pretty clearly reported in table 1, and they had some traffic light plots for the risk of bias assessments as well (a sketch of producing those follows below). So that information, as a whole, was reported. Results of individual studies: I was fine with that too. You can see this information, such as the means and standard deviations and the mean differences, in the forest plots; it's hard to omit that information if you're using most meta-analysis software.

Speaking of that, in the Mori study, the chances that the standard deviation was exactly the same in the two groups are pretty slim, I would say. So maybe that's some kind of pooled standard deviation, right? Maybe there's some missing data, and some fudging or approximation is being used. Yeah, I didn't look at it carefully, but now that I'm looking at it; and even, so, Gepner as well has the same SD. So maybe this is one of those things. When I read this, this is where I thought that it's basically impossible that the standard deviations were exactly the same in the two groups. Yeah, and that goes back to that earlier item on data preparation: in this case it looks like they've done something, and there must be some explanation for it that has not been declared. It could be because, I think, they included crossover studies, maybe, I don't know. Yeah, I think that's right.

Now, results of syntheses: the characteristics of the contributing studies. I want to know what people thought of this one, item 20a. I said partially, because they didn't do this per synthesis: they did an overall risk of bias assessment, but they didn't tailor it to each specific analysis they did. Yeah, and that's essentially my conclusion as well. This is another new thing recommended in PRISMA, because a lot of people do a great job of saying, okay, across these 40 studies the average age was X and there were this many people from these countries. But then, when they report the results of a meta-analysis of, say, five of those studies, it's a bit unclear how representative the patients in those five studies are of the population to which you're trying to extrapolate the results. We're not necessarily saying you have to do it in a very cumbersome way, creating a characteristics table for every meta-analysis, but at least some indication of how the studies differ across the meta-analyses could be a way of summarizing that, so people can interpret each result appropriately. If the included studies that had data amenable to meta-analysis all tended to be very young people in Australia, then your conclusions about generalizability would be quite different compared with, say, all the other studies in the review. So that's why we want this per-synthesis reporting.
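Going back to those traffic light plots for a moment: for anyone wanting to produce them in R, my understanding is that the robvis package, also by Luke McGuinness, is what most people use. A minimal sketch, using the example RoB 2 dataset that I believe ships with the package:

```r
# Sketch: risk of bias figures with the robvis package.
# data_rob2 is an example dataset bundled with robvis (RoB 2 judgments);
# in practice you would supply your own table of per-study, per-domain judgments.
library(robvis)

rob_traffic_light(data = data_rob2, tool = "ROB2")  # per-study traffic light plot
rob_summary(data = data_rob2, tool = "ROB2")        # weighted summary bar plot
```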
Results of the meta-analyses: I thought those were fine in terms of the standard statistics presented on the forest plot, so that was okay. Assessments of heterogeneity: given there was no real mention of exploring heterogeneity, I said no for that. Whereas for sensitivity analyses, they indicated that nothing really changed once they took out each study, and they even put in a supplementary table showing the results when each one was taken out, which is helpful.

For indications of reporting bias, they gave the p-values from the Begg and Egger tests in their table 2 (a sketch of running those tests follows this section). Again, I'm not necessarily saying that's the greatest way to assess reporting bias, but at least they reported, consistently, what they said they were going to do. Whereas there's no information on the certainty of evidence.

And then, nearly done: in terms of their general interpretation, I thought that was absolutely fine. I think it's hard not to do this one, just providing your interpretation of the results. They did provide a discussion and some indication of limitations of the included studies. But what I didn't think they did very well was provide any reflection on possible limitations of the methods they themselves used. I didn't see them say anything about possible problems with their methods, such as whether they thought the search they did was sufficient, or whether they'd searched for unpublished data, for example, and things like that. The implications I thought were relatively fine. There's a lot more information in the document I've put online as well.

And then the last few items, which are more administrative information. I didn't really see anything about registration of the review, I didn't see them mention that they worked from a protocol, and, as a result, there's no indication of any changes between registration and the final review. I didn't really see anything about the funding source of the review either. I think a lot of reviews are unfunded, and so it can be the default to just assume it wasn't funded, but it's better to say up front if no funding was received. They did indicate that they had no competing interests to declare.
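As promised just above, here's what those reporting bias tests look like in metafor, reusing the hypothetical random-effects model `re` from the earlier sketch. This is purely an illustration, not the authors' actual code.

```r
# Sketch: the small-study / reporting bias checks mentioned above, in metafor,
# reusing the hypothetical model object `re` from the earlier sketch.
regtest(re)   # Egger's regression test for funnel plot asymmetry
ranktest(re)  # Begg and Mazumdar's rank correlation test
funnel(re)    # funnel plot, to eyeball asymmetry yourself

# Note: with only a handful of studies, as here, these tests have very low power.
```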
And then finally, whether they actually made reusable data files, code and other materials available: I said no in this instance. They haven't shared any files in repositories, nor any code, though whether there is code to share depends on the software they used. But this is something we added to PRISMA because we want to see a bit more of it, and I think it's especially valuable for reviews that use more complex methods and code.

So, with a few minutes left, that's essentially a whirlwind examination of this particular review. I just want to finish off by signposting a couple of other developments. As I said earlier, I think most people are only exposed to guidelines like PRISMA at the journal submission stage, which is a bit too late for them to actually make changes. And, other than getting peer reviewers to fill out the standard checklist, there aren't really tools available for them to efficiently indicate, on top of assessing the importance of the question and the rigor of the methods, whether a review has actually been completely reported; that's not easy to do. So we've just received funding from the Australian National Health and Medical Research Council to develop some web apps to help better implement these guidelines into practice: a PRISMA web app to encourage people to report their review completely at the writing stage, and a different app for peer reviewers that helps generate customized reports indicating what's missing from a manuscript. We'll also be doing some randomized trials with authors to see if the apps we create are actually more beneficial than standard practice.

Another thing we want to work on: some of you might be aware there are now many extensions to PRISMA that focus on particular components of a review or particular areas. The slight problem with that is that sometimes people need to consult multiple PRISMA guidelines for a single review. If you're doing, say, a systematic review with network meta-analysis of individual patient data on the harms of, say, COVID-19 vaccines, then you effectively have to look at PRISMA 2020, PRISMA-IPD, PRISMA-NMA and the PRISMA harms extension. So we want to try to harmonize the guidelines and, in these online systems, create more customizable checklists that pick out only the most relevant, unique items across all of these guidelines. That's what we'll be embarking on very soon and carrying on over the next few years.

But just to summarize, the take-home message from this is that in PRISMA 2020 we have a reporting guideline that reflects advances over the last decade or so in methods to identify, select, appraise and synthesize studies. We have Shiny apps for people to use to complete their checklists and flow diagrams. Information on the latest versions of all these checklists is available on the PRISMA statement website, and you can always contact me on Twitter or by email if you have any questions about the guidelines. But I'll stop there, if there are any questions or any feedback on this session. As I said, I'll post both these slides and an answer sheet to the Open Science Framework, so you can take a closer look if you're interested in doing so.
Coming back to my question from earlier: do you know of any studies where multiple people filled out these sorts of checklists and their results were compared, to see how consistent the ratings are?

I'm not aware of that. I mean, there probably are some studies. There are definitely no studies on PRISMA 2020 yet, because it's new, but for the older ones there might have been some studies looking at inter-rater agreement; I don't know. It's funny, I can't think of any that come straight to mind. I know there are quite a lot of studies that have evaluated inter-rater agreement for risk of bias tools, so I could easily point to those. But I'm not sure about reporting quality: when people evaluate it, they tend to just expect that there'll be some disagreements and that they'll need to resolve them, and they'll just report that, rather than giving you, say, a kappa statistic for the preliminary agreement between the two raters. But yeah, I'm not aware of any, sorry.

I think this would make for an interesting project, right? Just to see which items people can rate consistently versus others.

Yeah, definitely. That would help give us feedback on how we've put the items together and what wording works. Any other questions? No? Well, in that case, I think I might close the session a minute before it's due to finish. So thanks, everyone, for staying. I will say, I was stunned that the last time I did a workshop like this, as soon as I said the words "we're going to do a small practical exercise", nearly everyone left. So I'm glad you stayed online, and that some of you contributed to the discussion; that's been really helpful. So thanks. I'll let you wrap up, Matt.

Yeah, just to finally say thank you to everyone for attending, and thank you so much to Matt for that wonderful session. I've learned loads, so it's been really useful to me. Thank you so much.