and we are now live on YouTube. Hello everyone. So, welcome to ESMARConf 2023 and this panel discussion seven, on the role of rapid reviews in our evidence synthesis ecosystems. This workshop is being live streamed to YouTube. Welcome to all of you. If you have any questions for our presenters, you can ask them via the ESMARConf Twitter account by commenting on the tweet about this workshop. You can also ask questions via the live YouTube stream, or you can comment and chat with other participants on the dedicated Slack channel, which you've been sent the link to with your registration information. We'll try to answer all your questions live, but it might take us some time to get through them all. Finally, we'd like to draw your attention to our code of conduct, which is available on the ESMARConf website. I've been moderating a lot of these discussions this week, but my name is Matt Grainger, a researcher at the Norwegian Institute for Nature Research, NINA, in Trondheim, in Norway. I'm gonna ask the panel the first question and get them to introduce themselves. So the question is going to be: what is it that we mean when we talk about rapid reviews? How do we define that? So please introduce yourself and then tell us the answer to that question. Argie, do you wanna take us away? Absolutely. So hello, Matt. Hi, everyone. I'm Argie Veroniki. I'm a scientist at St. Michael's Hospital, based in Toronto, in Canada. I'm also an assistant professor at the University of Toronto. And it's a pleasure to be here today and discuss rapid reviews, which are really a hot topic nowadays, I would say. So what we mean when we say rapid review is actually a knowledge synthesis product. It's a systematic review, let's say, where some steps are omitted or modified, we could say, in order to produce evidence for decision makers in a more resource- and time-efficient manner, if I can say that. And really, since the start of the COVID-19 pandemic, we have seen an increase in the publication of rapid reviews. And that's mainly because of time; we always need more time-efficient products. A systematic review might take, let's say, one to two years, 12 to 24 months, to conduct, whereas a rapid review, using streamlined processes, can be completed within five to 12 weeks. And another important item here is that we can reduce the cost of conducting such types of reviews; in Canadian dollars, this would be around 25,000 Canadian dollars. So it's really also cost-effective. So that's what I would say a rapid review is. Thank you. Thank you. Matt, what do you think a rapid review is? But do introduce yourself first. Hi, I'm Matt Jones. I'm a post-doc at Exeter. I work more on systematic review and meta-analysis stuff, and I've done a couple of rapid review type projects with the Environment Agency in the UK. I guess the other reason I'm on this panel is that I think it's hard to define what a rapid review is. As Argie said, it sort of leaves out certain parts of the traditional systematic review process in order to make it much more feasible, often when working with policymakers or on time-sensitive issues. But I guess it's sometimes unclear where to draw the line. Thank you. Hi folks, I'm Gavin Stewart. I'm a scientist at Newcastle University. So I think rapid review is actually a bit of an unhelpful term, if I'm honest.
Because it means everything, so it means nothing. A rapid review could be a systematic review that adheres to Cochrane methodological expectations or Campbell methodological expectations, on a narrowly focused question, that's done really, really well, with lots of people who really know what they're doing, really, really quickly. That's a rapid review. Or it could be something in another domain that doesn't have any duplicate extraction and doesn't have a critical appraisal. So someone in medicine would say, oh, that was a rapid review, you took lots of dodgy shortcuts, and someone in ecology would turn around and say, what do you mean? That's what I call a systematic review. So it's an unhelpful term, and then it gets even worse because people start talking about ultra-rapid reviews and all of this kind of stuff. But the basic idea is that it's a review that's done with limited resources or in a tight timeframe. And if you're gonna do one, that's great. I'm not denying the potential advantages of that kind of approach, you know, trying to be cost-effective, but you've got to make some value judgments about which things you're not gonna do, and that carries a risk. If you make good value judgments there, it's fine. I've done loads of reviews where I've done very little searching in non-English languages. I've done reviews where, if I'd had an information scientist on the team, they'd say the search was cursory, but I've got nearly all the studies and, let's face it, usually all the studies are crap anyway. So does it matter if you miss a couple of studies? Sometimes it doesn't. So sometimes you can make these value judgments and they're justified. Sometimes you make a value judgment like you're not gonna have a critical appraisal. That's a big problem for me, even though I see lots of things that are called systematic reviews that just don't do it. So yeah, that's kind of where I am with rapid reviews. I think everyone sort of agrees that something's done rapidly and there are some corners cut, I suppose. But yeah, I agree with Gav really that the sort of stuff that we do, well, that I do every day, is a rapid review, just because I'm an ecologist and we don't bother with a lot of the stuff we should be doing. It's important. We're gonna talk about how R can be useful within rapid reviews. So let's try and think: how is R used for this at the moment, but also how could it be used, and what sort of areas do you think R might help in speeding up some of the processes that we're talking about? So Matt, what do you think about that? Sorry. Um, yeah, I guess there are packages to help design search terms, for example, so the litsearchr package can be useful for speeding up that process. I guess one of the things about these rapid reviews is that I think they often occur in contexts where you might not have a lot of people with R or other coding skills. So I haven't typically found, in those projects that I did, that we used a lot of them.
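Picking up Matt's litsearchr mention: a minimal sketch of how the package can suggest search terms from a few known relevant abstracts. The abstracts and the term groupings are invented, and the function and argument names (extract_terms, write_search) are given from memory of the package, so exact signatures may differ between versions; treat this as a sketch to check against the litsearchr documentation rather than a definitive recipe.

```r
# Sketch: suggesting Boolean search terms with litsearchr (toy data;
# argument names may vary between package versions).
library(litsearchr)

abstracts <- c(
  "Tree planting reduced peak flow and flood risk in upland catchments.",
  "Woodland creation altered runoff and flood frequency downstream."
)

# Pull candidate keyword phrases out of the text
terms <- extract_terms(text = abstracts, method = "fakerake",
                       min_freq = 1, min_n = 2)
terms

# Group terms into concepts by hand, then write a Boolean search string
groups <- list(
  intervention = c("tree planting", "woodland creation", "afforestation"),
  outcome      = c("flood risk", "flood frequency", "peak flow", "runoff")
)
write_search(groups, languages = "English", exactphrase = TRUE,
             stemming = TRUE, closure = "left", writesearch = FALSE)
```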
Yeah, Argie? Yeah, that's a great question. As Matt and Gavin said, in a rapid review our number one priority is to decrease time. So I think R would play a key role here, and maybe we can introduce machine learning and AI systems to speed up the processes, potentially for screening. Screening and data abstraction are actually the more time-consuming steps in a systematic review. So I believe if we had these tools to reduce the conduct of screening and data abstraction from months to potentially days, that would be a great success. And even for risk of bias, currently I know there is an automated tool, the RobotReviewer tool, that assesses certain items of risk of bias in RCTs, and I believe this tool has high accuracy. What this tool mainly does is that the user uploads the PDF of the RCT, and the tool then pulls text from the PDF to derive the bias assessment, with some supporting quotes from the manuscript. So this, I believe, is a very helpful tool to reduce those steps, I mean, to reduce time in the conduct of the review. But certainly we could also potentially use R to appraise the review itself, the systematic or rapid review, whatever we want to call it, using, for example, the AMSTAR or ROBIS tools; for those, to my knowledge at least, there is still no automated approach, although researchers have noted that an automated strategy to assess the quality of those reviews would be valuable. So I believe the key question is how we can work together to improve the flow of data from the trials, to the systematic review, to the visualization of the data, and in the end provide the right evidence, high-quality evidence, to produce guidelines for best practice. So I believe R would play a key role in the whole process of a rapid review. Yeah, I think that as well, but one of the big issues that I know we have in ecology and conservation is this lack of standardization within publications. I know in some fields they have very, very good standards: there's always the effect size somewhere in a paper, stated as, this is the effect size. Whereas in ecology and conservation we often just write stuff and ignore all of that. So we're probably at the earliest stage, would you agree, Gav, that we're at the earliest stage? Yeah, I think the advantages of AI on the screening and appraisal side of things are there now in disciplines where, as you say, you've got reporting guidelines like CONSORT and you're talking about RCTs, where the reporting is done in a fairly standardized way, and where you're doing a big review, because then it's worth training your AI to do all of that if it's big and complicated, and that's gonna speed things up. If you're doing a rapid review by focusing and having a narrowly focused question, at the moment I'm not sure that R actually has a great role, but I think it has a potential role in the future that's massive, which is what's just been outlined: the ability to screen is there, the ability to think about appraisal is there. You could even think about strength of evidence as well, because you could pull the bits and pieces that you need out of your risk of bias tables and your meta-analysis. All of those things could be semi-automated, and there are elements of that in things like RevMan and some of the kind of bits and pieces that I've done. So I think all the elements are there, and that's definitely the way the ecosystem is going.
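A minimal, hypothetical sketch of the priority-screening idea Argie describes above: learn from the records already screened and rank the rest by predicted probability of inclusion. Everything here (the tiny records data frame, its columns, the crude bag-of-words features) is invented for illustration, and dedicated screening tools do far more; this just shows the shape of the idea in a few lines of R with glmnet.

```r
# Hypothetical sketch of priority screening: rank the unscreened records by
# predicted probability of inclusion, learned from records screened so far.
library(glmnet)

records <- data.frame(
  text = c("tree planting reduces flood peaks in upland catchments",
           "hospital admissions for asthma in urban children",
           "woodland creation alters catchment runoff and flooding",
           "screen time and adolescent sleep quality",
           "afforestation effects on downstream flood frequency",
           "riparian buffer strips and nutrient runoff"),
  include = c(1, 0, 1, 0, NA, NA),   # NA = not yet screened
  stringsAsFactors = FALSE
)

# Very crude bag-of-words matrix (real tools use proper text processing)
vocab <- unique(unlist(strsplit(records$text, "\\s+")))
dtm <- t(vapply(strsplit(records$text, "\\s+"),
                function(w) as.numeric(vocab %in% w),
                numeric(length(vocab))))
colnames(dtm) <- vocab

screened <- !is.na(records$include)
fit <- glmnet(dtm[screened, ], records$include[screened], family = "binomial")

# Rank the unscreened records, most likely includes first
p <- predict(fit, dtm[!screened, , drop = FALSE], s = 0.01, type = "response")
records$text[!screened][order(p, decreasing = TRUE)]
```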
If you look at the diversity of the products that people are developing and exploring, that are being presented this week at this conference, if you start bolting all these things together in an intelligent way, then I think that potential for semi-automation to really speed things up is there. It's worth talking about the ChatGPT side of it as well, and the whole AI side of it, because I think we're a million miles off being able to press a button and have a computer do a systematic review for you. And would you want to, I guess? And the big problem with that is, you know, these discrepancies between what the words say and what the data say, and what the data in the table say and what the data in the figure say, all the stuff that we're all so familiar with as systematic reviewers. And I think that's something that needs articulating, because people kind of think evidence synthesis can be automated. I definitely think of it as semi-automated. It's about speeding up the process, doing the same thing the same way, but you're still always going to need a human to look at this stuff. And the last thing I'll say before I shut up and give others a chance is that the other thing that's going to happen as a result of this is fabrication. Okay, so it already depends on your discipline how much of an issue fabrication of data is, but it's going to be incredibly easy to make data up. And not only to make data up, but to make data up that is internally consistent, so it looks as if it's properly randomized, even if it isn't, even if it's completely made-up data. So you won't be able to look for baseline imbalances, because people will just say, make me up a realistic data set. And so our mechanisms for checking for fabrication are going to need to change radically, I think, over the next ten years or so. So those two things will go hand in hand, I think. We will start sticking these elements together to get decent automated systems for thinking about evidence synthesis, but they'll have to evolve to meet some of these other challenges that at the moment aren't really a problem. Yeah, if I could just come in, I don't think the major issues facing rapid reviews are really to do with either of those, I think. From my experience, what I've faced is just basic stuff about how to design your review, asking too many really broad questions, which especially matters when working with policymakers. And often R can get in the way: if you're working with a team who doesn't use it as their go-to software or doesn't have any coding experience, it can actually slow things down, which is why sometimes I use non-R tools with more of a graphical user interface, for screening for example, just web-based things. And as much as I like Shiny, I guess sometimes the performance isn't there; it just doesn't compare to other web-based tools. And if I can also add here, I certainly agree with all that's been said so far. I believe that ChatGPT and all these AI tools, all these technologies, should be used and applied with human oversight, right, and control. So we should not skip the key researcher tasks, if I can say that; we should use those tools to help us, but the researcher should always be responsible for the interpretation of the results and for drawing the conclusions. We shouldn't rely entirely on those technologies.
And when we use them, we always need to be transparent and disclose this in the manuscripts and/or in the reports that we produce. So I certainly agree with all that has been stated. Yeah. I was going to say, one bit that I think it probably could help with sooner rather than later is the kind of multiplicity issues, which I'm not sure how much have been talked about this year, you know. But you know that problem where you've got 56 treatments and you've got five different populations and you've got high and low risk of bias and your outcomes have been measured in two different ways, and suddenly, before you know it, you've got 5,856 analyses to run. And people talk about trying to do the model averaging or the Bayesian model averaging across all of that, and all of those kinds of things. And I can see how, as part of a kind of living review type idea, some of these bits of software, like threshold analysis or network meta-analysis that lets you look across all of that in one go, or the way that you could loop through all of those analyses in R, would let you do something really, really complicated that could then feed into a decision model of some kind. I've never seen it done, but the potential to do that, I think, is there. But I'm not sure I'd call that a rapid review; that's kind of doing the next mad, mega-complicated review that isn't feasible at the moment but might become feasible. Yeah, I mean, there's sort of a distinction between rapid reviews and speeding up systematic reviews, in a way. Yeah. And does one undermine the other? Yeah. It's almost a way of pulling together multiple systematic reviews to actually answer a question. You know, even when you've got a big systematic review looking at multiple outcomes and all the rest of it, it's usually only one element of the whole system, isn't it? But you could potentially start sticking everything together, and I think R is very exciting in that way. Absolutely. And if I can also add here: as you said, we may also have multiple systematic reviews addressing the same specific topic. Currently in our team, Dr. Lunny is leading a project on identifying the most valid systematic review that addresses a specific topic, the one that is most rigorous and of high quality; she calls this tool WISEST. And she plans to use R and potentially AI tools to try to make this selection of the highest-quality systematic review more efficient. So that would be, I believe, very, very important, to be able to identify the most valid systematic review for a specific topic on which we may want to make a decision. Just on the ChatGPT type of things, I've got to say that I know Kate is drinking every time we mention ChatGPT at the conference, so I'm going to mention it a few more times just to keep the drinking game going. But I think I agree that we're too far away from that really, and we can't really trust it yet to be transparent enough. I think with systematic reviewing, the priority, the whole process, is about trying to be as rigorous and transparent as possible, and relying on a black box is really going to throw a spanner in the works. I know we rely on black boxes a lot, but that's one of the problems.
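Going back to Gavin's multiplicity point a moment ago, here is a minimal sketch of what "looping through all of those analyses in R" can look like once the effect sizes are in one tidy table: one random-effects model per outcome-by-population combination, using metafor. The data frame, its column names and the groupings are invented for illustration.

```r
# Sketch: run one random-effects meta-analysis per outcome x population
# combination from a single tidy effect-size table (invented example data).
library(metafor)
set.seed(42)

dat <- data.frame(
  yi         = rnorm(40, 0.3, 0.2),            # effect sizes
  vi         = runif(40, 0.01, 0.05),          # sampling variances
  outcome    = rep(c("peak_flow", "flood_freq"), each = 20),
  population = rep(c("upland", "lowland"), times = 20)
)

combos <- split(dat, list(dat$outcome, dat$population))

results <- do.call(rbind, lapply(names(combos), function(nm) {
  m <- rma(yi, vi, data = combos[[nm]], method = "REML")
  data.frame(analysis = nm, k = m$k, est = round(as.numeric(m$b), 3),
             ci.lb = round(m$ci.lb, 3), ci.ub = round(m$ci.ub, 3),
             I2 = round(m$I2, 1))
}))
results
```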
I had a really interesting question or comment from Trevor, Trevor Riley. He suggests that maybe we're closer to realising a rapid systematic map than we are a rapid systematic review using R tools. So what do you think about that? Yeah, absolutely, I'd agree. If you did a systematic map and you sat all that stuff in Shiny, then it is like a living systematic map, isn't it? And you could even have the ability to change the scope and all the rest of it. In fact, do you not already have a tool a little bit like that? Did Neal not develop something a little bit like that for displaying systematic map data? That's a lot easier, isn't it, than generating effect sizes and appraising effect sizes and synthesising evidence. So I think, yeah, you're much closer with a systematic map. Yeah. But who wants a systematic map? Any other comments, Matt? Yeah, I guess, you know, thinking about R generally, one of the criticisms of the availability of so many statistical packages, from the old school, is that people don't actually sit down and try to understand what's going on underneath the bonnet, kind of thing. So I guess there's that to bear in mind. I don't know if anyone's used ChatGPT to design search terms; I'm just getting in on the game here. But I do think that would create an interesting relationship with your information specialist, who is useful in a lot of other senses as well, and is a big collaborator on projects for me. I think generally with systematic review you have to think about the implications of automation more broadly as well. I think one of the things we have talked about in the past, at the ES Hackathon and at this conference, is this sort of chaining of tools together. So starting off with litsearchr, chaining that to something else, and ending up in EviAtlas, which is the go-to tool for systematic mapping studies; just trying to chain things together into workflows that are transparent and repeatable, using GitHub to store the information and to share it in certain ways. So you can end up with that sort of living systematic review, or living systematic map, type of approach. I don't know if it's been stated anywhere, but I think it's that sort of approach we're thinking in terms of. Yeah, and it ties in really nicely with all of the open science practices as well, in terms of repeatability and transparency. If someone wants to use it in a slightly different context, they can take it off and change the population, or someone wants to update it, well, you can. So yeah, going back to what Matt was saying about folk criticising the black-box end of this kind of thing: to some extent it might be a valid criticism, but that criticism has always been there. When the DerSimonian and Laird random-effects meta-analysis method was put into an Excel spreadsheet, people said, oh my gosh, this is the end of the world, the barbarians are at the gates, anyone can do a meta-analysis now. And, you know, I think when somebody invented the abacus, they probably said that too. Yeah, I'm being a bit of a machine breaker.
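On the "sit a systematic map in Shiny" idea above: a minimal, hypothetical sketch of the shape of such an app, with an invented evidence table and a single filter. Tools like EviAtlas mentioned here do far more (maps, heat maps, proper data handling); this just illustrates how little Shiny code the basic idea needs.

```r
# Hypothetical sketch: a systematic map as a tiny Shiny app, with an
# invented evidence table and one filter on intervention type.
library(shiny)

map_data <- data.frame(
  study        = paste("Study", 1:6),
  intervention = c("tree planting", "tree planting", "wetland creation",
                   "wetland creation", "buffer strip", "buffer strip"),
  outcome      = c("peak flow", "flood frequency", "peak flow",
                   "water quality", "runoff", "peak flow")
)

ui <- fluidPage(
  selectInput("int", "Intervention",
              choices = unique(map_data$intervention)),
  tableOutput("studies")
)

server <- function(input, output, session) {
  output$studies <- renderTable(
    map_data[map_data$intervention == input$int, ]
  )
}

shinyApp(ui, server)   # updating map_data updates the 'living' map
```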
Yeah, I wasn't quite saying that; I was more meaning, I guess, that as these tools become more common, the technology itself is obviously really useful, but if it doesn't come with really detailed tutorials, which cover the stats side of things as well, which fortunately we do have, it can lead to a situation in which people don't know what they're doing. Including myself, yeah. If you provided a beautiful tool that semi-automated it and did everything that we're saying (hey, we're not there yet, but maybe we will be in 10 years), and someone naively used it and pressed a button on it, well, it's very tick-boxy, isn't it? It might be very clever and tick-boxy, but it's still tick-boxy by definition. And sometimes you might want it to tick different boxes, and you need to have that element of human judgment in there to know that. So yeah, it is a risk, I guess, that people will do that. But misuse of the technology, lack of understanding and not thinking, they're always challenges and problems. I think fundamentally the big problem with evidence synthesis is that it's hard and you've got to think. You know, this stuff about letting the data speak for themselves? Well, no, you can't; they need interpreting. So it's hard getting away from that. No, I certainly agree. But I think it would be very helpful if we had such a tool, because you mentioned living systematic reviews, where we would need to conduct those literature searches again and again. So if we had a tool that actually did those searches automatically, even if we just gave the tool the PICO criteria, the patient, intervention, comparator and outcomes, and it then developed the search; and then the same search, of course once a librarian had said it was appropriate, would automatically be translated for the other databases. I think that would be a huge help as well. But certainly I agree there are risks, and we always need to make sure that whatever is produced is of high quality and addresses the question that we want to answer. Yeah, on living systematic reviews, there was an interesting blog post a few months ago by Hilda Bastian on what happens when living systematic reviews stop. That's sort of a good one, yeah. The ideal of living systematic reviews is great, but I'm not sure I've seen it working really well in practice, let alone rapidly. Yeah, I think that's right, Matt. And I think the big issue with rapid reviews is, why is it rapid? Why do you need it to be rapid? And it's that thing about, well, there's a policy deadline usually, isn't there? Either it's some kind of project and it's resource-based, so it's resource limitations, in which case what you're saying is, I can get an approximate answer to this question, but it's not going to be as reliable as it should be because I don't have time to do it reliably. If it's not an important question, that's okay. If it's an important question, that's clearly not a great approach, especially thinking about research waste. If it's not being driven by that lack of resources, then it's being driven by, we need to know the answer to this question for some policy deadline: we're updating the guidelines next year or next week, or there's a crisis, or whatever it is, you've got some kind of immediacy to the problem.
And then it is about which bits we can miss, and most of that is about question setting. You might be ditching bits like doing things in duplicate and all the rest of it, but the big saving in systematic review is the question setting. And the bits of systematic review that go wrong most often, where you get the futile reviews, are just where you've got the question wrong and you've not got it right. So I'm thinking, Matt, about our little rapid review that we did in a couple of days with the PhD students, that looked at flooding and trees. On the one hand, that was a lovely little systematic review, but we picked one outcome that was very easy to measure, and not at all what you're interested in when you're looking at flood risk. So it's very nice to produce a little meta-analysis and say, well, hang on a minute, slow down a little bit, the evidence isn't necessarily as equivocal as everybody says it is, and I'm not saying we were wrong to make that point. But it is just looking at one outcome, and probably not the outcome that you're really interested in if you go and ask a load of hydrologists. And so I think that's the big risk with doing a rapid review. The best way to do a rapid review is to narrow down a little bit, so the elements are going to be very tightly defined; that's how you can do it quickly, and that carries an inherent risk, because if you're looking at the wrong elements, it's not useful. And I don't think automation helps with that. I think that's just those value judgments about what the question is that we're interested in. I'm sure we've all sat in the room with the people commissioning the review, banging our heads against a wall, because they don't actually understand the implications of the decisions that they're making in those kinds of meetings. Absolutely. And if I can also mention here, I believe that if we conduct a rapid review, that could potentially be the basis for a systematic review, and I think that's the best way to do it, with the full process; and then I believe that the ultimate goal should always be to conduct a systematic review in the end, a high-quality systematic review. I hadn't thought about that. That's a really nice idea: you could do it as a rapid review, and then you could think about your strength of evidence and go, if turning it from a rapid review into a systematic review is going to change it, then let's do it, and if it isn't going to change it, then let's not bother, leave it as a rapid review and do something else. Then we're going back to the idea of a systematic map, essentially. It's an intermediate step, isn't it? You do systematic map, rapid review, full review. You'd have to update your protocol 50 million times, but that's all right. I'm thinking about what R has done for us. I think a lot of it is maybe just the small steps that R really does speed up. So things like data visualization, some of the apps that Neal and I have been involved with, and Matt, sort of trying to speed up citation chasing, making processes easier; not necessarily speeding up the actual review part, but the other auxiliary parts that go along with it. Matt, with your converter to get your data in the right format from RevMan, or whatever it was. Oh yeah, Rayyan, the Rayyan screening platform. Yeah, I mean, so we made a sort of primitive package for that; you helped me package it up.
Yeah, we had this sort of non-R-based platform that I think works really well for screening, but it outputs the data in a format that isn't intuitive to process. So he just wrote some pretty basic cleaning functions on that to get it into a better format, which we should probably publish somewhere because it might be helpful, before Rayyan do it themselves, because I know they've just had a big investment. Yeah, and the synthesis packages as well, it's worth saying what a massive difference they make. I mean, if you look at metafor now, a few lines of code and bang, you're there, aren't you? You're fitting really nice, beautiful models, running every different test for publication bias that exists. You know, something gets published in Statistics in Medicine and Wolfgang's got it in there six weeks later. It's incredible. Or if you take David Phillippo's stuff with multinma, you're able to do a network meta-analysis that combines individual patient data with aggregate-level data across a network of 15 different treatments. That would have been years of work if you were doing it without that software, as traditional systematic reviews, doing each of those reviews and all of those analyses and then pulling it all together. You only need to go back a few years before, well, a big NMA just by itself was a lifetime's work. So it is speeding things up, there's no doubt about it. I think Argie wants to come in on that. No, that's okay, you go ahead, Matt. Yeah. But again, it's about this distinction between just speeding up systematic reviews, which I sort of agree with, well, I definitely agree with, and rapid reviews, which, done in the wrong way, arguably sometimes undermine the whole idea of systematic review. Yeah. No, I agree. And I was also thinking about all these R Shiny tools that exist for meta-analysis and network meta-analysis, MetaInsight, and CINeMA to assess the confidence in network meta-analysis results. All these have really sped up the process and saved us a lot of time, certainly. We also try to present the results for multiple outcomes, for the ranking of the different interventions, in a ranking plot, just to facilitate the interpretation and decision making around that. So I highly agree, all these tools are very important. But I think we're not there yet; there are still a lot of tools, I believe, that should be produced. For example, in network meta-analysis it would be nice to have a tool where, once the user uploads the data, it automatically assesses the prerequisite assumptions. I know we can do this in netmeta in R using some code, but people are not always familiar with all this code, or with R, so I think an R Shiny tool that could actually help with those steps, in the analysis but also in the visualization of the results, would be very, very helpful.
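To make Gavin's "a few lines of code and bang" concrete, here is the classic minimal metafor workflow: compute effect sizes, fit a random-effects model, draw a forest plot and run a regression test for funnel-plot asymmetry, using the BCG vaccine dataset that ships with the package.

```r
# The "few lines of code" version of a meta-analysis in metafor,
# using the BCG vaccine dataset bundled with the package.
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)                    # log risk ratios + variances
res <- rma(yi, vi, data = dat, method = "REML")  # random-effects model
summary(res)

forest(res)    # forest plot
regtest(res)   # Egger-type regression test for funnel-plot asymmetry
```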
Yeah, I was just thinking, if you were going to give me a magic bit of software that would mean I could do a systematic review faster, what would it do? And I think there are two things you'd look for, and I'm not sure that AI can help with either of them. The first thing is it would make reviewers adhere to methodological expectations and use their brains. And the second thing is it would make people doing the primary science do decent studies and report them in the right way. They're the two things that slow you down, aren't they, and they're going to be the two things that are problematic for AI to sort out. Because the more straightforward those things are, the easier it is to do a systematic review by rote, and the easier you can do a systematic review by rote, the closer it is to a rapid review. I think that's the nub of the problem there. So maybe some of this open science stuff and improving standards, maybe that's a prerequisite for having automated rapid review. So, thinking about ecology, Matt, if ecologists did have core common outcomes and did report things in a standardized way, then straight away you'd be into RobotReviewer territory for doing critical appraisal. At the moment you just can't do that because the diversity is just too high. So it might be that this develops in different fields in different ways, but there are precursors to really making rapid review and automation of reviews a reality. I think one of the perhaps simpler but most useful things it could do: I'm working on a project where we work with an information specialist to design searches, good searches, and run those, get the articles, and do the systematic review and meta-analysis. But then you have this thing where you get to the end and you have to publish it soon, but you have to make sure your searches are up to date to within six months, and there's this constant pressure in the run-up to the deadline. So if it could, and I don't know if R can do this, update those searches as the librarian designed them, that's an obvious thing for me. Yeah, even if it just said, you know, here's a little tick box, here are two new studies that it has read that might be relevant, and you have to decide whether they're relevant, that would be really helpful, wouldn't it? Alternatively, you just tell reviewer two to shut up, and the strength of evidence is terrible so it won't make any difference anyway. Well, I mean, we have a conference full of people who code in R, and the same conference full of people who do systematic reviewing, so the whole idea of this conference is really to bring people together. And I liked Argie talking about what one of her ideas would be for a package or a Shiny app that would actually improve things for her. So I think it's good to have these ideas, because they can easily become future hackathon ideas and we can start to think about how to do that. The type of stuff that people have done in the past, and are doing now, is to come together with experts in systematic reviewing and information retrieval and R coders and make a change, so I think it's really important to be thinking in those terms.
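On the search-updating wish Matt describes above: for PubMed at least this can already be scripted. A hedged sketch using the rentrez package, where the query string and dates are placeholders and the extra date arguments are passed through to the E-utilities API, so the exact behaviour should be checked against the rentrez documentation; other databases would each need their own interface.

```r
# Sketch: re-run a saved PubMed search, restricted to records added since
# the last search date (placeholder query and dates; PubMed only).
library(rentrez)

query    <- "(tree planting[tiab] OR afforestation[tiab]) AND flood*[tiab]"
last_run <- "2022/09/01"
today    <- format(Sys.Date(), "%Y/%m/%d")

new_hits <- entrez_search(db = "pubmed", term = query, retmax = 500,
                          mindate = last_run, maxdate = today,
                          datetype = "edat")   # edat = Entrez entry date
length(new_hits$ids)   # how many new records there are to screen

# Pull brief details of a few new records to pass on for screening
if (length(new_hits$ids) > 0) {
  summ <- entrez_summary(db = "pubmed", id = head(new_hits$ids, 5))
  extract_from_esummary(summ, "title")
}
```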
In the reporting at the end of all this process, we need to be very careful and transparent about what we use. As we know, we have the PRISMA guidelines, but for rapid reviews we don't yet have a complete reporting guideline. I'm happy to say that we've submitted a grant to develop those PRISMA rapid review guidelines with colleagues like Dr. Stevens, Dr. Moher, Dr. Garritty and Dr. Tricco; we have so many people on the grant. And the aim of this grant is, of course, to minimize research waste. But what I would like to point out here is that, once those guidelines are available, if we again had a tool to help with the reporting in the manuscripts, I think that would again be very helpful. We have a lot to do; I'm sure we will in the near future, but it's good to have all these ideas at this meeting. I reckon that's a bit like Matt's searching bit at the beginning, that bit at the end. I think that's probably more achievable, isn't it, in the near term, in the short term, even if it just produced text that went through MECIR and PRISMA and said, where have you got this bit? It knows what the effect size is, because you've run the forest plot, so it pulls that out; it knows what the heterogeneity is, so it pulls that out and says, don't forget to report the heterogeneity, is this the heterogeneity you want to report? So you could semi-structure results sections and things like that; the potential for that is high. And that's one of the things that Wolfgang was working on in the first ES Hackathon we did, I think in Stockholm: automatic reporting of the things you should report from a meta-analysis but people never do. Those sorts of tools are quite achievable. Yeah, and like you're saying, the work's kind of been done there. Most communities have got: this is what you should report from a meta-analysis, these are the things you need to report. You've certainly got it for the social sciences, you've got it for medicine with Cochrane and Campbell, and you've probably got it for CEE, have you? Pretty much, ish, yeah. So that's a lot of disciplines and domains sorted. So those bits are there. That's a grant, isn't it? So, do you think that we can change the definition of rapid review with the addition of these R tools or workflows that we're talking about? Is there a time in the future when we can, yeah, change rapid review so that instead of rapid reviews, reviews just become rapid: still robust, but rapid through using the tools that we've developed? Yeah, definitely. Yeah, I mean, we're already in that process, aren't we? If you think about the scale and sophistication of modern systematic reviews, and think back to, you know, Chalmers or somebody like that sitting there, or let's go back to Glass sitting there with his psychotherapy meta-analysis: how long did it take him to do that, whereas now you would do that same job much faster, and the volume of literature on that topic could be horrendous, but if you were doing like for like, you'd do it so much faster. The knowledge is there, the tools are there, and they're not going to get worse, so we move forward. So yeah, I think so. Yeah, it's interesting, because what's happening to the volume of primary research, and is the primary research improving in evidential value? There's the volume of this kind of meta mass-production era, lots of really crap systematic reviews and evidence synthesis, and we want to do away with that, but arguably we need evidence synthesis more than ever in a post-truth world. So doing it efficiently and well, and making it fit for purpose so it can answer more complex questions more directly: all of those things are kind of in the mix for us, aren't they? But I think, if you look at the history of systematic review, that's what it is: a history of getting better and faster.
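Coming back to the semi-structured reporting idea above (pull the effect size and heterogeneity straight out of the fitted model, so the text can never disagree with the forest plot): a minimal sketch against a metafor model object, refitting the BCG example from earlier. The report_rma helper is invented for illustration; the real reporting work mentioned here goes much further.

```r
# Sketch: draft a results sentence directly from a fitted metafor model so
# the reported numbers always match the analysis (BCG example again).
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat, method = "REML")

report_rma <- function(m, label = "pooled effect") {   # invented helper
  sprintf("%s studies were pooled; the %s was %.2f (95%% CI %.2f to %.2f), with I2 = %.0f%% and tau-squared = %.3f.",
          m$k, label, as.numeric(m$b), m$ci.lb, m$ci.ub, m$I2, m$tau2)
}

report_rma(res, label = "pooled log risk ratio")
```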
I certainly agree. I'm only wondering whether we need to change the term, so that we call it a rapid rapid review or a rapid systematic review. So far, at least to my knowledge, a rapid review is the review with those streamlined processes, so we should certainly not confuse the two types. Certainly we need more time-efficient systematic reviews, and I think we should still call them systematic reviews; if we use those tools, hopefully they will help us. Of course, we need to highlight any risks that we have, as we said. But yeah, I'm not sure if we should change the title; this is certainly something we need to discuss. Yeah, I mean, I would completely agree. As people working in systematic review, we know the dangers of a proliferation of terms for the same thing. I kind of feel the focus should just be on speeding up systematic review, and the creation of this new, ill-defined rapid review kind of undermines that. And it reinforces criticisms of systematic reviews, such as that it's a made-up field and they used to just call these literature reviews, which I've heard, you know, recently, in the past few months. And people are trying to squeeze systematic reviews into six months, like, they're not even calling them rapid reviews at this point. But I guess that's the corollary of it, isn't it? I guess rapid review makes it seem a bit more palatable, even if it's not achievable. Yeah, it's buyer beware, isn't it? It can be called a systematic review and it may or may not be one, and it can be called a rapid review and it may or may not be one. And, you know, one person's rapid review is another person's systematic review. Maybe the tools and the AI will help us actually define what on earth it is and what it did and what the potential biases are, which I guess goes back to what Argie was saying about, you know, is it AMSTAR compliant and that kind of thing. Yeah. Yeah, I mean, I sense that one of the things holding back the speeding up of systematic reviews, perhaps, not that they aren't being sped up, is that there's a conversation to be had around what compromises can be made in searching, for example, or in screening. Like, have we really had that conversation? I've probably just missed it. I think there's a tension between the whole idea of it being systematic, and following the protocol, and speeding up. Yeah, and we need the meta-epidemiology to guide that, don't we? And in some fields that's there, and the open science movement is great for that. I mean, go back five years: how many papers were there about p-hacking and HARKing and what's the average effect size in a field? There were hardly any. I think in ecology Michael Jennions had a couple of little stabs at that, and there was a little bit of stuff from Doug Altman in medicine and what have you, but it wasn't widespread. So that idea of meta-epidemiology, and meta-science more generally, not meaning meta-analysis but understanding the science itself, that will help a lot. It would be great, wouldn't it, to just have that kind of information: if I search in a non-English language, what's the probability of me getting extra information? If I miss out these databases, what's it going to do?
What's the probability of a certain risk of bias? You know, if we start assembling that kind of information as metadata as we're generating our systematic reviews, then we'll have a really rich ecosystem for evidence synthesis. Yeah, I feel like sometimes, as a field, we are perhaps a bit afraid of having that conversation in detail, because of the fear that it might undermine the field and our jobs. Yeah, there are some bits of received wisdom that I'm quite happy to do away with, particularly around the idea of having every single study, and there are other bits of received wisdom that are absolutely fundamental in one field and ignored in another, like critical appraisal, where I've got a really strong view that everyone should be doing it and there are no exceptions, ever. But, you know, the meta-epidemiology to back that position up isn't there; it's just what you think, isn't it? It certainly addresses all these biases, right? Because changing, I mean, the number of studies and which studies we include in a review potentially introduces biases as well, right? So again it highlights that the quality of the review is important. Yeah, and your mistakes can be illuminating like that, can't they? I did one review where I got my treatment and control mixed up the wrong way around, so my effect size was in the opposite direction, for example. And, you know, I'd say that was a perfectly good systematic review. The data extraction wasn't done in duplicate, so maybe if it had been, then that mistake wouldn't have been made. But if it hadn't been a systematic review, there wouldn't have been the study characteristics tables that let other people go, Gav, you've made a mistake, you great Muppet. And then people published a load of papers saying, you made a mistake, it completely invalidates your results and you're talking rubbish. And it doesn't make the slightest bit of difference, either to any of the heterogeneity or to the pooled effects; okay, the confidence intervals would probably have been a tiny bit wider, big deal. So sometimes you can take these shortcuts, you can miss studies, you can make mistakes, and it doesn't matter. And other times it could have a devastating effect, and the trouble is knowing when it is and when it isn't. No, absolutely. And particularly when we have only a small number of studies in the analysis, that's a really huge issue. And hopefully those tools will help us avoid some of these mistakes, if, of course, they are developed for data abstraction, for example, because, at least in my experience, we always have data abstraction errors. We always have to go back to the papers and check, or we have to contact the authors: is this what you report here? Standard errors are often misinterpreted as standard deviations and vice versa. So things like this are very important in any type of review, rapid or systematic. And also the value judgments and the arguments: if you've got a complicated study and you try to figure out how you should generate the effect size, and which of the confounders to use, and all the rest of it, you could have a massive debate, and in fact a whole field could have a massive debate about that, for quite a long time, before it came to a collective judgment. So yeah, the value judgments that are in there are always open to question as well, aren't they?
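On Argie's point about standard errors being mistaken for standard deviations during data abstraction: the two are related by SD = SE x sqrt(n), so a crude automated sanity check is easy to script. A small sketch with invented numbers; the check_sd helper and its threshold logic are just one possible heuristic.

```r
# Sketch: flag extracted "SDs" that look suspiciously like standard errors,
# using SD = SE * sqrt(n). Invented numbers; the rule is a crude heuristic.
check_sd <- function(sd_reported, n, typical_sd) {
  implied_se <- typical_sd / sqrt(n)
  ifelse(abs(sd_reported - implied_se) < abs(sd_reported - typical_sd),
         "possible SE reported as SD", "looks like an SD")
}

# Most studies report SDs near 10; two report values near 10 / sqrt(n)
check_sd(sd_reported = c(9.8, 1.4, 11.2, 2.1),
         n           = c(50, 60, 45, 55),
         typical_sd  = 10)
```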
So I think what we're basically saying is that what R can do is speed up those small things at the moment, and in the future it can speed up some of the bigger things. There's always going to be a need for human oversight; there's always going to need to be someone there, particularly for the more in-depth parts of the process, and critical appraisal perhaps is one bit that an R package might not be able to do very well by itself. I think one of the things we probably haven't mentioned enough is that having your R code open and available allows people to look at your code and see what you're doing. Matt, you mentioned before that the ease of using some of these tools isn't necessarily a good thing, because a lot of people will just run them without thinking about them, just run a meta-analysis without really knowing what they're doing or why they're putting the data in particular places. But at least with R code that is open and shared, someone can go and have a look at that code and rerun it and say, hang on a second, as Gav found out with his paper, something's gone wrong there, so we can have a look again and try again. So I think that R combined with open science will speed up normal reviews, but then there's also this question of what a rapid review is, or an ultra-fast review, or a super-duper speedy review, all those things that we don't really know what they are yet. So we've got one minute left; I don't know if anyone's got any final thoughts. Yeah, just on that point, I think a lot of this links into more general conversations in open science about things like version-controlled publishing and iteration, because often what is holding us back is this goal of a perfect end product, and especially when you're doing systematic reviewing you're dealing with other people's data, and loads of it. That can be an anxiety-inducing thing which holds you back. Yeah, I mean, so eLife's new model, although it's controversial, is sort of moving toward that concept. Yeah, or the idea you were talking about where you sit between a systematic map and a full systematic review, and then you have a systematic review and someone comes along and adds another population to it, or adds another outcome to it, and you're into the kind of whole living systematic review type idea. Yeah. Yeah, I much prefer that, sort of the idea of this automated living systematic review which updates itself, being able to sort of chip in on other people's research, tag in, tag out. Yeah. Yeah, people have talked about that as an alternative to peer review as well, haven't they, which is interesting in the context of critical appraisal: if 556 people have all worked on it and looked at it and they're happy with it, that's telling you something about its validity. Okay, I'm going to call it a day there. So thank you all for joining us, thank you everyone for watching, and I hope you enjoy the rest of the conference.