All right, I think it's time to get started. So welcome everyone to ESMARConf 2022 and this workshop on how the Collaboration for Environmental Evidence can help you, with Ruth Garside, and I'm delighted that she's here to talk with us today. Just a quick note, if you're watching on YouTube and have a question for our presenter, please reply to their tweet in the @eshackathon Twitter feed. We'll reply to those as soon as possible. And again, we just want to make sure that you're aware of our code of conduct, which is available on the ESMARConf website at esmarconf.github.io. So welcome everyone that's in this live workshop and everyone who's watching on YouTube, and Ruth, take it away. Thanks very much for that. So I'll just get my slides up. Okay, so hopefully you can see that. So hello everyone, welcome. Good morning. I'm going to be talking a little bit about the Collaboration for Environmental Evidence and what it can do for you. I'll cover a little bit of background about systematic reviews and evidence synthesis and why those matter, and then talk more specifically about CEE and the kinds of activities it does, how it can help you and how you can get involved. My name is Ruth Garside and I'm from the European Centre for Environment and Human Health, which is part of the medical school at the University of Exeter in the UK. And I'm also part of the UK Centre for the Collaboration for Environmental Evidence. And just to say many thanks to my colleagues, including Neal at CEE, for many of these slides. Oh, and also I'm really happy to be interrupted. So if you've got a question for clarification, or you want more detail on something, or something didn't make sense, you're very welcome to raise your hands and talk or drop something in the chat. Or, as Emily said, use the other routes on YouTube and elsewhere to ask questions. So why do we do reviews? We often want to combine small parts of the whole picture. And we know that single studies are rarely so large, robust and unequivocal as to answer a question alone. By doing a systematic review, and if that includes a meta-analysis, we can increase the power that we have to answer specific questions. And that is because we can increase the precision of our answer, improve the accuracy, and look comprehensively at the evidence which is out there. Systematic reviews also consider the risk of bias, or the quality, of the underlying studies, which is really important when we're thinking about how much confidence we want to place in the answers. And they can also look at different study contexts and try to understand what the impact of that is on study findings. It's also a way of dealing with expanding evidence bases efficiently. We know that the amount of research that's being published is increasing, and it's very difficult to keep on top of any one particular question. Systematic reviews are a way of bringing together and combining what we understand about these expanding evidence bases. And they can also help us understand why study results conflict. We often see that some studies on a similar question come to different conclusions, and systematic reviews, by examining things like quality and context, can help us to try and unpick why that might be happening. So we certainly need reviews. But why do we need reliable reviews, like systematic reviews? These might be particularly important where there's uncertainty about the impact, or where there's controversy or disagreement about what the answer is.
We also need to produce reviews where there's accountability and transparency about how we've identified the studies and how we've dealt with them in order to come to the conclusions that we've come to. We need to be objective about this and not allow our preconceptions to influence what we're looking at. And reliable reviews are also good where there's a large or disparate evidence base. Traditional literature reviews may be subject to a range of limitations or fatal biases. The problems with traditional literature reviews, and this probably isn't an exhaustive list, are things like selection bias: including only some of the studies on a particular topic, and particularly selecting studies which have the desired answer from your perspective. We really need to look at the whole evidence base on the question to come to an unbiased conclusion about what's going on. They also may not be comprehensive, missing studies, and that may be linked to publication bias. We know that studies are more likely to be published if they have positive findings. They're more likely to be published in higher impact journals if they have positive findings. They're more likely to be published in English language journals if they have positive findings. And so this can lead to publication bias, where we are overestimating the impact. We need to look at other sources. We need to make sure that we've identified the smaller journals and unpublished studies, grey literature as well, in order to be certain that we've got a comprehensive picture. Traditional literature reviews also tend to lack transparency. So there isn't necessarily a methods section for these sorts of reviews which tells you how particular studies have been selected for inclusion. And they may not treat all of the studies equally. So you get discussion bias, where some studies are given more consideration than others, and we don't necessarily know why. So all of that can add up to a lack of replicability. Because you haven't got a transparent description of how studies have been selected and dealt with, the review itself couldn't be repeated by another team, which means that you can't check its conclusions in any robust way. And then there's also the issue of vote counting results rather than doing a synthesis, which, as I'll talk a bit more about in a minute, can lead to misleading findings, and quality bias, where we talk about particular sorts of studies in particular ways. So just a reminder that systematic reviews and meta-analyses overlap, but they aren't always the same thing. You can have systematic reviews where it's not appropriate to conduct meta-analyses for a range of reasons, usually to do with the heterogeneity of the interventions, or the range of outcomes that are used, or the differences in context of the studies that are included. And so these reviews will have a narrative synthesis. And there are also meta-analyses which don't base themselves on systematic review methods. So a single lab, for example, might conduct a meta-analysis across the studies that they have conducted, but not include all the studies that are out there on a similar topic. So there is overlap: systematic reviews can include meta-analyses and meta-analyses might be part of a systematic review, but they have this overlap in the middle. So I was gonna talk a little bit about the problem of vote counting and why this can be a challenge to coming to robust conclusions. And here's an example.
Say we've got nine studies in a review. None of them show any significant positive or negative effects. So you've got nine responses which are not statistically significant. So you might conclude that there's no impact if you were just gonna vote count these nine studies. But let's look at these in a meta-analysis. This shows the results from these nine studies; these imaginary studies are labelled A, B, C, D, et cetera. And the graphic here shows the effect size, which is the blob, and the confidence interval, which is the line. And here you've got negative effects and positive effects, and this line up the middle is the line of no impact. So you can see that all of these studies have confidence intervals across the line of no effect. So we might think, from counting them, that there's no impact. But if you were to meta-analyse these studies, and a meta-analysis effectively treats all these studies as though they're part of one very large study, this can increase the precision, because you're effectively dealing with a larger sample size, and reduce the uncertainty around the effect. And so, by combining these studies, you may find a statistically significant positive impact which wasn't visible simply through vote counting. So that's one way that vote counting can give misleading responses, and why systematic reviews can be so important. An alternative example of this: we've got again nine studies, but we've got two that show a significant negative effect, four that show a significant positive effect and three with no statistically significant finding. Again, if we look at these in terms of meta-analysis, each of these findings represents one of the study findings. So you've got these two, which have a significant negative effect. These three significant positive effects, I'm sorry, four, and then you've got three where there's no statistically significant impact because they've got a confidence interval that crosses the line. But again, if we were to analyse these, rather than perhaps, if you were vote counting, suggesting that there was a balance of evidence showing a positive effect because you've got four studies showing that, actually meta-analysing these might show that there's no statistically significant impact overall. And again, you could further suggest that there might be something going on here which is leading to these different clusterings of findings. So why do these three tend to imply a negative effect? If you just meta-analysed those three studies, you'd get that finding. Similarly, for this grouping over here, you'd get a positive finding. And this kind of variability can again be explored through subgroup analyses or even meta-regression. And you might find explanations about the context of the studies which would provide information about this different grouping. So perhaps it's because these studies are conducted in similar but different localities, or they use different methods, or they're conducted on a different species, or whatever. So again, you can understand a bit more about whether or not there might be differences in groups of studies through doing meta-analyses, which is not visible at all through vote counting alone. So those are just a couple of slides to offer a bit more explanation about why vote counting in reviews can be so problematic and can be misleading.
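To make the first vote-counting example concrete, here is a minimal sketch in R using the metafor package. The nine effect sizes and standard errors are invented purely for illustration; each study's 95% confidence interval crosses zero on its own, yet the pooled estimate does not.

```r
# Nine hypothetical studies: modest positive effects with standard errors
# large enough that every individual 95% CI crosses zero.
library(metafor)

dat <- data.frame(
  study = LETTERS[1:9],
  yi    = c(0.15, 0.22, 0.10, 0.18, 0.25, 0.12, 0.20, 0.16, 0.14),  # effect sizes
  sei   = c(0.12, 0.15, 0.11, 0.14, 0.16, 0.12, 0.13, 0.12, 0.11)   # standard errors
)

# Each study alone is non-significant: every lower CI bound falls below zero.
with(dat, cbind(lower = yi - 1.96 * sei, upper = yi + 1.96 * sei))

# Pooling with a random-effects model increases precision; the combined
# estimate is clearly positive even though a vote count would say
# "nine non-significant studies, so no effect".
res <- rma(yi = yi, sei = sei, data = dat)
summary(res)
forest(res)  # blobs, confidence intervals and the pooled diamond
```

For the second, clustered example, a moderator model such as rma(yi, sei = sei, mods = ~ locality, data = dat), where locality is a hypothetical column coding the study context, is how you would test whether the groupings of findings are explained by context.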
So just to return to the definitions: what is a systematic review or a systematic map? It's systematic because it's done according to a fixed plan or system, and it's methodical. Methodical, excuse me, it's early in the morning here. And it's important for systematic reviews that a plan of the project is written before you start. So systematic review protocols are really important in this context, again to help to try and reduce bias through the process of doing the systematic review. The review part of this involves a critical appraisal and the synthesis of a body of work. But we also undertake systematic maps, and maps don't try to answer the question about whether something works, but rather try to represent the relationships between elements such as objects, regions or themes. A systematic map also follows a protocol and is done to a fixed plan, but it might be looking to understand how a broad topic area has been researched, and to identify areas where there is lots of evidence, which might benefit from a full systematic review, or where there are gaps in the evidence base, which suggests there could be useful primary research done in those spaces. So the things that make a review systematic include, as I mentioned, an a priori protocol. This is your plan, which should be published somewhere accessible, either through a journal like Environmental Evidence, where it would get fully peer reviewed, or in one of the various places where you can log your systematic reviews, including a new database which CEE is involved in setting up. This allows people to see what you plan to do, but it's also a good way of trying to reduce research waste: you can check whether or not somebody is already doing a systematic review on a topic of interest. The next thing that systematic reviews have is a comprehensive search. This includes looking across multiple databases to make sure that you don't miss important studies which answer your question. And the a priori protocol should include a structured question for your systematic review, which clearly lays out the question that you're trying to answer. The comprehensive searching, as I said earlier, should also include grey literature, and this is really important to try and reduce the impact of publication bias on your findings. So that can include looking for unpublished material, but also the grey literature proper, things like reports from research bodies or government bodies or charities who've done research in this area but don't publish in the academic literature. The systematic review needs to have these transparent and replicable methods, which describe exactly what you've done and how you've made decisions, and which would allow a different team to reproduce what you've done. And importantly, each of the studies is also subject to a risk of bias assessment or quality appraisal, to understand how robust the methods are that have been used in each study included in the review. The studies are then given weight, and in a simple meta-analysis this is usually related to the size of the study. Strictly speaking, it's related to the inverse variance of the findings, but that in turn is influenced by the size of the studies. And in some reviews you also see people weighting by quality as well. So you can weight the studies in the synthesis by their quality, or at least explore what the impact of different quality ratings is on the overall findings.
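For reference, the inverse-variance weighting just described is, in its standard fixed-effect form:

$$ w_i = \frac{1}{v_i}, \qquad \hat{\theta} = \frac{\sum_i w_i \, y_i}{\sum_i w_i}, \qquad \operatorname{SE}\bigl(\hat{\theta}\bigr) = \sqrt{\frac{1}{\sum_i w_i}} $$

where $y_i$ is study $i$'s effect estimate and $v_i$ its variance, so larger, more precise studies carry more weight. Under a random-effects model the between-study variance $\tau^2$ is added, giving $w_i = 1/(v_i + \tau^2)$.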
And finally, a synthesis of all relevant studies is an important part of the review. As I said, this can be a meta-analysis, but it might not be; there are lots of good reasons why you wouldn't do that, and a narrative synthesis is also useful. And I should say at this stage as well that this presentation is sort of focused on systematic reviews of quantitative evidence, and those questions which try to understand what the risk of something is or whether or not an intervention works. But it's also possible to do systematic reviews of qualitative research, so research which has spoken to people about their experiences, perceptions and attitudes, and to do an evidence synthesis of that kind of qualitative research. That's an area that I'm particularly interested in, so I'm happy to answer questions about that, but I'm not really presenting about it today. Ruth, just to interrupt you briefly, we do have a question, although maybe not quite about that, but related: given the importance of doing this systematic process for synthesizing evidence, why do folks in conservation, and also I think other fields, this is not limited to the field of conservation, rely so much on expert opinion and limited kinds of evidence summaries, rather than doing this full, systematic and rigorous review process? That's a great question. I wish I knew. I think there are several reasons. I mean, systematic reviews are relatively recent in the field of the environment. Certainly there's been a lot of activity over the last 10 years or so, but it's not as embedded as it is in other disciplines like health. And I'm just about to talk a little bit about some of the history of that. So there are still plenty of people who don't know about systematic review methodology. It's not necessarily part of any training or degree course in environmental and conservation areas. So I think it's still a relatively new field. And also, to do them well, they can take a long time. So I think that certainly things like expert opinion and other sorts of quick and dirty evidence reviews can be more appealing for decision makers, because they happen quickly. And that's always been a problem with systematic reviews: the time frames of policy and practice often don't fit well with the time frames needed to conduct a full systematic review. But I'd be really interested to hear what you think. Is it that these disciplines remain more hierarchical, and therefore expert opinion is difficult to challenge? Is that part of it? I really genuinely don't know. But yeah, I think that certainly was the way that decisions were made in all sorts of fields. And as I said, I think health is the place where systematic reviews have been developed the longest, and perhaps it's more of an automatic go-to there, the assumption that you need a systematic review on a topic to make robust decisions about it. But before it became embedded in practice, the same thing happened there. It was literature reviews, quick evidence reviews and a lot of expert opinion, making decisions based on eminence rather than evidence. So maybe we'll get there, maybe. I should also say as well that, having said those things about literature reviews and why they're not systematic reviews, I think there is still a place for that kind of sweep of the literature as a kind of background document or a general understanding about what the state of the field is. I don't think systematic reviews replace the need for those kinds of high-level overviews. I just think they do different jobs.
So they're not necessarily answering specific questions about effectiveness, for example, but they do another job. It's not that they have no worth; they're just not as reliable as a systematic review for answering those specific questions. Yeah, that makes a lot of sense. And I agree, I think it's a mix of reasons that that's not going on, unfortunately. And maybe different fields are more progressive than others, or other fields like health, as you mentioned, need to rely on evidence that they can count on, because if they don't, they have people's lives kind of hanging in the balance with some of these review topics. We do have a question in the chat for you from Andre. The question is: how do you keep your search systematic when you're searching for grey literature, where there's often this kind of unsystematic way of looking for studies, kind of fishing for studies, in comparison with when you're going through the very systematic way of searching electronic databases, where you can document everything very nicely? How do you do that with that kind of literature? Yeah, that's a really great question. And it is more challenging to do that in a way that is replicable and clear. I mean, for example, when you're searching institutional websites, those are often not static. So, firstly, you often have to use much smaller search strings, because their websites can't cope with a really complex search string. So you might be just combining one or two or three terms. And then, if it has the option, you might sort for relevance, depending on how their web engine works. But you know that if you do that the following day, they may have added material, it's not a static thing, or totally redesigned their website, or whatever. So I think all you can do is record what you did do. What were the search terms you used? What was the date, so that people know how long ago that was searched? And whether or not you screened everything. Again, sometimes if you do search in something like Google or Google Scholar, you get tens of thousands, hundreds of thousands of results and you can't look at them all. So you may sometimes decide to just search through the first 100, for example, to look for the most relevant things. But I think it's kind of inevitable that you're not going to be able to give descriptions as completely precise and replicable as you can for database searches. You just need to do the best you can. You need to say how you searched, exactly which search terms you used and how you combined them, when you did it, which websites you used, and be clear about what you did do. As far as possible, we just have to capture what we can. I mean, the other area which can be, depending on the topic, really useful is asking experts in the field if you've missed anything, and people tell you stuff. And again, all you can do about that is say what you did, because that again is something which is probably not fully replicable at the end of the day. But at least if you describe the process, people know how you got there.
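In practice, the documentation Ruth describes can be as simple as a structured log kept alongside the review. Here is a minimal sketch in R; the sources, search strings and counts are invented placeholders, not a prescribed CEE template:

```r
# One row per grey-literature source searched: what, where, when, how much screened.
search_log <- data.frame(
  source           = c("Google Scholar", "Government agency website", "NGO library"),
  search_string    = c('"water quality" AND biomanipulation',
                       "biomanipulation lakes",
                       "lake restoration"),
  date_searched    = as.Date(c("2022-01-10", "2022-01-11", "2022-01-12")),
  results_returned = c(15400, 62, 118),
  results_screened = c(100, 62, 118)   # e.g. only the first 100 hits for Google Scholar
)

# Keep the log with the review files so the (imperfectly replicable) search
# is at least fully described.
write.csv(search_log, "grey_literature_search_log.csv", row.names = FALSE)
```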
Thanks, Ruth. Yeah. We have another question from Xavier in the chat, around whether or not you have any good examples of conservation science systematic reviews with meta-analysis. And this may be something you get to in the talk, but I wanted to highlight it here if it's a good time for that question. Yeah, sure. Well, I think a good place to look is the CEE journal, which is called Environmental Evidence. There are numerous reviews, systematic reviews and maps and protocols published in that journal; it's the focus of the journal. And you would be able to find conservation reviews with meta-analysis. I think, Neal, you may want to plug one of your own on that answer. I'm not sure I want to now, under scrutiny. I've put it in the chat. Thanks, Ruth. I put links to the CEE website and the journal Environmental Evidence. And a great point from Murray Assey as well on including information specialists. Yes, absolutely. I cannot speak highly enough about the value of a specialist librarian or information specialist to conducting systematic reviews, including grey literature searches. They're a really, really important member of the team. And obviously, the systematic review really stands or falls on the quality of its search, because missing studies is not something that you want to do. So, yeah, I really recommend speaking to your librarian or, if you're lucky enough to have one, an information specialist about designing searches for systematic reviews. OK, so shall I go on? Yeah, I think that would be great. Yeah, OK. So just a little bit of the history of systematic reviews. Slightly unbelievably, the first one was done back in 1904. And again, this was on a health topic, but it really didn't start to take off in significant ways until the 1980s. And this was in the field of clinical medicine. It's now routinely used to synthesize evidence and inform policy in health care. And yeah, it's kind of the go-to summary; it's very much an expected part of what research does. In the UK, if you were going to conduct a randomized controlled trial, many funders wouldn't give you money to do that unless you had already conducted a systematic review to establish what's already known and whether the trial is needed. So it's very embedded in practice and in the research culture. But it wasn't translated into the conservation field until 2006, so really much later. So we are a newer field, but the methodology is being taken up by other disciplines too. There are a number of sort of sister organizations, like the Campbell Collaboration and the Cochrane Collaboration. Campbell covers topics as wide as international development, criminology, social care and ageing. So there are lots and lots of other disciplines that are starting to take this methodology on and develop it for their own fields from the original health position. So just as an example of why systematic reviews matter, this is taken from the health field and shows what goes wrong if systematic reviews aren't done. The Oxford Textbook of Medicine in 1987 had this statement in it: the clinical value of thrombolysis remains uncertain. And thrombolysis is a clot-busting drug, so these are the sorts of drugs that try to break up blood clots if you've had a stroke. And this diagram, I'm sorry it's a little bit fuzzy because it comes out of a print journal, shows the amount of research that had been done in this area by 1987. So the first trials looking at using clot-busting drugs post-stroke were done back in 1959. And each of these studies shows the findings: this line over one is the line of no effect, favours treatment is on the left, favours control is on the right. So in 1987, you can see why the medical textbook might have said that, as most of these are non-significant.
They're all over the place; they're on both sides of the line. But if you had meta-analysed them, you can see this tiny dot at the bottom shows a significant finding that favours treatment, if the meta-analysis had been done. So the Oxford textbook should have known this. And even more alarmingly, this one on the right is a cumulative meta-analysis, so each dot represents the synthesis of all the studies above it (there's a small sketch of how to run one after this passage). And you can see that even by 1971, there was starting to be quite a good indication that this was an effective treatment. Then there were a couple of studies which showed non-significant findings. And by 1973, and ever onwards from 1973, all these additional studies that were done just reaffirmed that this was an effective treatment. And so we've got this final meta-analysed point in 1988. But really, we had a good picture by 1971 that this was working. And the waste that happened because these meta-analyses and systematic reviews were not done is immense. The cost of doing these additional 20 or so trials is huge. People were randomised to not receive the clot-busting drug when there was clear evidence that it worked, so that's not very ethical. And in the real world, people almost certainly died because they weren't given an effective treatment, for nearly 20 years after it would have been clear that it worked, had the systematic review and meta-analysis been done at the time. So I think it's these kinds of figures that have really galvanised the health field into understanding what systematic reviews and meta-analyses are worth. And it would be good to see, as you say, the kind of conservation and environment field recognise that actually we might be having similar catastrophes. In the time of a climate emergency and risk of mass extinction and so on, I think we really need to make the most of the evidence that we have in order to make good decisions in this field. So, yeah, these are the steps of a systematic review. As I mentioned before, the overarching principles of doing this include transparency, comprehensiveness or representativeness of the data, and procedural objectivity as far as possible. The data that can be systematically reviewed include quantitative, qualitative, as I said earlier, or mixed methods studies. And these kinds of data can be used to answer different sorts of questions, but all are possible. So the ways of synthesising can be aggregative, combining similar studies on the same intervention and testing hypotheses; this is often the meta-analysis technique. But you can also do configurative reviews, where you're investigating heterogeneity to understand the role of context and other features in the evidence base. And this might be hypothesis-generating; many reviews of qualitative evidence fall into this, but if you've got a very messy evidence base, that might also be true for quantitative studies. So someone was asking about reviews in the environmental field. These are some examples of the types of topics that have been considered and published in the journal Environmental Evidence: things like components of biodiversity and their influence on poverty, assessing community-based conservation projects, or trying to understand the impacts of urban agriculture programs on food security in low- and middle-income countries. So a really wide range of topics; I'd encourage you to have a look.
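On the cumulative meta-analysis mentioned above: the metafor package supports this directly via cumul(). A minimal sketch with invented trial data (not the real thrombolysis trials), ordered by year:

```r
# Invented log odds ratios (yi) and standard errors (sei) for nine trials;
# negative values favour treatment.
library(metafor)

dat <- data.frame(
  year = c(1959, 1964, 1969, 1971, 1973, 1977, 1980, 1985, 1988),
  yi   = c(-0.35, -0.28, -0.40, -0.15, -0.30, -0.22, -0.35, -0.25, -0.30),
  sei  = c(0.40, 0.35, 0.30, 0.33, 0.28, 0.25, 0.22, 0.20, 0.18)
)

res <- rma(yi = yi, sei = sei, data = dat)

# Re-pool after each trial is added, in publication order: each row of the
# output is "all evidence available by that year".
res_cum <- cumul(res, order = order(dat$year))
res_cum
forest(res_cum)  # shows when the pooled CI first clears the line of no effect
```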
So, the policy role for systematic reviews, going back to that question answered earlier: we would love to see systematic reviews being more widely adopted in policy and practice, and the results of those systematic reviews being communicated to scientists and decision makers, which hopefully then will lead to evidence-based environmental management practices. Those can then be monitored, with continued research looking at different aspects, which again could feed into systematic reviews. And living systematic reviews are a very current topic; I'm sure there's stuff around that in this meeting. But one of the issues with systematic reviews is keeping them up to date and being able to incorporate new research as and when it happens. So that's an area which I think will be really interesting for development in the field of systematic reviews into the future. So this is an example which I'm now embarrassed to try and talk about, because I think this is one of Neal's reviews. This was about whether or not biomanipulation could improve water quality in eutrophicated lakes. I can't say it. And this was a study where the search identified nearly 29,000 results, which related to 14,000 articles once the duplicates were removed. The screening was done in three stages: initially looking at the titles to see if they seemed relevant, and then looking at the abstracts of those studies. So the number of studies relevant against the review inclusion criteria went down to 1,900 after title screening, and then again down to 551 after abstract screening. 22 of these studies couldn't be found, so the actual full text couldn't be obtained. 231 of these studies were relevant to the review after screening the full-text papers. A further 29 were excluded at critical appraisal as having fatal flaws in the design, leading to 128 manipulations in 123 lakes being included in the meta-analysis. So this suggested that biomanipulation was effective, certainly up to four years, in these analyses of Secchi depth, and looking at chlorophyll in the water as well. So this is an example of a systematic review looking at how a particular intervention might lead to changes in water quality, and how they came to those conclusions looking at the different studies that were under review.
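Attrition figures like these are worth tabulating as the screening proceeds. A trivial sketch using the rounded counts quoted above (as given in the talk, not checked against the published review):

```r
# Records remaining after each screening stage (rounded figures from the talk).
stages <- c(search_results    = 29000,
            after_dedup       = 14000,
            after_title       = 1900,
            after_abstract    = 551,
            fulltext_included = 231,
            in_meta_analysis  = 128)

# Proportion retained at each stage relative to the previous one.
round(stages[-1] / stages[-length(stages)], 3)
```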
So just quickly then, about systematic maps. Systematic maps tend to ask broader questions than reviews and aim to catalogue the evidence that's out there on a particular topic. They usually don't fully quality-appraise the studies included, but they might map the study designs used to answer the questions. They look at what has been done and where it's been done, and may chart intervention and outcome patterns. One of the outputs of this is an interactive database where people can search on particular topics, locations, study designs or outcomes. So people can use this as a kind of personal library catalogue, if you like, of all the studies that have been done on the broad topic area. And we have guidance for doing systematic maps in CEE; I'll talk a little bit more about that in a minute. So these are the stages for a systematic map. As you can see, they contain most of the same stages as a systematic review. They don't usually include critical appraisal, and usually there's a description of the studies, but it doesn't try to summarize the findings, because without that critical appraisal step and without a full synthesis, you may end up with misleading findings if you try to do that. So the outputs of systematic maps are often these kinds of visualizations. On the left, there's a map which shows where the studies have been conducted. Sometimes these are colour-coded to show you different interactions. And these interactive maps are produced, thanks to Neal and team for this, with a bit of open software on GitHub. So you can upload your extraction sheet, and if you hover over these dots, it gives you the details of the studies that have been done in that area. So that's a really useful resource for researchers interested in research that's been done in particular areas. And then the other kind of output is these heat maps. This one is from a large map about the connections between conservation activities and a whole range of human wellbeing outcomes. The darker colours show more research done in that area. So this shows you that you've got quite a lot of research which has looked at economic living standards as a measure of human wellbeing, but very little that's looked at some of these other measures, including, perhaps surprisingly, direct measures of health in relation to conservation activity. So these reviews and maps can be used to inform decision-making in policy, practice and research.
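A heat map like the one described is straightforward to build from a map's extraction data. A minimal sketch in R with ggplot2; the interventions, outcomes and study counts are invented for illustration:

```r
library(ggplot2)

# Cross of interventions and outcome categories, with invented study counts.
map_dat <- expand.grid(
  intervention = c("Protected areas", "Community forestry", "Payments for ecosystem services"),
  outcome      = c("Economic living standards", "Health", "Education", "Social relations")
)
map_dat$n_studies <- c(42, 18, 25, 3, 1, 2, 8, 5, 6, 4, 2, 3)

# Darker cells = more studies; pale or empty cells flag evidence gaps.
ggplot(map_dat, aes(x = outcome, y = intervention, fill = n_studies)) +
  geom_tile(colour = "grey80") +
  scale_fill_gradient(low = "white", high = "darkblue", name = "No. of studies") +
  labs(title = "Illustrative evidence heat map") +
  theme_minimal()
```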
So that's my sort of quick whip through systematic reviews and systematic maps. And I'm now just gonna talk a little bit about the Collaboration for Environmental Evidence, or CEE. The goal of CEE is to promote and support rigorous evidence synthesis on issues of the greatest concern to environmental policy and management. It provides a platform for coordinated registration, peer review and publishing of evidence syntheses, and that includes protocols and it includes systematic reviews and systematic maps, through the journal Environmental Evidence. It also provides guidelines and standards for planning systematic reviews, so that's when writing the protocol, and for the conduct of evidence synthesis, for both evidence maps and systematic reviews. The reporting standards are contained in ROSES, and there's a checklist which shows you which elements you should be reporting, but there's also a guidance handbook to talk you through what you need to do. And all of these are available for free through the CEE website. CEE also provides training on systematic reviews and systematic maps, and helps to develop new methodologies relevant to the field. It provides an evidence service for decision makers, telling them what's known about particular topics. And it also maintains a network of collaborations to try and build capacity for, and advance the subject of, evidence synthesis. So we're really trying to address that question from earlier about why people are still relying on expert opinion rather than systematic reviews to make decisions. And that's part of the role of the Collaboration for Environmental Evidence: to promote these methods, and to promote the reviews themselves as ways of answering critical policy and practice questions. So there are collaborating centres internationally. In the UK, that's a collaboration between three universities. And then there are also centres in South Africa, in Stockholm, there's an Australian centre, there's one in Canada, one in France, one in Chile and one in the US. So it's not quite world domination yet, but we do have a good spread of centres globally. And there are also different thematic areas which CEE covers. So the overarching strapline is around the effectiveness of environmental management interventions and the impact of human activities on the natural environment. But more recently, we've been looking at topics like environmental sustainability, the links between public health and the environment, and social welfare. And these have included working through joint projects with other organisations in the kind of evidence synthesis world, so Evidence Synthesis International, the Global Evidence Synthesis Initiative and the Africa Evidence Network, trying to respond to the interests and needs of these other global organisations who are trying to bring together people with an interest in evidence synthesis across different topic areas. So as I said, there are a number of freely available resources online. Those include the guidelines, setting the standards for the conduct of evidence synthesis, and also an online open-access database of evidence syntheses, CEEDER. There's open software and there are reporting forms to support review teams in the conduct of systematic reviews, and the journal Environmental Evidence, which is open access, to disseminate findings. There are also online self-paced training courses about systematic reviews. So just to say a little bit more about training, because I think this is a key part of how CEE tries to spread the word and build capacity around systematic reviews. We can deliver physical workshops of different lengths: an introductory course of one day, or more comprehensive courses of sort of two to three days, which take you through all the steps, have a lot more interactive activities, and talk about how to conduct your own systematic review. And then we can also offer specialist courses on particular aspects of systematic review, for example on meta-analysis. We can also endorse trainers and training courses if they meet the CEE standards and guidelines for what's being taught. So if you take a piece of CEE-endorsed training, it will follow the guidelines and the standards for systematic reviews in environmental evidence. And then there's network support and guidelines for CEE-endorsed trainers. So if that's something that you want to get involved in, do get in touch with either myself or Neal about training for CEE. And the training team for CEE monitors and evaluates these CEE training activities and coordinates with other groups to ensure that our training is up to date and state of the art. So the training team, the core team, includes my colleague Jackie, who was on the previous photo, myself, Neal, Terry Knight and Nikki Randall. And we've also had guests who've taught CEE training, a larger list on the right. And there's also this really good self-paced, free, online methods course about evidence synthesis, where you can work through the different modules around systematic reviews and systematic maps. There are also modules thinking about stakeholder engagement, which can be really important in a systematic review to make sure that what you're doing is relevant and useful to potential end users. So again, the online systematic review and mapping methods course looks at how to get a comprehensive, accurate and representative overview of a research topic or question, and an accurate and precise summary of the impact or effectiveness of a particular factor or intervention, and covers things like searching, critical appraisal, meta-analysis and assessment of the evidence for high-profile controversial topics.
If you want to get involved with CEE, there are lots of ways to do that. The first step is perhaps to undertake your own CEE systematic review or map. As I said, the CEE website contains guidance for the conduct and reporting of systematic reviews and maps and the protocols for those, and the journal publishes them. And there's a really supportive peer review process through that journal. If you are starting a systematic review, publishing your protocol in the journal is a really good way to get some expert feedback before you start. You can also form or join a review group or a methods group. There are groups with interests in stakeholder engagement, methods around systematic mapping, rapid reviews, and technology to support evidence synthesis. You could register for notifications from Environmental Evidence, which will tell you when new papers or protocols have come out. There's a LinkedIn discussion group for the Collaboration for Environmental Evidence. And there are also Twitter accounts you can follow for updates about what's going on. So lots of different ways to get involved, and we'd love to see new members if you are keen on that. So sorry, Ruth, quick question. If someone wanted to join one of the existing groups, what does that process look like for them? So at the moment, I think these are fairly informal. If you want to join one of these groups, if you go to the website, which is at the bottom of this slide here, there's a tab about groups. And there's a contact person, I think, on that page. So you can get in contact and express an interest, and someone will get back in touch with you. So yeah, this is my, I'm nearly done anyway, this is my final slide. So the take-home messages from this: traditional reviews are susceptible to bias, and so systematic reviews and maps are really important to provide transparent, repeatable, objective and comprehensive summaries of the literature that's out there. Systematic reviews tend to look at questions around effectiveness or impact. They're really good at answering narrow questions in depth, and they produce a full synthesis of the findings of the studies that are included in the review. A systematic map is better for trying to understand the state of the knowledge or the state of the research on a particular topic. This usually relates to a much broader question, where the research topics and approaches and localities and so on are catalogued, rather than there being a full synthesis. And the Collaboration for Environmental Evidence, CEE: we're a diverse community, trying to support anyone who wants to conduct a systematic review or map. There are lots of different ways of getting involved and plenty of resources to try and support people who are new to this area. So yeah, that's it from me. It's been really nice to talk to you and to tell you a little bit about CEE, what it does, and why we think it's a worthwhile organization to get involved in. So if people have questions, I'm very happy to talk further. Thanks, Ruth. That was really, really wonderful. Thanks for taking questions throughout as well. What I'm gonna do is unpin you. And I think it could be great now, if folks have questions, to turn your video on and unmute yourself, for anyone who's here in the live Zoom, and we'll still obviously keep taking questions on YouTube. And I guess just maybe to get questions rolling, I had a question actually.
So I work in psychology, and addiction science specifically, and something that I've been really engaged with is doing meta-reviews, so reviews of reviews. And I wasn't sure, in your work, in your field, if that's something that is becoming more common practice, and if you have trainings around those things as well, or what that might look like methodologically from CEE. Yeah, that's a really interesting question. I don't think we do have guidance at the moment about meta-reviews. And I mean, they get called different things, don't they? Overviews of reviews, systematic reviews of reviews, reviews of reviews. I guess partly because we're a younger discipline, having multiple systematic reviews on a similar topic is not that common yet, because we're still a step behind. So yeah, I think that may be something to think about for the future, but at the moment it's not such a priority, because we don't find ourselves in that fortunate position of having multiple systematic reviews already out there on similar topics. Yeah, that makes a lot of sense. But maybe something to start thinking about for the future. Yeah, no, it's a good point, in terms of future-proofing approaches. Yeah, I think it's a real challenge. I did one that nearly blew my brain when I was trying to account for overlap between studies; this was around physical activity and older people, where you can imagine there's a lot of existing reviews on that topic. And yeah, trying to work out how to account for overlap between studies across different reviews was a real challenge. Yeah, it is a real challenge from a data management perspective, in terms of data collection, and then also how to actually look at overlap in a way that's meaningful and not just kind of reporting this percentage of studies that overlapped. Yeah, that's a question that I ran into quite a bit with my meta-review work as well. Yeah, I can imagine. I think there is work; there's certainly a group in Cochrane who are looking at overviews of reviews. And I think there may also be one in Campbell that's starting to look at that. So are you part of that? Yeah, I am part of that group, which is kind of what prompted my question. Well, because in some ways I think about evidence gap maps as being one way to kind of think about a meta-review, or at least the first kind of initial stage, and then moving on to a meta-review maybe after that if it's necessary. And Joseph has posted a link to a meta-analysis of meta-analyses, so a meta-meta-analysis, in the chat for anybody who's interested. So that could be a great example. Can that be a mega-analysis? I've also heard it called a secondary meta-analysis, but I like mega. I think that's wonderful. We have a question from Twitter, from Alexandra Bannach-Brown. So Ruth, what are your experiences with growing a group of trainers in evidence synthesis? How do you best train methodologists and develop them as future leaders in the field of evidence synthesis? Wow, that's a huge question. I mean, I think there are lots of ways. Obviously the best learning is by doing. So I think encouraging early career researchers and even PhD students to undertake systematic reviews as part of their study. That's certainly been something which has hugely increased the capacity of health researchers to do systematic reviews. It's almost ubiquitous that the first chapter in a PhD for a postgrad student is a systematic review on a topic relevant to their thesis.
So lots and lots of people have experience of what a systematic review is and how to conduct one. So I think that's a really interesting way in to trying to support people to do systematic reviews. I also think there are lots of ways of engaging people in some steps of a systematic review. You have to do double screening and double data extraction, or at least data extraction checking, and quality appraisal and so on. And so you can start to involve people in some of those steps of the reviews, so that they understand the rigour that's behind it, but it may not take a massive amount of time if they don't have time to be part of the full systematic review team. Obviously I think the training is really useful. And as I showed, there are the online resources, but there are also the training courses that CEE delivers. I think until you come to actually do one, some of that stuff doesn't click; until you start to try and do it, you don't really understand the rigours of doing a systematic review well. So I think those are some of the routes. And certainly there are, I think, increasing routes for funding for systematic reviews. More and more funders are recognizing the value of doing systematic reviews, and so being able to get a properly funded project matters. I think sometimes people think that a systematic review is something you can just get on with on the side, and it's not; generally it's a really big task, and having a properly resourced team to do these kinds of things is really important. So I think keeping the pressure on funders to recognize the value of funding this kind of project, either at the beginning of a larger project or as a standalone project, is also really important. Yeah, so those are a few suggestions. I could jump in there as well. I think it picks up a bit on Terry's comment in the opening session that there's a real need to work more interdisciplinarily, which I can't say. Lovely. Because I think, having seen a few different disciplines pick up evidence synthesis methods, there's this learning curve where people go, oh, look what they're doing, that's quite cool. And then you try and practice it with a few very specific case studies that might be slightly easy review questions. And then you're like, okay, this is quite good. And so you get a small community of people who buy into it and then try and promote it. And that takes quite a lot of work. People don't really value it, but you get somewhere, and then people go, okay, I'll have a go. And then that starts to pick up. And maybe people do reviews without much preparation and try and tackle some of the more wicked problems without realizing, exactly as you said, how much work they are. And then there's this battle between, yes, they're great, but they take a lot of work. And then it sort of builds up momentum as people go, okay, you need to have preparation, and we need to value them from a funding perspective. And my reflection is that it's just a bit frustrating when you see different disciplines doing that independently. So my question to you is how, I mean, it's possibly a leading question, because we talked earlier about how great this conference is because it's interdisciplinary, but how do we stop those siloed approaches from making all the same mistakes again? It's a really good question. And I agree that, you know, there's a lot of interdisciplinary learning which is important.
As I said, you know, the sort of CEE, Campbell and Cochrane are really the three big kind of coordinating bodies for systematic reviews across a lot of different topics. And there's a huge amount of learning there. But I think one of the issues is that systematic reviews sometimes get farmed out to people without training. You know, it's like, oh, this is something I can get my grad student to do, or I can get my postdoc to do. And they struggle in those silos without realizing that people have been asking similar methodological questions across different disciplines for a long time. There's a slightly cynical bit of me that thinks you might not be able to shortcut that learning. I've watched it happen in different disciplines at different times over the last, I hate to say it, but you know, 20 years that I've been involved in systematic reviews. And I wonder if there's a certain bit of stubbornness within disciplines that means they have to learn those cycles themselves. You know, because you often get that kind of exceptionalism: oh, yeah, well, that's fine for medicine, but we're really different. We deal with different sorts of problems. We've got different complexities. Randomized controlled trials aren't everything. We look at different sorts of evidence, you know. And actually those arguments are happening within and outside medicine as well, across all sorts of different topics. But there's a bit of me that wonders if actually there still is a certain amount of siloed learning, that people have to try it for themselves in order to realise that actually they're not unique and those problems do exist across disciplines. But yeah, I mean, I think forums like these are great, where we're getting more crossover, and so are some of those international initiatives that were mentioned on the final slides of my presentation, you know, Evidence Synthesis International, the Global Evidence Synthesis Initiative and so on, the organisations that are really trying to recognise the shared issues for evidence synthesis across disciplines, but also across localities. I mean, I think the other issue we really need to tackle is the massive inequity between well-resourced universities and researchers, and universities and researchers that are not so well-resourced. All of this is predicated on being able to have access to multiple online databases, which cost money, and to be able to access the journal articles, which either your library needs to subscribe to or you need to buy. You know, so there's a kind of really inbuilt inequality in systematic reviews, which means that some areas of the world, some disciplines, don't have the same sort of access to knowledge that they need to do this kind of review. And I think that's a really challenging space. I'd really like to see us thinking about how we can support access to databases and journal articles for colleagues globally, because that's just not how it should be. Knowledge should not be something that you only have access to if you're from a well-resourced, high-income country university. Ruth, definitely. I think, I mean, personally, as an organiser, that's really what we wanted to do. We've seen people from Afghanistan, from Nigeria, we've got people from all over the world here, and we've been able to provide bursaries for people who need diesel generators and mobile broadband because they don't have it. I think it's so important. You see it in tool development as well.
To bring this back to R, I think you see a lot of cloud-based screening tools for review management, for screening your abstracts. And if you don't have consistent internet, it drops out and you lose your work. So I think there's that sort of, I don't mean it rudely, but ignorance about exactly what context people are working in. And so I think it's a really important point, not just to think interdisciplinarily, but also to try to put yourself in the shoes of the people who have the least and support them. So yeah, I really, really value what you're saying there. And I think that's very important. I agree. We have a question in the chat, actually. I wanted to highlight it because it's a question that I was considering asking you, Ruth, and it's about doing a systematic review for more qualitative data, because I know that's an area of your expertise. And so Andre's asked if you have any tips on how to go about it when you're dealing with questions that might not fit into the standard kind of PICO, you know, participant, intervention, outcome, comparator criteria that we often see guiding systematic review work. Yeah, thanks for the question. So, yeah, there's a whole load of kind of parallel but different processes for systematic reviews of qualitative research evidence, including different structures for thinking about how to frame your question about qualitative research. So there are things like SPICE or SPIDER. There's also PICo, where you've got population, phenomenon of interest and context. So there are different ways of thinking about how to structure those reviews. At the moment, the guidance for this is quite spread out. So there's, perhaps surprisingly, a series of papers from the Cochrane Qualitative Methods Group, which I'm part of, in the Journal of Clinical Epidemiology, which is a really odd place for it to end up, but they've been really supportive of qualitative evidence synthesis over the years. So there's a series of papers written by the Cochrane qualitative group in that journal, which came out a couple of years ago. That's probably a good starting place. If you go to the Cochrane Qualitative Methods Group website, there are links to all of those. And then I'm working on a very delayed brief guidance for Campbell and CEE on qualitative evidence synthesis, which has slightly been knocked sideways by COVID, but I'm hoping to get that out. And in the longer term, there's a book coming out, which is a joint endeavor between Cochrane and Campbell, about qualitative evidence synthesis, and that's being published by Wiley and should be out in 2023-24. So there'll be more information about that. But yeah, I think it's a really interesting field. And the other area which I think is really interesting and slightly underdeveloped is around mixed methods reviews. So how do you do reviews of parallel or similar topics which look at both quantitative and qualitative research evidence? And beyond that, how do you integrate across the quantitative and qualitative evidence bases to get a more holistic understanding of issues? So those mixed methods reviews are out there, but the guidance, again, for doing that kind of integration across evidence bases is not so well developed at the moment. But there are plenty of examples out there. If you search for qualitative evidence synthesis, that tends to be the umbrella name for systematic reviews of qualitative research. But yeah, drop me a line.
If you've got a specific question, I'd be happy to talk more about that with you. Yeah, thanks Ruth. And I think you bring up an interesting point about mixed methods reviews specifically, because at least in my field, in my reading of mixed methods primary studies, folks don't understand how to integrate those kinds of evidence. They present the quant, they present the qual, and then there's kind of some summary that doesn't really do those data justice. And so it sounds like that's also happening in evidence synthesis: people try to do the same thing, and there isn't really an understanding of how to systematically go about integrating the evidence to provide a fully comprehensive, meaningful picture. Yeah, absolutely. And also around thinking about: are you doing these parallel methods and then trying to synthesize them, or are you doing some kind of sequential method, where you're using one evidence base to inform the next? So for example, you might do a qualitative evidence synthesis first if you were trying to understand what people were saying about which outcomes are most important to them in a particular topic area. And then you could use the findings of that to prioritize the kinds of outcomes that you treat as most important in your meta-analysis, for example. And that would be one way of linking the evidence bases, but I don't think I've ever seen that happen. I've sometimes seen the quantitative done first, and then you get differences in the findings, for example, and then you might do a qualitative evidence synthesis and try and generate hypotheses about why you're seeing these differences. And you may even go back to that evidence base and try and reanalyze it in terms of specific subgroups, or categories of intervention, or something that maps onto the concerns of the people that you speak to. And that's probably the way that I've seen most work. And there are some really nice studies from the EPPI-Centre that do that, including using formal methods like qualitative comparative analysis to unpick the elements of effective interventions, for example. I think that's really helpful to think about, especially the idea of starting with the qualitative, like what are people looking for, and then diving into the quantitative. I haven't seen that either, but it makes me want to do a review that asks those questions. Yeah. Great, yeah. Neal, so I don't see any other questions. I don't know if you have any on your end or if we... I've been checking Twitter and I think we've answered everything. No more questions on Twitter or YouTube, just... Oh, yes. Two questions from Chris Smiler. I hope I'm pronouncing your name right. He's been following on YouTube the last couple of days, every session, which is brilliant. So they ask: please, what are the differences and similarities between a systematic review, a literature review and a review article, if you could tease apart those synonyms that are often confused? And then the second question: can systematic reviews be submitted as part of a proposal when applying for a PhD admission application, for example? OK, so the first one. I'm not sure I know the difference between a literature review and a review article. I think those are probably the same kind of thing, which is somebody giving a kind of high-level summary around a specific topic and what's out there. Whereas a systematic review is likely to ask a very specific question, such as, does this intervention work, or what are the risks of exposure to this chemical pollutant, whatever?
Or what's the impact of doing something? So those are very precise questions where you're trying to understand an answer. You might have some questions with multiple answers, but you're trying to understand the answer to a specific question, and you're following structured approaches which are all designed to ensure that you are as unbiased as possible and have as comprehensive a look at the relevant evidence as possible. That's a systematic review. The other two are fuzzier for me; I don't know the exact distinction, but they sit in a different place. And then, what was the second question? Sorry, Neil, I've forgotten. It was whether you could do a systematic review as part of a PhD application. Okay, so check in your field, but certainly in my world it's very, very common for a systematic review to be part of a PhD; it would be considered the first piece of empirical work within a PhD program. So check with your funders, but certainly in some areas that is absolutely acceptable. People also do methods PhDs where they focus entirely on different approaches to systematic review or synthesis, or on developing statistical methods, or whatever. So there's also a whole even geekier world out there of people developing methods for systematic reviews in their PhDs. And I think, to go back to your earlier point and question, Ruth, about the difference between systematic reviews and review articles: I think it takes looking at what the authors report actually doing, because I've seen a lot of folks use the terms systematic review and review article interchangeably when they're maybe not following systematic review methods. So I think it's really important to be critical when you're reading those articles and to check: did they actually do all the things that Ruth outlined as being important to a systematic review, as the guidelines set out? That line is not always very clear in the literature. I really agree with that. And I think because systematic and review separately have commonly understood meanings, people who don't know about systematic review as a methodology look at the word systematic and the word review and think, oh, I know what that means, that's a kind of slightly structured review, without realizing that there is a whole method behind it. So yeah, I really agree with that: always check. And sometimes people only tell you something about their search strategy, and that's the only bit of systematic-review-type activity that you have any information about. So I think the ROSES checklist is really useful for thinking about what should be reported in a systematic review in order for you to trust that it is actually a systematic review and not just a review by another name. That's a great point about looking at those checklists and seeing whether reviews meet those reporting criteria. Just to say, I've put in the chat, in Zoom and on YouTube, a really shameless plug of an article that Lawrence, Mike, Tamara Lotfi and I wrote recently, which is a glossary of review types aimed at saying when you need to be systematic and when it's just an overview or a primer. So it's very basic: do you need to be systematic, and how much? And just to really quickly answer that question on languages: the CEE guidelines are translated into Spanish and Japanese, and I believe possibly also Chinese.
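In the spirit of the always-check advice, here is a toy R sketch of scoring an article against a handful of reporting items of the kind ROSES asks about. The items below are paraphrased illustrations, not the actual ROSES form, and the TRUE/FALSE values are made up.

```r
# Illustrative reporting items, loosely in the spirit of ROSES.
items <- c(
  "search strategy reported in full",
  "eligibility criteria stated",
  "screening process described",
  "critical appraisal / risk of bias assessed",
  "synthesis method described"
)
reported <- c(TRUE, TRUE, FALSE, FALSE, TRUE)  # what you found in the article

print(data.frame(item = items, reported = reported))
cat(sprintf("Reported %d of %d items\n", sum(reported), length(items)))
#> Reported 3 of 5 items
```

A review that leaves most of these blank may still call itself systematic, which is exactly the point being made above.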
ROSES has been translated into Chinese, I know. But maybe Ruth knows more up-to-date information as well. I don't know about any other languages than those, I'm afraid. So, yeah, if anyone wants to take on the job of translation, that'd be great. Because the guidelines are provided in a website format rather than a PDF, you can use Google Translate much more easily on the website than you would with a PDF, although I can't vouch for the translations. And we have a question in the chat from Andre, which I think might be an interesting one to talk a little bit about, which is really: Ruth, do you think there are specific cases where you would recommend not doing a systematic review and taking a different review approach? So I think one of the key stages, which we didn't really talk a lot about in that brief run through of methods, is the scoping stage of a review. When you have an idea about a topic that you think needs a proper answer, that you might want to do a systematic review on, one of the first things you do is try to scope that topic area: to see whether somebody has already registered a protocol in that area. If somebody is already doing a systematic review on that topic, then I'd say a systematic review is probably not the most useful thing to do. Sometimes you do scoping and you find that the evidence base for that question is either much bigger or much smaller than you were anticipating, and that may influence your decision about whether to proceed. I mean, you can do a meta-analysis with more than one study, but there are some people who argue that if you've only got two or three studies, it's not worth doing a meta-analysis. So if there's not enough robust evidence out there to do a systematic review, you may not want to do it. It's really about the type of question: what do you want to know? Do you want to produce something that is a primer about a topic area, which covers some of the key areas and provides a nice narrative about what's going on in a field? In which case a literature review might be fine. If you want to know the answer to a specific question about the impact of something or the effectiveness of a particular activity, then a systematic review is the best approach, provided you've got enough evidence out there to make that worth your while. Yeah, and you know, I think that pre-scoping, scoping stage is really important. There's an older article now, but I think it's still highly relevant, where the authors looked at how long it takes to do a systematic review and whether there could be an equation you could use to estimate how long it would take. They built into the equation 720 hours as a baseline before you even get to the literature searching, the screening, the coding, you know, all the pieces of the systematic review. I think that really highlights your point about the importance of doing adequate scoping and planning ahead of time, so that you know whether you should be asking a particular question in a particular way, and whether you have the resources to do it. Yeah, I mean, the other bit of scoping which is really important is thinking about stakeholders as well.
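For a rough feel of the kind of equation being described, here is an illustrative R function: a fixed baseline of about 720 person-hours plus a cost that grows with the number of records retrieved. The per-record coefficient is invented for illustration; for real planning, use a dedicated tool such as the one linked below.

```r
# Illustrative, not the published equation: total person-hours as a
# baseline plus a marginal cost per record retrieved by the search.
estimate_review_hours <- function(n_records,
                                  baseline   = 720,    # fixed overhead cited above
                                  per_record = 0.24) { # invented marginal cost
  baseline + per_record * n_records
}

estimate_review_hours(c(1000, 5000, 10000))
#> [1]  960 1920 3120
```

The point is less the specific numbers than the shape: even a small review starts from a large fixed cost, which is why scoping before committing matters.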
You know, are you asking a question which is useful and important from the perspective of other researchers, publics, policymakers, and other people in the field? I think that's another really important area, because it is a big investment of resources to do a systematic review well, and making sure that the priority questions are being addressed really matters. That's one of the areas I'd love to see environmental evidence think about. There's a great organization in health called the James Lind Alliance, and they support patients to work together to prioritize key research questions from the patient perspective. And they do influence policy, so you're more likely to get funded if the review you're doing is on the priority list for a topic that the James Lind Alliance has identified. I'd really like to see other areas thinking about who those stakeholders are, who the people being impacted by the sort of research we're doing are, and about processes for that kind of stakeholder involvement to prioritize questions, so that we are addressing the most pressing questions first from the perspective of stakeholders, including the public. So yeah, that would be something really interesting to see in this field. Thanks, Ruth. Just to say, I've put in the chat, in Zoom and on YouTube, two resources. One is a set of resources on stakeholder engagement in evidence synthesis, from a special issue in Environmental Evidence, the Collaboration for Environmental Evidence's journal. The other is a link to a tool; I'm not sure if it's based on the same article you mentioned, but it's one of about three or four studies that have estimated the time requirements of a systematic review, and it does this in person-hours. The tool is called PredicTER, and it was an Evidence Synthesis Hackathon project. Great, thanks, Neil. So I'm not seeing any other questions. Neil, do you have any on your end? No, I can't see any more coming in on Twitter or YouTube, so I think we're all good. That was amazing. Thank you both so much. Yeah, well, thank you, Ruth, for your presentation and for being willing to answer all the questions that have popped up throughout. It's been really interesting for me to think about these different ways of applying evidence synthesis methods. So thanks everyone for asking some great questions, and we'll see you all at the next session, which starts in about half an hour, right, Neil? Great, thanks for having me and thanks for listening, everyone. Thanks everyone.