Okay, so hi everyone. I'm Phil Martin. I'm a postdoc at the University of Cambridge, where I work with the Conservation Evidence team, where we try to provide evidence to guide conservation practice. Today I'm really happy to be talking to you about an exciting new project I've been working on: developing a new method of meta-analysis that we call dynamic meta-analysis. We think this is a particularly useful way of providing evidence to guide local, context-specific decisions using a global evidence base.

The presentation of a meta-analysis usually looks something like this. In this case, this is a plot showing the effect of different interventions on invasive plant abundance, and as you can see, herbicide is the intervention with the greatest effect in terms of reducing invasive plant abundance. From the perspective of a manager, I might look at this and think herbicide is really useful, and go and spray it on my invasive plant species. But if I chose a herbicide that hadn't been used in any of the studies included in this meta-analysis and applied it somewhere else, maybe it wouldn't have as strong an effect on my invasive plant species as it did here. That would mean poor transferability from this meta-analysis to the context I'm working in. Equally, I might be a manager who looks at this and thinks, well, it hasn't got any information about the herbicide we used, and it hasn't got any information about the invasive plant species we're dealing with, so I can't take this general result and apply it to the context I'm interested in. I might just dismiss it as not being relevant information.

So what we see is that people want information that is relevant to their context. This relates to external validity: the transferability of results from one study to a different context. The study by Gutzat and Dormann that I've taken this quote from found, after interviewing forest managers, that recommendations on forest conservation were better accepted if they were formulated for the specific context in which the managers were working. In our experience at Conservation Evidence, you see this all the time. When there's a lack of relevant information provided through evidence synthesis, people often ignore it. What they do instead is pick a study or two that is relevant to their context and use those, or they use anecdotal information from friends and colleagues who work in similar contexts. This is probably true for lots of fields, but we see it all the time in conservation.

When you think about traditional meta-analyses, the process looks a bit like this. Consider all of the black boxes as things that researchers traditionally do, and the white box at the bottom as something that end users do. Researchers define the question, search the literature, extract data, analyze data, and present findings, and then the interpretation of the findings is done by the users along with the researchers. Dynamic meta-analysis, as we implement it in Metadataset, shifts these roles a little. What we envisage happening through dynamic meta-analysis is that researchers still define the question.
They search the literature and extract the data, but then the data is analyzed by the users, and it's also interpreted by the users to a certain extent. To support this, we've built a tool and a website called Metadataset. The tool is built in Shiny, and it allows you to filter and recalculate the data so that you can analyze it for your own context. Currently the website covers invasive species management and agricultural management, and I'm going to go through an invasive species example to make this a bit clearer.

Say I was a manager who wanted to manage the Japanese knotweed you see here, a widespread invasive plant species found all across Europe that causes large economic damage, and maybe I'd be interested in spraying it with herbicide. On our website, I'd click through to the section on Japanese knotweed, where I'd be presented with a bit of background information. I'd click on "Data by intervention", which gives a list of all the different interventions we found information for in our systematic review. I'd then click through to the intervention on using a herbicide and filter by different outcomes. That gives a summary of all of the studies that have used herbicide to control Japanese knotweed and where those studies were located. Clicking "Expand all" shows all of the different outcomes you can look at, and I'm particularly interested in the abundance of Japanese knotweed.

This launches a Shiny app. Clicking "Start your analysis" runs a bespoke analysis for me, showing the average effect of herbicide on Japanese knotweed across the studies we pulled out in our systematic review: a 73% reduction in Japanese knotweed. It summarizes this in a paragraph, along with a bit of information about the studies, and you can get a summary forest plot and a funnel plot. But we think the really powerful thing you can do is filter the information to your specific context. If I were interested in imazapyr, which is a particular type of herbicide, I might filter by those covariates, and this changes the result to an 80% reduction in Japanese knotweed rather than a 73% reduction. There are all sorts of covariates you can filter on, and we think this is potentially a powerful way for people to make a meta-analysis relevant to their context.

Another way of doing this is by looking at the study summaries. We have written short summaries of the methodologies used in the different studies, and you can reweight studies based on their relevance to you. This applies an extra weight in addition to the inverse-variance weighting, so reweighting studies can change your overall effect size.

This is all available on GitHub, so you can look at the code that's gone into it, and I can share the link on the Slack group.
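To make the filtering and reweighting idea more concrete, here is a minimal sketch in R of the kind of calculation involved, using the metafor package. This is not the Metadataset code: the data frame, the column names (herbicide_type, relevance), and the numbers are all made up for illustration. The point is just to show how subsetting to a covariate level, or multiplying user-assigned relevance into the inverse-variance weights, can shift the pooled log response ratio, which is then back-transformed to a percentage change.

```r
# A minimal sketch (not the Metadataset code): random-effects meta-analysis of
# log response ratios, with context filtering and user-supplied relevance weights.
# All column names and values below are hypothetical.
library(metafor)

studies <- data.frame(
  study          = paste("Study", 1:5),
  yi             = c(-1.4, -1.1, -1.6, -0.9, -1.8),  # log response ratio (treated vs control)
  vi             = c(0.04, 0.06, 0.05, 0.08, 0.03),  # sampling variance of yi
  herbicide_type = c("imazapyr", "glyphosate", "imazapyr", "glyphosate", "imazapyr"),
  relevance      = c(1.0, 0.5, 1.0, 0.25, 1.0)        # user-assigned relevance (0 to 1)
)

# 1. Overall analysis: random-effects model on all studies.
overall <- rma(yi, vi, data = studies, method = "REML")

# 2. Context filtering: keep only studies matching the user's covariate choice.
subset_dat <- subset(studies, herbicide_type == "imazapyr")
filtered   <- rma(yi, vi, data = subset_dat, method = "REML")

# 3. Relevance reweighting: multiply relevance into (approximate) inverse-variance weights.
reweighted <- rma(yi, vi, weights = studies$relevance / studies$vi,
                  data = studies, method = "REML")

# Back-transform the pooled log response ratio to a percentage change;
# e.g. a pooled estimate of -1.3 corresponds to roughly a 73% reduction.
pct_change <- function(model) 100 * (exp(coef(model)) - 1)
pct_change(overall); pct_change(filtered); pct_change(reweighted)
```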
What we want to do next, and part of the reason I wanted to talk here, is to engage with the synthesis community and hear what they think about this project: what could we do with it, and what extra things would they find interesting? We also want to engage with practitioners, so we specifically want to do some user testing to see what they would want from a tool like this, because at the moment it's a work in progress and we realize it's a bit ugly and not very user-friendly. We also want to add more data so you can do more meta-analyses, and I'd be really interested in collaborators working with us on this. This is also available as a preprint, which you can find here, and I'll share information about it later.

Finally, I just want to thank Gorm Shackelford, who did the majority of the work on this project, Bill Sutherland, who is my boss at Conservation Evidence, and Millie Hood, who has also been collaborating with us, as well as the rest of the Conservation Evidence team and BioRISC, through which I'm funded by the David and Claudia Harding Foundation. And finally I just want to say thanks to you for watching. I'd be really happy to discuss this further, so if anyone wants to drop me a message or an email, I'd be happy to talk later. Okay, thanks very much.