All right, it's time for our last speaker for this session. I'd like to invite Kaitlyn Hair up to talk about systematic online living evidence summaries. Welcome, Kaitlyn.

Hi everyone, it's great to be here, and thanks so much to those of you who stayed for this last talk of MetaScience. Thanks for sticking around. I'm Kaitlyn Hair, a postdoc at the University of Edinburgh, where I work in the CAMARADES research group, and today I'm going to be talking about systematic online living evidence summaries, or SOLES for short. In my talk I'll say a little about evidence synthesis and the current limitations of systematic reviews, then talk about how SOLES can overcome some of those barriers and hopefully tackle the challenges we face. I'll also show how SOLES projects let you go beyond evidence synthesis and explore the data in ways you couldn't without automation tools.

I'm starting my talk from an assumption: that scientific advancement should proceed based on what we already know. I think most of the people in this room would agree with that statement. But actually figuring out what we already know is a huge problem, and it's a science in itself, which is why we're here. It requires evidence synthesis techniques such as systematic reviews. Over a number of years of doing preclinical meta-research and systematic reviews in that area, we've highlighted three key limitations of traditional systematic review approaches.

The first is the struggle to keep up with the pace of evidence generation. Systematic reviews are resource- and time-intensive, as we all know, and especially in research-intensive areas there are so many new papers published every year that it's really difficult to keep up. A few examples to illustrate this: research on Cochrane systematic reviews has shown that over 50% were published more than two years after the original protocol. That doesn't actually seem that long to me, but I suppose it depends on how many papers are in your review; some of the systematic reviews in our group have definitely taken a lot longer than that. Another piece of research showed that over 85% of those reviews hadn't been updated for more than five years. And updating a systematic review can sometimes be just as difficult as starting from scratch, especially when so much new evidence has been published since the original review.

The second limitation is a failure to capture all of the available evidence. We've found this particularly in our own research area, where we focus on reviewing data from animal experiments and in vitro experiments. Anyone who has read those sorts of articles will know there isn't just one experiment in a paper; sometimes there are four, five, six. And there isn't just one PICO element: there are many outcome measures, sometimes multiple treatments, sometimes even multiple animal models. And when authors come to write these papers up, not all of that information is going to be present in the title, the abstract, or even the keywords.
So we've found that by relying on databases like PubMed and Web of Science, we're actually missing huge chunks of the relevant literature, because an outcome is buried somewhere in the full text of a paper. We've also shown that when screening titles and abstracts, we sometimes exclude a significant proportion of relevant studies, because it isn't clear from the title and abstract alone that a study would be relevant to our review.

Thirdly, I think there's a lack of coordination, and this manifests in a couple of ways. First, there might be too many systematic reviews in one area and not enough in another. Second, and I think this is quite interesting given that we're all interested in reducing research waste, one research group will extract lots of data from hundreds of different studies and then perhaps not deposit that data anywhere. Then another systematic review group somewhere else, doing a review that isn't exactly the same but overlaps with the original, has to extract all of that data from scratch again. So I think we sometimes need to work in a more coordinated way towards better evidence synthesis approaches, and towards actually covering all of the topics we want to cover.

A number of things have been proposed to improve evidence synthesis, and we heard about some of them in the last talk. One is living systematic reviews, which can harness automation tools and crowdsourcing to produce reviews that are updated in real time, or perhaps monthly or yearly, much more easily than we currently update systematic reviews. Another is evidence gap maps, which help us visualize the research landscape: those bubble plots where a larger bubble indicates lots of evidence in one research area and a smaller bubble indicates less evidence in another, usually framed around a broad research question.

Taking these two concepts together, over the past few years our research group has been working on building SOLES projects, which borrow from both. Essentially, we bring in all the evidence from a really broad research domain, for example a disease like Alzheimer's disease. We tag it, categorize it, curate it, feed it into a database, and visualize it in an interactive web application for research users to interrogate. We have a preprint out about this that I'd recommend reading if you want the technical details I don't have time to go into. At a very basic level, a systematic search is performed for each SOLES project on a weekly basis, using application programming interfaces to pull in recent citations. These are automatically de-duplicated, and we have machine learning classifiers for each SOLES project that screen for relevance. Then a range of natural language processing tools tag each study for things like reporting of risk of bias, PICO elements (outcome measures, interventions, populations), and aspects of transparency, such as whether there was an open data statement or whether the paper was open access.
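To make that weekly cycle concrete, here is a minimal sketch of what one update could look like. This is an illustration only, not the actual SOLES codebase: the `fetch_new_citations` helper, the DOI-based deduplication, and the pre-trained relevance classifier are all assumptions standing in for the real search, de-duplication, and screening tools.

```python
# Illustrative sketch of a weekly SOLES-style update cycle (hypothetical,
# not the real implementation). Assumes a fetch_new_citations() wrapper
# around a bibliographic API and a pre-trained relevance classifier.
from dataclasses import dataclass, field

@dataclass
class Citation:
    doi: str
    title: str
    abstract: str
    tags: dict = field(default_factory=dict)

def deduplicate(citations):
    """Drop duplicate records, keyed on DOI (a simplification --
    real de-duplication also matches on title, author, and year)."""
    seen, unique = set(), []
    for c in citations:
        if c.doi not in seen:
            seen.add(c.doi)
            unique.append(c)
    return unique

def weekly_update(fetch_new_citations, relevance_model, taggers, db):
    """One update cycle: search -> de-duplicate -> screen -> tag -> store."""
    candidates = deduplicate(fetch_new_citations())
    for c in candidates:
        text = c.title + " " + c.abstract
        # Machine learning classifier screens each record for relevance.
        if relevance_model.predict([text])[0] != "include":
            continue
        # NLP taggers annotate PICO elements, risk-of-bias reporting,
        # open data / open access status, and so on.
        for name, tagger in taggers.items():
            c.tags[name] = tagger(c)
        db.append(c)  # stand-in for writing to the SQL database
```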
This annotated dataset is put into a SQL database, which sits behind the interactive web application. We've already done this across a number of research areas: COVID-19, stroke, Alzheimer's disease, and now lots of different mental health topics. One of the more recent ones was in pesticides, where we worked with an EU-funded consortium called SPRINT, who are interested in the harms of pesticides to human health, animal health, and the environment. That was a really rewarding project, because the SOLES we built allowed them to do rapid reviews in their area, which they had been struggling to do simply because of the sheer volume of evidence they had to sift through.

So SOLES hopefully allow us to keep up with the literature; this is a screenshot from one of the SOLES showing how many new citations have been uploaded each week. They also allow us to look at a visual summary of the evidence by different PICO elements, so you can see, for example, how many studies look at a certain outcome measure or a certain treatment, and there are all sorts of ways to visualize this. Importantly, we look at the full text of each study, so we don't have the title-and-abstract problem. We also have evidence gap maps embedded within the SOLES platforms. In this example (you probably can't see it) the y-axis shows outcome measures, the behavioral outcome measures that are common in animal models, and the x-axis shows all of the different interventions, so you can see where the gaps are; a sketch of this kind of cross-tabulation follows below.

I was really inspired by the previous talk, because I wish we could do that: I wish we could get the data out of these publications, because that would really take SOLES to the next level. Unfortunately there's too much variation in how figures are reported right now, but maybe in the future. So at the moment SOLES can't really replace systematic reviews or meta-analyses. The idea is more that they act as an accelerated starting point for people doing systematic reviews: you can search the SOLES database, filter studies, and download a curated citation list that will hopefully contain more of the relevant research you're interested in. And since not every research group doing systematic reviews has access to automated tools, we wanted to do a bit of the work for the wider community.

Going further, I just want to quickly mention three extra things SOLES can do that we hadn't originally intended. The first is that they can help benchmark research improvement: because we look at things like risk-of-bias reporting and transparency, we're starting to track trends over time and getting some quite interesting data about that. The second is that they can be used for community building and coordination. As I mentioned with the SPRINT project earlier, we actually tracked the project's progress on the SPRINT SOLES application, and we had a leaderboard; going forward, I'd like to think about how this could be used to build communities around systematic review in given research areas.
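As an illustration of how an evidence gap map can fall out of the annotated database, here is a minimal sketch using an in-memory SQLite table. The `study_annotations` table, its columns, and the toy rows are invented for the example and don't reflect the actual SOLES schema.

```python
# Illustrative gap-map query over a toy version of the annotated database.
# The schema and the data are invented for this example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE study_annotations (
        doi          TEXT,
        outcome      TEXT,   -- tagged outcome measure
        intervention TEXT    -- tagged intervention
    )
""")
conn.executemany(
    "INSERT INTO study_annotations VALUES (?, ?, ?)",
    [
        ("10.1000/a", "morris water maze", "drug X"),
        ("10.1000/b", "morris water maze", "drug X"),
        ("10.1000/c", "open field test",   "drug Y"),
    ],
)

# Each (outcome, intervention) count becomes one bubble in the gap map:
# large counts are well-studied cells, missing combinations are gaps.
rows = conn.execute("""
    SELECT outcome, intervention, COUNT(DISTINCT doi) AS n_studies
    FROM study_annotations
    GROUP BY outcome, intervention
    ORDER BY n_studies DESC
""").fetchall()

for outcome, intervention, n in rows:
    print(f"{outcome} x {intervention}: {n} studies")
```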
Most recently, just in the past few months, we've started thinking about how we might merge SOLES together and build meta-SOLES across different research areas, because what we quite often find is that the same outcomes are measured across different research silos. If you're interested in Alzheimer's disease, for instance, the way a model of Alzheimer's disease is induced might be quite similar to how a model of schizophrenia is induced elsewhere. There are parallels we don't even realize, because we're so focused on our own research domain when doing a systematic review. So we've been putting the SOLES together and have started to map the shared compounds, gene targets, and underlying biological pathways across different research areas as a means of future hypothesis generation. That could be used for things like drug repurposing and all sorts of hypotheses around that.

So that's my last slide. I just want to thank all of you for listening and the organizers for the chance to talk today. I also want to say a massive thank you to the CAMARADES group in Edinburgh and across the world, and to all of the funders who made this work possible. Finally, I forgot to put my email on the slide, but there aren't that many Kaitlyn Hairs in the world, so if you just Google me you can find my email, or come and talk to me afterwards. I'd love to hear what you think about all this. Thank you.

Thank you. I think we have time for one quick question.

Fantastic talk, Kaitlyn, thank you so much. One of the things we've seen some studies about is the impact of retractions of papers on the conclusions of systematic reviews, which is not addressed at the moment. Do your tools allow us to start tackling that problem?

That's a really good question, and we've grappled with this as well. At the moment, Retraction Watch is the main repository for all of this, but we haven't been able to work with them directly or get an API to access their database. So we've done some manual checking, and one of the PhD students in the group is actually quite focused on this; she's been working on ways to automatically detect retractions, for example flagging a paper as retracted if the retraction appears in the title or somewhere else in one of the databases. So, long story short, it's in progress; we're aware of it and we want to include it in SOLES soon.

Thank you, Kaitlyn. Thank you to all of our speakers in this session and in all the sessions, and thank you all for being here.
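Returning to that last question, a minimal sketch of the kind of title-based retraction flagging described in the answer might look like the following. The matching patterns and record format are assumptions for illustration, not the group's actual tooling, and a real check would also consult a dedicated retraction database.

```python
# Illustrative title-based retraction check (an assumption for
# illustration, not the CAMARADES implementation).
import re

# Phrases that bibliographic databases commonly prepend to retracted
# records or use in retraction notices.
RETRACTION_PATTERNS = [
    r"\bretracted\b",
    r"\bretraction\b",
    r"\bwithdrawn\b",
]

def looks_retracted(title: str, notes: str = "") -> bool:
    """Flag a citation whose title or database notes mention retraction."""
    text = f"{title} {notes}".lower()
    return any(re.search(p, text) for p in RETRACTION_PATTERNS)

print(looks_retracted("RETRACTED: Effects of drug X in a mouse model"))  # True
print(looks_retracted("Effects of drug X in a mouse model"))             # False
```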