Hi everyone, and welcome to the closing session of EsmarConf 2023. We hope you've enjoyed this week as much as we have. It's been a fascinating set of presentations, workshops, tutorials and panel discussions, and I really hope you've enjoyed them as much as I have. I mainly wanted to spend this final session saying thank you. A huge thank you to all of you for tuning in, whether you're watching live, catching up later this week or the week after the conference, or watching later in 2023 or beyond. Thank you so much for coming along and taking part; it wouldn't be the same without you.

I also wanted to go through some of the numbers again. In the end we had 27 presentations, 6 special sessions, 10 workshops, 9 panel discussions, 59 tutorials and more than 150 organisers and presenters. Just take that in: that's a huge number of people, who together provided in excess of 50 hours of presentations. It's a huge collective effort, and it goes a long way towards making your lives as evidence synthesists and meta-analysts easier by increasing access to, and understanding of, tools and frameworks for evidence synthesis and meta-analysis, much of which relates to working in R. As in the last couple of years, EsmarConf represents a huge amount of effort, and I really can't overstate or thank enough all of the people who've been involved. That includes a large organising team: thank you so much to all of you; this wouldn't have happened without you. Thanks also to our workshop organisers, who together provided workshops ranging from 90 minutes up to, I think, 6 or more hours in Wolfgang's case.
These interactive workshops required a huge amount of planning and effort, so thank you so, so much; we're really excited about this important resource. You may remember that Alison Bethel's workshop from last year is one of the most-accessed videos from EsmarConf 2022, so we know that the much-expanded workshop programme we've been able to provide this year, thanks to this group's effort, is going to continue to be really popular. If you haven't checked out these workshops yet, please do, and if you have, I hope you'll join me in sending some virtual thanks. If you want to express your thanks on Twitter, these are the people to tag.

I also wanted to thank our funders again. We were still provided with funding from Code for Science and Society this year, which helped us to provide bursaries for caregiver and resource-constrained support, so thanks for that final pot of money that we've been able to spend this year. Thanks also to our donors over the last year or so, and to all of you who registered and were kind enough to provide a financial donation during registration. It's a huge amount of support, and it goes towards making next year as accessible and equitable as possible. So a huge thanks. If you've appreciated this year and didn't donate when you registered, please do feel free to donate: you can visit opencollective.com/esmarconf and provide a donation there. If you'd rather register again and provide a donation that way, you can do that at any time, as registration will remain open throughout the year; that will provide you with an invoice you can claim back for institutional financial support, if you want to and have the budget for it. Thank you so much.

I wanted to end by talking about EsmarConf 2024. I do think EsmarConf 2024 is going to happen.
It's been a huge amount of effort to get EsmarConf 2023 out, but I think we're still keen, so it will most likely happen in spring 2024; keep an eye out for announcements over the summer. We hope for more presentations, more workshops, more tutorials and maybe more people getting engaged as well; only time will tell. If you found it enjoyable, please do spread the word and help us grow the EsmarConf family even more: we want to see more of everything, basically. We'll also be sending out a survey to everybody who registered, to ask what you think we could do better, and what you liked, so that we can continually improve. If you want to provide specific anonymous feedback, you can do that by following this link: bit.ly/esmarconf-feedback. As I said, please consider donating. And if you want to, you can join the EsmarConf team: you can find us on Twitter at @ESHackathon and send us a DM or tag us in a message if you're keen to get involved, or send us an email at ESHackathon at gmail.com or Neal Haddaway at gmail.com. And as I said before, spread the word.

Please do remember that all of the recordings from the last three years (EsmarConf 2021, EsmarConf 2022 and this year) will be fully online on our YouTube channel, free forever, so you can watch them and catch up whenever you want. The EsmarConf 2023 videos will also be added to the database of videos on the EsmarConf website, esmarconf.org. There you can search across the videos, either through the programme or by specific words, titles or abstracts; you can filter by stage of the review process and by type of tool; and you can find all of our workshops, tutorials, panel discussions and presentations across all three years. We'll be adding this year's as soon as possible.
So do continue to use that resource if you want to come back and find videos on a specific topic or a specific package, and spread the word as well: everybody is free to use those videos however they want. And finally, closing up the session and closing up EsmarConf is Elie Akl from the American University of Beirut Medical Center, who's going to tell us a bit more about living systematic reviews. Over to you, Elie.

Hello, my name is Elie Akl. I'm a professor of medicine at the American University of Beirut in Lebanon. It is my real pleasure and honour to be presenting today at the 2023 EsmarConf. The question I will be addressing is whether living evidence syntheses have delivered on their promise, and I will try to make the case that yes, to some extent they did. However, they could deliver on their full potential if we can improve their processes and their tools while complying with the principles of open synthesis. I have no financial conflicts of interest to declare; I do have some intellectual conflicts of interest related to some of the content I will be sharing today. After talking about the concept of living systematic reviews, I will review the living systematic review work done during the pandemic, and end with a discussion of challenges and potential solutions.

In this 2017 paper we discussed the why, what, when and how of living systematic reviews. In terms of the what: living systematic reviews are systematic reviews that are continually updated, incorporating relevant new evidence as it becomes available. You can tell from that definition that it is on the optimistic side, promising to continually update the search and incorporate the evidence as soon as it becomes available; something we realised after the pandemic experience is that it is not as easily done as said. In terms of the when, we had defined three conditions for taking a systematic review into a living model.
One is that the question is of sufficient importance to decision making; this is exactly why living systematic reviews became à la mode during the pandemic, because policymakers had so many questions of importance to them and wanted quick answers. The second condition is that the certainty in the existing evidence is low or very low, meaning that adding new evidence will likely or potentially improve the certainty of the evidence. And the third is that there is new research evidence in the pipeline. As for the how, this is one representation of it; you don't have to worry about the details. I'm just trying to show that conducting living systematic reviews can be a very complex process.

The pandemic was really a stress test for the evidence synthesis community, as it was for many other communities: the trialist community, the guideline methodologist community and the policymakers. So how did the evidence synthesis community respond? They responded by deploying many of the tools that had been in development for a while, like network meta-analysis, artificial intelligence and crowdsourcing. They responded with rapid methodology, because when the pandemic hit, policymakers needed quick answers, so we used rapid systematic review methodology. But we also used the living model, and the living model helped us, because there was a deluge of information coming out on a regular basis; we needed the living process to make sure the evidence syntheses stayed updated. So yes, living evidence syntheses were essential for the success of living guidelines; without living evidence syntheses, we could not have delivered living guidelines. And there were many living guidelines during the pandemic, developed by organisations like the World Health Organization and many other professional societies, advising clinicians and public health workers on how to deal with the pandemic.
However, living evidence syntheses have not reached their full potential, and I'll give you some data on this. We recently published this paper on the life and death of living systematic reviews. The study was not restricted to COVID-19 living systematic reviews, but many of the included reviews addressed the pandemic. To give you an idea of the methodology: in terms of the availability of a protocol, a little more than a third of the living systematic reviews did not have a protocol mentioned or reported. More than 30% did not assess the certainty of the evidence. More than half did not use GRADE tables, which are standardised tables presenting the statistical information alongside the certainty of the evidence. And only 4% engaged stakeholders. These are indicators that the methodology was not as optimal as it could have been.

When you look at the peer review process, about half of the protocols were peer reviewed, but the percentages drop for what we call the base version of the living systematic review, which is the first version published. There was no indication of peer review (0%) for the partial updates, and for the full updates, which are fuller reports of the living systematic review, only about a third showed evidence of peer review.

Interestingly, we also explored how reliable the updating was in terms of sticking to the planned update period. We calculated the ratio of the actual update period to the planned update period. There was variability in the planned update periods, but as you can see, that ratio was very close to one, meaning that whatever they promised in terms of how frequently they would update, they were able to deliver, maybe with a slight delay: I would say about a 12% delay in the update period. And I would say that's very impressive.
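The two timeliness measures discussed in this part of the talk reduce to simple ratios. As a minimal sketch, with invented illustrative numbers rather than the study's data, assuming each review records a planned update interval and its publication history:

```python
# Hypothetical records (NOT the study's data). Each holds a planned
# update interval in days, the observed intervals between published
# updates, and the days elapsed since the last published update.
reviews = [
    {"planned": 90, "actual_intervals": [95, 100, 92], "since_last": 300},
    {"planned": 30, "actual_intervals": [35, 33], "since_last": 40},
    {"planned": 60, "actual_intervals": [61, 70, 65, 58], "since_last": 200},
]

def update_ratio(review):
    """Mean actual update interval over the planned interval.
    A value near 1 means updates arrived roughly on schedule."""
    intervals = review["actual_intervals"]
    return (sum(intervals) / len(intervals)) / review["planned"]

def staleness_ratio(review):
    """Time elapsed since the last update over the planned interval.
    Large values suggest the review has quietly stopped updating."""
    return review["since_last"] / review["planned"]

# Share of reviews where more than three planned periods have elapsed
# with no new update (the study reported roughly 40% for real reviews).
stalled = sum(staleness_ratio(r) > 3 for r in reviews) / len(reviews)
```

Here an update_ratio near 1 corresponds to the "delivered roughly on schedule" finding, while a staleness_ratio above 3 flags reviews that appear to have stopped updating without notice.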
But that is for the actual updates. What we analysed next was the time elapsed since the last published update. When we took the ratio of the time elapsed since the last update to the planned update period, we saw more than a doubling, meaning that if they had promised to update within, say, three months of the last update, on average more than double that time had elapsed. To make it clearer, we have this graphical representation. I should have said that each line represents one living systematic review, and each dot is the publication of one version of that review; the midline is when the last update was published, so it represents the latest updated version. Here, as I said previously, you can tell there was regular updating of the living systematic reviews; this is where the ratio of one comes in, as for most of them the regular update interval was very close to the planned period. But for many of them, a significant period of time had since elapsed without any new update, and for about 40% of those living systematic reviews, three times the planned update period had elapsed with no update published. It's interesting that none of those living systematic reviews gave any indication in their latest version that they might not update, or might have to stop updating. So the conclusion is that people did really well whenever they updated, but at some point there came a time when they stopped updating, without any notice.

This is another graph from that study, showing the quality of those systematic reviews assessed with the AMSTAR instrument, acknowledging that AMSTAR is not designed specifically for living systematic reviews.
But you can see that on many of those questions, the percentage of living systematic reviews that met them was not very impressive. And when we stratified the results by rapid versus non-rapid living systematic reviews, the blue bars, representing the rapid ones, show lower quality for those living systematic reviews conducted in the rapid mode.

So, the challenges. The first is what we describe as living fatigue. This is where people kept updating on a regular basis, then at some point went missing in action; we're calling this the living fatigue. From our own experience, we can tell that people just get kind of sick of it: living systematic reviews are a lot of work, and people had diverted their attention from other projects and needed to go back to them. So that's the living fatigue. And then, as we've shown, there's a challenge in maintaining the quality of those living systematic reviews.

The other challenge we've seen is that the flow of evidence from trialists to evidence synthesisers to guideline developers was not as smooth as we wanted it to be. As guideline developers, we would learn that a new trial relevant to one of our recommendations had been published only through a press release. Press releases are very flashy: big news, big impact. And then it took months until the trials were actually published, for the systematic reviews to assimilate them, and for the guideline developers to use them in developing a recommendation. That was a real challenge: the flow of evidence did not allow appropriate translation of the evidence into recommendations, and into influencing practice. Actually, practice was just moving ahead, because as soon as clinicians heard the press release they changed their practice, and the recommendations came months later to catch up with the changing practice.
So the potential solutions are: first, more pragmatic models of living evidence synthesis; better organisation of the evidence universe, which I will describe a little; the principles of open synthesis, which are extremely important; and better collaboration between the actors in health.

This is our reconstruction of the living systematic review process. At the top you see the standard systematic review methodology, which is linear: you develop a protocol, you run the search, you go through the different steps, analysis and dissemination, and at one point you might update. The living systematic review, which you see over here, is more of a cyclical approach: it's the coil concept, where you go through different cycles. These cycles require a lot of coordination in terms of the processes, understanding the evidence coming out, publishing the evidence, and ensuring adequate access to the latest version of the systematic review; we've seen a major challenge with people landing on a version that is not the latest. All of these processes require improvements in the flows, and in the tools for extracting the evidence, analysing it, managing it, sharing it with the public, and so on.

In terms of what we call the evidence universe: we published this paper, just as the pandemic was starting, as part of a series about the future of the evidence ecosystem, and we talked about evidence synthesis 2.0. One of the major concepts in that paper is that currently, once new evidence is generated, it is dumped into the universe of evidence, meaning it is put in a database with some MeSH terms and some tags; then, when people have to search for evidence relevant to a PICO question, they end up with tons of literature to go through.
A more ideal approach would be to organise that evidence space into subspaces, so that as soon as a study or publication comes out, instead of just throwing it into that large database, there would be, for each PICO question or each population, a space where you can say: this is where this study fits. Then, if at some point someone wants to conduct a systematic review, they can easily go to the cell that contains all the relevant studies and, without having to run a broad, nonspecific search, just pick those studies and move on to the next steps. This work would need some technology tools and an appropriate platform to organise that evidence space and make it easier to search.

Probably another important, or even more important, concept is that of open synthesis, which became even more important during the pandemic. This is a nice graphic that represents all the components of open synthesis, from open collaboration to open discovery and open methods, along with the availability of free accessible tools, open data, open code, open access, obviously open peer review, and transparency about declarations of interest. Having these would be important. I mentioned earlier the problem we faced during the pandemic as guideline developers in getting data shared by trialists in order to generate timely recommendations; having these concepts in place would help with that free flow of information, in a way that benefits the intended populations.

I'll end by talking about the importance of collaboration between the major actors, and there are many: the knowledge users; the knowledge generation community, that is, the trialists; the knowledge synthesis community, the systematic reviewers; the guideline development community; and also the knowledge translation community.
These actors have to work together to make sure that the end goal, serving the community, society and the public with guidance, is achieved. Currently, these different actors have disconnected goals, and each goal is really just to publish their own product. Again, take the example of the trialists during the pandemic: they get the paper out, with significant delays; they publish it in a highly cited, high-impact-factor journal; and they celebrate. "We've published in Journal X", and that's it. They get credit for doing that; they don't get credit for handing the information on to the synthesisers. And this is really important. So what we are calling for is that these different actors share a common goal, which is delivering the needed knowledge to inform decision making; that is really when they can declare victory. The trialists should be able to declare victory only when they have made sure their data has been delivered in the right way to inform decision making. We heard a lot during the pandemic that "this is not a sprint, this is a marathon"; what I would say is that this is a relay race. People have to work together and hand that information on, and if you've watched a relay, that handover is not easy. This is where the work of the developers of tools and processes is really important, to make sure this happens. At this point I would like to thank you for your attention. I hope that next time we can meet in person and discuss further how we can move things forward for the good of everyone. Thank you so much.

Thank you so much, Elie. That was a really fascinating insight into living systematic reviews. Well, that's it from us for this week and this year. Thank you so much for tuning in. We hope you've had a thoroughly interesting and fascinating week. Please do continue to dive into the materials from this year, and indeed all of the materials from last year as well.
We hope you continue to enjoy them and find them useful. Thanks to everyone who took part and helped make this happen. We look forward to seeing you in 2024. That's it for now. Bye-bye.