Hello, my name is Elie Akl; I am a professor of medicine at the American University of Beirut in Lebanon. It is a real pleasure and honor to be presenting today at the 2023 ESMAR conference. The question I will be addressing is whether living evidence syntheses have delivered on their promise. I will try to make the case that, yes, to some extent they have; however, they could deliver on their full potential if we improve their processes and tools while complying with the principles of open synthesis. I have no financial conflicts of interest to declare; I do have some intellectual conflicts of interest related to some of the content I will be sharing today.

After talking about the concept of living systematic reviews, I will review the living systematic review work done during the pandemic and end with a discussion of challenges and potential solutions.

In a 2017 paper we discussed the why, what, when, and how of living systematic reviews. In terms of the what, living systematic reviews are systematic reviews that are continually updated, incorporating relevant new evidence as it becomes available. You can tell from that definition that it is on the optimistic side, promising to continually update the search and incorporate the evidence as soon as it becomes available, something we realized after the pandemic experience is not as easily done as said.

In terms of the when, we had defined three conditions for taking a systematic review into a living mode. One is that the question is of sufficient importance to decision making; this is exactly why living systematic reviews became a la mode during the pandemic, because policymakers had so many questions of importance to them and wanted quick answers. The second condition is that the certainty of the existing evidence is low or very low, meaning that adding new evidence will likely improve the certainty of the evidence.
And the third is that there is new research evidence in the pipeline. As for the how, this is one representation of it; you do not have to worry about the details, I am just trying to show that conducting living systematic reviews can be a very complex process.

The pandemic was really a stress test for the evidence synthesis community, as it was for many other communities, like the trialist community, the guideline methodologist community, and the policymakers. So how did the evidence synthesis community respond? By deploying many of the tools it had been developing for a while, like network meta-analysis, artificial intelligence, and crowdsourcing. We started with the rapid methodology, because when the pandemic hit, policymakers needed quick answers, so we used the rapid systematic review methodology; but we also used the living methodology. The living approach helped us because there was a deluge of information coming out on a regular basis, and we needed the living process to make sure the evidence syntheses stayed updated.

Living evidence syntheses were essential for the success of living guidelines; without living evidence syntheses, we could not have delivered living guidance. There were many living guidelines during the pandemic, developed by organizations like the World Health Organization but also by many professional societies, to advise clinicians and public health workers on how to deal with the pandemic.

However, living evidence syntheses have not reached their full potential, and I will give you some data on this. We recently published a paper about the life and death of living systematic reviews. The study was not restricted to COVID-19 living systematic reviews, but many of the included reviews addressed the pandemic.
Just to give you an idea of the methodology: in terms of the availability of the protocol, about a third of the living systematic reviews did not mention or report a protocol, and 30% did not assess the certainty of the evidence. More than half did not use GRADE tables, which are standardized tables for presenting the statistical information along with the certainty of the evidence, and only 4% engaged stakeholders. These are indicators that the methodology was not as optimal as it could be.

Looking at the peer review process: about half of the protocols were peer reviewed, but the percentages go lower for what we call the base version of the living systematic review, which is the first version published. There was no indication of peer review for the partial updates, and for the full updates, which are fuller reports of the living systematic review, only about a third had evidence of peer review.

Interestingly, we explored how reliable the updating was in terms of sticking to the planned period to update, so we calculated the ratio of the actual period to update over the planned period to update. There was variability in the planned periods, but as you can see, that ratio was very close to one, meaning that whatever the teams promised in terms of how frequently they would update, they were able to deliver, maybe with a slight delay, around 12%, in the period of update, and I would say that is very impressive. This is for the actual updates. What we analyzed next was the time period since the last published update, taking the ratio of how much time had elapsed since the last update over the planned period of update.
We saw more than a doubling of the period, meaning that if a team had promised to update within three months, on average more than double that time had elapsed since the last update. To make it a bit clearer, we have this graphical representation: each line represents one living systematic review, each dot is the publication of one version of that review, and the midline is when the last update was published. As I said previously, you can tell that there was regular updating of the living systematic reviews; for most of them, the update interval was very close to the planned period of update, which is where the ratio of one comes in. But for many of them, a significant period of time had elapsed since the last update without any new update. For about 40% of those living systematic reviews, three times the planned period of update had elapsed and no update had been published. And it is interesting that none of those living systematic reviews gave any indication in the latest version that they might not update or might have to stop updating. The conclusion is that teams did really well whenever they updated, but at some point there came a time when they stopped updating without any notice.

This is another graph from that study, showing the quality of those systematic reviews, assessed with the AMSTAR instrument, acknowledging that AMSTAR is not designed specifically for living systematic reviews. You can see that on many of those items, the percentage of living systematic reviews that met them was not very impressive.
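As an aside, the two timeliness ratios described above can be made concrete with a small illustrative script. The review dates below are hypothetical, and the function is my own sketch of the study's two metrics, not its actual analysis code:

```python
from datetime import date

def update_ratios(planned_days, update_dates, as_of):
    """Two timeliness ratios for a living systematic review:
    - actual/planned: mean interval between successive published
      versions, divided by the planned update period;
    - elapsed/planned: time since the last published update,
      divided by the planned update period."""
    intervals = [(b - a).days for a, b in zip(update_dates, update_dates[1:])]
    actual_over_planned = (sum(intervals) / len(intervals)) / planned_days
    elapsed_over_planned = (as_of - update_dates[-1]).days / planned_days
    return actual_over_planned, elapsed_over_planned

# Hypothetical review promising quarterly (90-day) updates.
versions = [date(2021, 1, 1), date(2021, 4, 5), date(2021, 7, 10)]
actual, elapsed = update_ratios(90, versions, as_of=date(2022, 4, 1))
# actual  ~ 1.06: updates were roughly on schedule while they lasted
# elapsed ~ 2.94: almost three planned periods have passed with no update
```

An actual/planned ratio of one means updates arrived exactly as promised; an elapsed/planned ratio near three, with no stated plan to retire the review, is the pattern flagged in about 40% of the reviews.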
We stratified the results by rapid versus non-rapid living systematic reviews; the blue bars, which represent the rapid ones, show lower quality for those living systematic reviews conducted in the rapid mode.

So, the challenges. The first is what we describe as the living fatigue: people kept updating on a regular basis, and then at some point they went missing in action. From our own experience, we can tell that people just get tired of living systematic reviews; they are a lot of work, and teams had diverted attention from other projects to which they needed to return. Then, as we have shown, there is a challenge in maintaining the quality of those living systematic reviews.

The other challenge we have seen is that the flow of evidence from trialists to evidence synthesizers to guideline developers was not as smooth as we wanted it to be. As guideline developers, we would learn that a new trial relevant to one of our recommendations had been published only through a press release, and press releases are very flashy: big news, big impact. It then took months until the trials were actually published, for the systematic reviews to assimilate them, and for the guideline developers to use them in developing the recommendations. That flow of evidence was a real challenge; it did not allow appropriate translation of the evidence into recommendations and into influencing practice. Practice was just moving ahead, because as soon as clinicians heard the press release they were changing their practice, and the recommendations came months later to catch up with the changing practice.
So the potential solutions are, first, more pragmatic models of living evidence synthesis; second, a better organization of the evidence universe, which I will describe a little bit; third, the principles of open synthesis, which are extremely important; and fourth, better collaboration between the actors in health.

This is our conception of the living systematic review process. On the top, you see that the standard systematic review methodology is linear: you have a protocol, you run the search, you go through the different steps of analysis and dissemination, and at one point you might update. The living systematic review, which you see here, is more of a cyclical approach, the coil concept, where you go through different cycles. Those cycles require a lot of coordination in terms of the processes: understanding the evidence that is coming out, publishing the evidence, and ensuring adequate access to the latest version of the systematic review. We have seen a major challenge with people landing on a version that is not the latest, and all of these processes require improvements in the workflows and in the tools for extracting the evidence, analyzing and managing it, sharing it with the public, and so on.

In terms of what we call the evidence universe: we published a paper, just as the pandemic was starting, as part of a series about the future of the evidence ecosystem, in which we talked about evidence synthesis 2.0. The major concept in that paper is that currently, once new evidence is generated, it is dumped into the universe of evidence, meaning it is put in a database with some MeSH terms and some tags; and then, when people have to search for evidence relevant to a PICO question, they end up with tons of literature to go through.
The more ideal approach would be to organize that evidence space into subspaces, so that as soon as a study or publication comes out, instead of just being thrown into that large database, there is, for each PICO question or for each population, a space where you can say: this is where this study fits in. Then, if at some point someone would like to conduct a systematic review, they can easily go to the cell that contains all the relevant studies and, without having to do a search, just pick those studies and proceed with the next steps. This is work that would need some technology tools and an appropriate platform to organize that space of evidence and make it easier to search.

Probably another important, or even more important, concept is open synthesis, which became even more important during the pandemic. This is a graphic that represents all the components of open synthesis, from open collaboration to open discovery to open methods, the availability of freely accessible tools and data, open code, open access obviously, open peer review, and being transparent about the populations of interest. Having these would be important. I mentioned earlier the problem we faced during the pandemic as guidance developers in terms of having the data shared by the trialists in order to generate timely recommendations. Having these concepts in place would help with that free flow of information in a way that would benefit the intended populations.

We end by talking about the importance of collaboration between the major actors, and there are many actors: the knowledge users, the knowledge generation community, which is the trialists, the knowledge synthesis community, the guidance community, and also the knowledge translation community.
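To make the idea of PICO-indexed evidence subspaces described above a bit more concrete, here is a minimal sketch in code. The class, the key structure, and the study identifiers are all my own illustration, not any published schema or platform:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class PICO:
    """Illustrative key: population, intervention, comparator, outcome."""
    population: str
    intervention: str
    comparator: str
    outcome: str

class EvidenceSpace:
    """Evidence organized into PICO 'cells' rather than one flat database."""
    def __init__(self):
        self._cells = defaultdict(list)

    def file_study(self, pico, study_id):
        # Done once, when the study is published, so later reviewers
        # do not have to rediscover it through a de novo search.
        self._cells[pico].append(study_id)

    def cell(self, pico):
        # A review team goes straight to the cell for its question.
        return list(self._cells[pico])

space = EvidenceSpace()
q = PICO("hospitalized adults with COVID-19", "corticosteroids",
         "standard care", "mortality")
space.file_study(q, "trial-A")   # hypothetical study identifiers
space.file_study(q, "trial-B")
# space.cell(q) now returns both study IDs, retrieved without a search
```

The design point is that the filing cost is paid once, at publication time, by the knowledge generators, rather than repeatedly by every synthesis team that later searches for the same question.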
These actors have to work together to make sure that the end goal, serving the community, the society, and the public with guidance, is achieved. Currently, these different actors have disconnected goals, and each one's goal is really to publish their own products. Again, take the example of the trialists during the pandemic. They get the paper out, with significant delays; they publish it in a highly cited journal with a high impact factor; and they celebrate: this is their achievement, "We published in journal X." And that's it. They get the credit for doing that; they do not get the credit for handing this information to the evidence synthesizers, and this is really important. So what we are calling for is that these different actors have a common goal, which is delivering the needed knowledge to inform decision making; that is really when they can declare victory. The trialists should be able to declare victory only when they have made sure that their data has been delivered in the right way to inform decision making.

We heard a lot during the pandemic that "this is not a sprint, this is a marathon." What I would say is that this is a relay race: people have to work together, they have to hand over that information, and if you have watched a relay, you know that handover is not easy. This is where the work of the developers of tools and processes is really important, to make sure this is happening.

At this point, I would like to thank you for your attention. I hope that next time we can meet in person and discuss further how we can move things forward. Thank you, for the good of everyone. Thank you so much.