Hi, I'm Matteo Mancini, a research fellow at the University of Sussex, and today I will be talking about executable research articles and what value they bring to meta-analyses and systematic reviews.

So let's start with a thought experiment. Think about a generic meta-analysis and the object we expect to find in it. For most people outside the hardcore statistics field, this object would be a table. It is actually a useful table, since it allows us to quickly find information that would otherwise be spread across the paper, but it still has some limitations. These are the same limitations of static reviews and, more generally, of static papers. The first one is about usability and readability, since we need to go sequentially through a table or a section to find the information we are interested in. Another issue is that the statistical analysis is in some way a black box, because even if a lot of details are provided, we still cannot see the actual code that was used to perform the analysis. And finally, as a result, in order to reproduce the analysis we would need to reimplement it from scratch.

So what would overcome all of these limitations? A living, runnable paper, which is another definition of an executable research article, ERA for short. ERAs can embed both text and code, and they can be accessed through a browser, a very common tool. These days they are rendered through computational notebooks, something that is becoming quite popular. For people who are not familiar with notebooks, they are documents able to combine code, visualization, and text in a single place. Most people are familiar with Jupyter, but there are more notebook systems out there; most of them can handle multiple languages, and some of them are even polyglot, meaning they can handle multiple languages in the same notebook.

I'm talking about ERAs because we recently published an interactive meta-analysis in this format; the topic was the validation of MRI biomarkers through histology. We used Python with some bits of R, and we mainly relied on the Plotly library for the interactive visualizations. Let's now go through some of the interactive tools we leveraged to enhance the presentation of our work; minimal code sketches of each figure type follow the walkthrough.

Starting from the literature screening process: we used a Sankey diagram to show how the screening procedure unfolded. We can see how many articles we had at each stage, and we can even see which exclusion criteria applied when moving from one stage to the next.

Once we had defined which studies were suitable, we used treemaps to summarize them. A treemap is a good complement to a table, because it provides the same information but in an organized, hierarchical way: here, first by structure, then by tissue condition, then by species. You can interact with this object, see all the information for each paper, and even follow the link to the paper itself. Another way treemaps can be used is to highlight collateral information: in this example, the colour is proportional to the coefficient of determination, which in our study was the effect size, and the area of each box is proportional to the sample size. So you can spot at a glance studies where the sample size was small but the effect size was large.

Another way to look at this kind of pattern is a bubble plot; since it is an interactive figure, you can still retrieve the information for each paper by hovering over a specific bubble.

Now let's move to the more quantitative plots. We used a mixed-effects model in this paper, and to show the results we used forest plots. Since Plotly does not offer forest plots per se, we built them by combining several scatter traces; again, you can interact with the figure and inspect the results of our modelling, for example the prediction interval. And finally, on top of the mixed model we ran pairwise comparisons, and we used heatmaps to summarize the results in terms of both z-scores and p-values.
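Here is a minimal sketch of how a screening Sankey diagram like ours can be built with Plotly; the stage labels and counts below are made up for illustration, not our actual screening numbers.

```python
import plotly.graph_objects as go

# Hypothetical screening flow: 300 records found, 50 duplicates removed,
# 190 excluded at screening, 60 included.
fig = go.Figure(go.Sankey(
    node=dict(
        label=["Records identified", "After deduplication", "Screened",
               "Included", "Duplicates", "Excluded"],
        pad=20,
    ),
    link=dict(
        source=[0, 0, 1, 2, 2],           # flows between the nodes above
        target=[1, 4, 2, 3, 5],
        value=[250, 50, 250, 60, 190],    # number of articles in each flow
        label=["", "duplicate records", "", "met criteria",
               "failed inclusion criteria"],  # shown on hover, like our exclusion reasons
    ),
))
fig.update_layout(title_text="Screening flow (illustrative)")
fig.show()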
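The treemaps can be produced with plotly.express. Here is a sketch on a made-up study table, nesting by structure, condition, and species, with box area mapped to sample size and colour to the coefficient of determination, as in our second treemap; all column names and values are invented placeholders.

```python
import pandas as pd
import plotly.express as px

# Made-up study table; the columns mirror the kind of fields we summarized.
df = pd.DataFrame({
    "structure": ["white matter", "white matter", "cortex", "cortex"],
    "condition": ["healthy", "pathology", "healthy", "pathology"],
    "species":   ["human", "mouse", "rat", "human"],
    "study":     ["Study A", "Study B", "Study C", "Study D"],
    "r2":        [0.81, 0.45, 0.62, 0.30],   # coefficient of determination (effect size)
    "n":         [12, 8, 20, 15],            # sample size
})

fig = px.treemap(
    df,
    path=["structure", "condition", "species", "study"],  # hierarchy of the boxes
    values="n",          # box area proportional to sample size
    color="r2",          # box colour proportional to effect size
    hover_data=["n", "r2"],
)
fig.show()
```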
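The bubble plot showing the same pattern is essentially a one-liner with plotly.express; hovering over a bubble reveals the study's details. Again, the data here are invented.

```python
import pandas as pd
import plotly.express as px

# Same made-up table as in the treemap sketch, abbreviated.
df = pd.DataFrame({
    "study":   ["Study A", "Study B", "Study C", "Study D"],
    "species": ["human", "mouse", "rat", "human"],
    "n":       [12, 8, 20, 15],            # sample size
    "r2":      [0.81, 0.45, 0.62, 0.30],   # effect size
})

fig = px.scatter(
    df, x="n", y="r2",
    size="n", color="species",   # bubble area scales with sample size
    hover_name="study",          # study label shown at the top of the tooltip
    labels={"n": "sample size", "r2": "R² (effect size)"},
)
fig.show()
```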
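Since Plotly has no built-in forest plot, one way to assemble it from scatter traces, as mentioned above, is one horizontal line per study spanning the confidence interval, with a marker at the estimate. This is a sketch of the general idea, not our actual code, and the numbers are invented.

```python
import plotly.graph_objects as go

# Invented per-study estimates with 95% confidence intervals.
studies  = ["Study A", "Study B", "Study C", "Overall"]
estimate = [0.55, 0.70, 0.40, 0.58]
ci_low   = [0.35, 0.55, 0.10, 0.48]
ci_high  = [0.75, 0.85, 0.70, 0.68]

fig = go.Figure()
for s, est, lo, hi in zip(studies, estimate, ci_low, ci_high):
    # One trace per study: the line spans the CI, the middle marker is the estimate.
    fig.add_trace(go.Scatter(
        x=[lo, est, hi], y=[s, s, s],
        mode="lines+markers",
        hovertext=f"{s}: {est} [{lo}, {hi}]",
        showlegend=False,
    ))
fig.update_layout(xaxis_title="Effect size")
fig.show()
```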
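And the pairwise comparisons can be summarized with go.Heatmap; here is a sketch with an invented z-score matrix between hypothetical MRI measures, using a diverging colour scale centred at zero.

```python
import plotly.graph_objects as go

# Invented pairwise z-scores between three hypothetical MRI measures.
measures = ["measure 1", "measure 2", "measure 3"]
z = [[ 0.0,  2.1, -1.3],
     [-2.1,  0.0,  0.8],
     [ 1.3, -0.8,  0.0]]

fig = go.Figure(go.Heatmap(
    z=z, x=measures, y=measures,
    colorscale="RdBu", zmid=0.0,   # centre the diverging scale on zero
))
fig.update_layout(title="Pairwise comparisons (z-scores, illustrative)")
fig.show()
```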
So how does all this look in the wild? At first it seems like a normal paper: you have the abstract, the text. The difference comes when you reach a figure, because the figure is interactive, as we have seen in this presentation, and then you can expand the code underneath it. You can look at the code that was used to generate the figure, and you can even change the code and run it again, and the figure will be updated.

So where can you publish these ERAs? In terms of journals, currently eLife and the upcoming Aperture (from the Organization for Human Brain Mapping) are publishing them: eLife in the life sciences, Aperture in neuroscience. There are also dedicated platforms such as Stencila, Code Ocean, and the non-profit NeuroLibre; Stencila, in particular, is the one that provides the infrastructure for eLife. And finally, you can simply share the notebooks on GitHub as a companion to a standard journal paper. For a more detailed review, Konkol and colleagues went through the available ways to share these objects.

I want to conclude this presentation by thanking all my colleagues, and thank you all for your attention. If you are curious about this and you want to get in touch, this is my email, and you can check my blog, where I posted about the interactive treemaps. Thank you.