Hi, I'm Charles, and this is my first week as a postdoc with Gavin Stewart at the Evidence Synthesis Lab, where we'll be undertaking a network meta-analysis for a Cochrane study. Now, network meta-analysis tools are very much in development, and there is no one place that I can go to find all the tools and functions that I need to produce a network meta-analysis according to Cochrane's protocols. Seeing as I'm going to figure this out, I thought I might as well do it in an open science way.

Now, I'm not trying to make something definitive. Instead, I propose a computational waystation: somewhere in between an individual trying to figure it out and a definitive computational protocol that someone can follow. This is a meeting place for other people who are practicing network meta-analysis, people working to Cochrane, Campbell, and other protocols, somewhere we can aggregate tool chains and resources for performing network meta-analysis according to different protocols. So, to this end, I don't propose a package or a manuscript, but instead a collection of research items: a site with reproducible tool chains, an open source code repository, various contribution pathways for people with different levels of computational interest, and a discussion forum.

Why do this? Well, there's no one place that I can go to, and there are gaps in the tool chain for performing a network meta-analysis. I've started off with multinma, which we heard about yesterday. As far as I know, there's no way to produce a contribution matrix, which is required by Cochrane's reporting protocols for network meta-analysis, from multinma. There are other packages that do produce the contribution matrix, but I suspect they will not talk so easily to each other. I doubt I can just take the model that's been produced by multinma and feed it into a function from a different package; they'll have different underlying data structures.
So, to this end, tool chains and vignettes where we string together multiple tools to produce an analysis according to the protocol seem useful, especially for myself, to get feedback and help from the community. Not only that: yes, we want to follow Cochrane's protocol, but take sensitivity analysis, for example. What's recommended in the Handbook, or in Borenstein, is a leave-one-out analysis. This means we take our meta-analysis, remove one study, run it again, and see whether the recommendation is the same. But, of course, this begs the question: what if we left a different study out? Or what if we left two studies out?

This is where sharing our computational tool chains may be of use. Here I wrote a little script to take the network I presented in the first slide, which is five treatments from seven studies for Parkinson's disease, from the multinma package. I took all subsets of studies of size three and greater, ran a network meta-analysis on each, produced the recommended rankings for the treatments, and summarized them in this visualization, where the bar plots are grouped by the number of studies. This is, I think, a fairly interpretable way of summarizing a sensitivity analysis, but it has a major limitation. With seven studies, this produced 99 different subset meta-analyses to run through, and it took several minutes on my computer. So, clearly, this will become computationally intractable as the number of studies increases. The next step, for dealing with much larger sets of meta-analysis studies, would be a threshold analysis, but that is far more challenging to implement, and not everyone may do it. So, this tool chain may be of use to someone who's dealing with a meta-analysis of a small number of studies and wants a readily interpretable sensitivity analysis that's a little bit more robust than leave-one-out.
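The subset count above is easy to check: for seven studies, the subsets of size three or greater number C(7,3) + C(7,4) + ... + C(7,7) = 99. A minimal sketch of the enumeration (in Python for illustration; the analysis itself was run in R with multinma, and the study names here are placeholders for the real trial identifiers):

```python
from itertools import combinations
from math import comb

# Seven placeholder studies, standing in for the Parkinson's network.
studies = [f"study_{i}" for i in range(1, 8)]

# Every subset of size three or greater; each one would be re-fit
# as its own network meta-analysis and its rankings recorded.
subsets = [s for k in range(3, len(studies) + 1)
           for s in combinations(studies, k)]
print(len(subsets))  # → 99 subset analyses for 7 studies

# The count grows roughly as 2^n, which is why this brute-force
# approach becomes intractable as the evidence base grows.
for n in (7, 10, 15, 20):
    total = sum(comb(n, k) for k in range(3, n + 1))
    print(n, total)
```

Each of those 99 subsets then gets its own model fit, which is where the minutes of runtime come from; the enumeration itself is instant.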
So, how might others contribute to the gaps in, or extensions of, the tool chains for reporting network meta-analysis according to different protocols? Well, consider the research group that I've joined. We have a principal investigator who's a psychologist; I imagine she's not at all interested in interacting with GitHub. So, in that case, I provide a website with an email address where she can simply send through an email if she has ideas about how to make the visualizations more informative. Then we've got a lead statistician, Gav, who's supervising me. Now, he's very busy with many projects and may not have time to contribute code, but he may well have the time to make notes in the public discussion forum via GitHub issues. And as the maintainer, I can aggregate these: I can take emails and convert them into issues for discussion amongst the community.

For my computational collaborators: say, for example, Matt Granger is interested in network meta-analysis in ecology, and wants to develop a vignette for network meta-analysis under ecological protocols. This is a workflow that comes from the usethis package, which I very warmly encourage you to try, practice, and learn with me. The idea is to provide a small set of functions that will enable Matt to do just that: produce a vignette and contribute it to the package without a great deal of fiddling about with GitHub.

This talk is reproducible. There's also an associated manuscript, and reproducible source for that manuscript. Thank you very much for listening, and do get in touch if you have any questions, comments, or suggestions. Thanks.