Hello and welcome to ESMARConf 2022 and this session on network meta-analysis. As always, this session is being live streamed to YouTube, and the individual presentations have been pre-recorded and published there as well. Subtitles have been verified and can be auto-translated for those individual talks, and automatic subtitles will be available shortly for this live stream. If you have any questions for our presenters, you can ask them via the presenters' individual tweets from the ESHackathon Twitter account; see them on our feed. Presenters might have time after their talks to answer some of those questions, or at the end of the session if time allows, and we'll endeavour to answer all the questions soon after the event. We'd like to draw your attention to our code of conduct, which is available on the ESMARConf website at esmarconf.github.io. So our first speaker is Silvia Metelli, who is a postdoctoral research fellow at the University of Paris. Silvia, over to you.

Hello everyone. In this talk I will present NMAstudio, which is a new online and interactive web application to produce and visualize results from network meta-analysis. This is all joint work with Anna Chaimani, also from the University of Paris. The main reason behind NMAstudio is that we wanted to provide a tool that could not only enhance but also facilitate the interpretation of the main findings from a network meta-analysis. We know that network meta-analysis, by integrating direct evidence with what is called indirect evidence, can simultaneously compare many treatments, and typically this produces a very large number of results and outputs. So visualization can be challenging, especially when we have large networks with many treatments. Also, more recently it seems that we are moving towards a context of living evidence, with data collected weekly or monthly, so we need new software to keep up with this fast production of evidence.
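The combination of direct with indirect evidence mentioned here can be sketched numerically. Below is a minimal illustration of the classic common-comparator calculation on the log scale; the function name and all numbers are invented for illustration and are not taken from NMAstudio.

```python
import math

def indirect_estimate(d_ab, se_ab, d_ac, se_ac):
    """Indirect estimate of C versus B via common comparator A (log scale).

    d_ab: direct effect of B vs A; d_ac: direct effect of C vs A.
    The indirect C-vs-B effect is the difference, and its variance is
    the sum of the two variances (the estimates are independent)."""
    d_bc = d_ac - d_ab
    se_bc = math.sqrt(se_ab ** 2 + se_ac ** 2)
    return d_bc, se_bc

# Made-up log risk ratios from two direct comparisons
d, se = indirect_estimate(0.20, 0.10, 0.50, 0.12)
print(round(d, 2), round(se, 3))  # 0.3 0.156
```

In a full network meta-analysis the model pools direct and indirect evidence for every comparison simultaneously, but this two-step difference is the basic building block.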
However, existing software in the field so far is not fully interactive in the same way we hope NMAstudio can be. So, a bit more specifically, NMAstudio is a Python application that connects to R to produce the NMA results using the netmeta library; those results are then imported back into Python, where the app is built. In NMAstudio, users upload their data and interact directly with a customizable network plot by clicking on different nodes (the treatments) or different edges (the comparisons), and based on their selection different outputs are displayed. NMAstudio follows all of the key steps which are typically performed in any published NMA, starting from the very first and fundamental assumption of transitivity. Once this is met, typically a large part is dedicated to the reporting of a summary of the treatment effects in the form of forest plots or league tables. Then, importantly, we need to assess whether consistency holds in the network, that is, whether there is statistical agreement between direct and indirect evidence, and also to assess for the presence of small-study effects, which is a phenomenon related to the problem of publication bias. Finally, a ranking of treatments is usually provided. NMAstudio assists the user in each of these steps, and we will see how with the demonstration. Our data come from a recently updated systematic review on chronic plaque psoriasis, which comprises 158 RCTs and compares 20 different drugs. There are two primary outcomes that will be analyzed, one for efficacy and one for safety. In this case both outcomes are binary and risk ratios will be used. So we can just have a look at the app. Okay, so this is the main home page of NMAstudio, and the first thing you will need to do is upload your own data from here, selecting your data file, for example, and the data format or type of outcome.
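As a side note on the binary outcomes mentioned here, the risk ratio for a two-arm study, and the standard error of its log on which the pooling works, can be computed as follows. This is a generic sketch with made-up counts, not code from the app:

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Risk ratio of treatment vs control for a binary outcome,
    plus the standard error of its log (delta-method formula)."""
    rr = (events_t / n_t) / (events_c / n_c)
    se_log = math.sqrt(1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c)
    return rr, se_log

# Hypothetical trial: 30/100 events vs 20/100 events
rr, se = risk_ratio(30, 100, 20, 100)
print(round(rr, 2))  # 1.5
```

Meta-analysis then combines the study-level log risk ratios weighted by the inverse of their variances.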
Here I've already uploaded the data (this example is actually permanently loaded into the app), but once the data processing is completed your network plot will appear here on the left. This is a completely interactive object, so you can move it around, zoom in or out, or drag every item. There are also additional settings here, starting from the layout: you can change the layout from this list, change edge sizes or node sizes, and there are a few options for coloring the nodes, for example by risk of bias if you have that in your data, or by class of treatment, and also just choosing your desired color, and the same for the edges. Then you can download the plot from here or look at an expanded version from here. Still in terms of network visualization, we added an option to look at the evolution of the network over time with the slider here on the right. For example, if we click on the very first date available for these data, in 1963, we can have a look at the first trial, and then we move on over time and can see how evidence is added; you will also see that the data table is filtered in real time accordingly. Also in terms of data filtering, you can filter your data set just by clicking on a few comparisons (edges) or a few nodes in the network, and the data table will again be filtered accordingly. You can also have a look at the expanded table and export the data in CSV. Okay, so now we are ready to start all of the key steps of the NMA that I was referring to before, starting from transitivity. To check transitivity we need to check whether the distributions of the potential effect modifiers that you have in your data set are similar across comparisons. You can choose your effect modifier here; for example, we have a look at age, which is typically reported, and the corresponding plot will appear here.
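The transitivity check described here boils down to comparing the distribution of a potential effect modifier across the comparisons in the network. A rough sketch of that idea with toy data (not the psoriasis dataset):

```python
# Hypothetical study-level records with a potential effect modifier (mean age)
studies = [
    {"comparison": "drugA vs placebo", "mean_age": 45},
    {"comparison": "drugA vs placebo", "mean_age": 51},
    {"comparison": "drugB vs placebo", "mean_age": 48},
    {"comparison": "drugB vs placebo", "mean_age": 72},
]

# Summarise the modifier per comparison; similar distributions across
# comparisons support the transitivity assumption, large gaps flag a concern
by_comparison = {}
for s in studies:
    by_comparison.setdefault(s["comparison"], []).append(s["mean_age"])

summary = {c: sum(v) / len(v) for c, v in by_comparison.items()}
print(summary)  # {'drugA vs placebo': 48.0, 'drugB vs placebo': 60.0}
```

In practice this judgement is made visually, exactly as the app does with box plots of the modifier per comparison, rather than from summary numbers alone.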
Also, these axis labels, and this is true for all the subsequent plots, are editable, so for example here you can just choose your label, and if you want to highlight a few comparisons of interest you just have to click on the corresponding edge in the plot. Moving on, we have the summary of the effects, starting from forest plots. Here we have three options: an NMA forest plot, the typical pairwise forest plot, and a bidimensional forest plot in case, of course, two outcomes are given. Starting from the first one, we will need to choose a reference treatment, and we do so just by clicking on a node in the network. We can have a look at the plot; you will also have to choose the correct direction of the outcome to interpret the plot correctly, and we can have a look at the plot for the second outcome, in which case the network plot will be automatically updated for outcome two. Then, for the pairwise forest plot, in this case we will have to choose a comparison; you can resize the objects a little, and again you can have a look at the plot here for outcome one or two, and you can save all the plots from here. Moving on again, we have the bidimensional forest plot, where you will have the forest plots for outcome one on the x-axis and for outcome two on the y-axis. Again we just choose a reference treatment and the forest plot will appear here; in this case you will have to pick the correct region of the plot to interpret the results correctly, based on the direction of the outcomes, and you can also click on the legend to sequentially remove a few treatments if that's needed. Then we have the part dedicated to the league table. League tables report all of the possible treatment effects between any possible pair of treatments in the network, so they tend, of course, to be very large, and here you can see that you can scroll the table. We also allow for two different options for coloring the table.
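A league table of all pairwise effects can be derived from the estimates against a single reference because relative effects are additive on the log scale. A small sketch with invented log risk ratios (treatment names are placeholders, not the psoriasis drugs):

```python
import math

# Hypothetical log risk ratios of each treatment versus a common reference
d_vs_ref = {"placebo": 0.0, "drugA": -0.4, "drugB": -0.7, "drugC": -0.1}

treatments = list(d_vs_ref)

# Effect of the column treatment versus the row treatment, back on the
# risk-ratio scale: exp(d_col - d_row)
league = {
    (row, col): round(math.exp(d_vs_ref[col] - d_vs_ref[row]), 2)
    for row in treatments
    for col in treatments
}

print(league[("placebo", "drugB")])  # 0.5  (drugB vs placebo)
```

This is why a league table grows quadratically with the number of treatments, and why filtering to a subset of nodes, as the app allows, is so useful.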
The first and default option is coloring by risk of bias in the direct comparisons, where you will have the average risk of bias; the second is coloring by the ratings from a CINeMA assessment. CINeMA is a software that reports an overall rating of the confidence in the evidence for each comparison, graded as very low, low, moderate, or high confidence. What you have to do is upload the results that you get from CINeMA, for both outcomes of course, and then have a look at the corresponding coloring: here on the lower part of the triangle we have results for outcome one, and on the upper part for outcome two. You can also have a look at the expanded table here and export it, maintaining coloring and formatting. But what is important here is that, as we said, league tables tend to be very large, so you might want to look at just a subset of nodes which are of more interest; you can do so by clicking sequentially on a few nodes, and you will see that the filtered table appears here. Then, moving on, we have other checks, starting from consistency. In NMAstudio we allow for two different options: a global test for inconsistency, which is the design-by-treatment interaction model, and a local test for inconsistency, which is the node-splitting approach. You will see that you have all of your results here in the tables for the node splitting, for all of the different direct comparisons; you can scroll down, and you can see that suspicious values are flagged in red or yellow. Here also you can filter the table by picking a few comparisons of interest. Then we have to assess for the presence of small-study effects, and we do so using the typical funnel plots, in this case comparison-adjusted funnel plots. Here also we have to pick a reference treatment, so again we just do so by clicking on the network, and the corresponding plot will appear here for outcome one or outcome two. Then, finally, we said we have the ranking of the treatments. In NMAstudio we allow for two
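The node-splitting check described here compares the direct and the indirect estimate for the same comparison, and conceptually it reduces to a z-test on their difference. A sketch with made-up numbers (the function is illustrative, not netmeta's implementation):

```python
import math

def node_split_test(d_direct, se_direct, d_indirect, se_indirect):
    """z-test for disagreement between direct and indirect estimates
    of the same comparison (all on the log scale)."""
    diff = d_direct - d_indirect
    z = diff / math.sqrt(se_direct ** 2 + se_indirect ** 2)
    # Two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical split: direct 0.60 (SE 0.15) vs indirect 0.10 (SE 0.20)
z, p = node_split_test(0.60, 0.15, 0.10, 0.20)
print(round(z, 2), p < 0.05)  # 2.0 True
```

A small p-value flags a comparison where direct and indirect evidence disagree, which is exactly what the red and yellow highlighting in the table conveys.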
different plots: a heat map reporting P-scores for both outcomes or for one outcome only, and a scatter plot of the P-scores. P-scores are the frequentist analogue of the more common SUCRA values, and what is important here is that you have to choose the correct direction of the outcome. For example, outcome two for us is a harm, so we change this here, and we will have our heat map with treatments sorted from best to worst. Then we have our scatter plot reporting P-scores for outcome one on the x-axis and for outcome two on the y-axis; again you choose the correct direction, and you will have, of course, on the upper right part of the plot the treatments that appear to be the best on both outcomes. But what is important here is that you can use the network plot to assist in the interpretation of these results. For example, if we just look in some more detail, we can see that the best treatment is actually a treatment which is, okay, at low risk of bias, but also not very well connected: there is only one trial assessing this treatment. So it is always important to use the network to assist in the interpretation of the findings. This is pretty much it for the functionalities of the app; we also have a documentation page and a news page. So, just to conclude, we have seen how NMAstudio is a fully interactive and flexible application, and that it can simplify the full NMA process while also assisting in the interpretation of findings. However, like all software, it comes with many benefits but also with many risks, so we always highly recommend using NMAstudio following advice from an experienced statistician. There are many features that we would like to add to NMAstudio; just to name a few here, we would like to add more options to customize the network plot, and also a more robust system of alerts or warnings, for example printing the errors from the R console directly. For sure we will add an option for performing Bayesian NMA, and we are also looking into ways
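The P-score mentioned above can be thought of as the average probability that a treatment beats each of its competitors. A toy illustration with invented probabilities (not the psoriasis results; in netmeta these probabilities come from the estimated effects and their standard errors):

```python
# Hypothetical one-sided probabilities p_beats[(i, j)] = P(treatment i beats j)
p_beats = {
    ("A", "B"): 0.9, ("A", "C"): 0.8,
    ("B", "A"): 0.1, ("B", "C"): 0.4,
    ("C", "A"): 0.2, ("C", "B"): 0.6,
}
treatments = ["A", "B", "C"]

def p_score(t):
    """P-score: average probability that t beats each competitor."""
    others = [p_beats[(t, o)] for o in treatments if o != t]
    return sum(others) / len(others)

print({t: round(p_score(t), 2) for t in treatments})
# {'A': 0.85, 'B': 0.25, 'C': 0.4}
```

A P-score of 1 would mean a treatment is certainly the best and 0 certainly the worst, which is the same interpretation as the SUCRA.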
to provide each user with a permanent link to their project. NMAstudio is also accompanied by a Python package which is currently under development but will be available soon. Of course this is not an exhaustive list, so we can add more, and if you have any suggestions or ideas they will be more than welcome; you can get in touch with me at this email address here if you want to discuss more. Thank you very much everyone.

Wow, such useful software for NMA users. Thank you Silvia, and we look forward to seeing you again for Q&A at the end. If you have any questions for Silvia or any of the other presenters, please do post them on YouTube or on the tweet thread under her name. And I'm delighted to introduce our next speaker, who is Virginia Chiocchia, a doctoral researcher at the University of Bern. Virginia, over to you.

I hope you can all see my screen, and can you hear me as well? We can. Okay, so thanks for joining. My name is Virginia, and today I will present the ROB-MEN app. ROB-MEN is the first tool to assess the risk of bias due to missing evidence in network meta-analysis, and the tool and underpinning framework have been published in BMC Medicine. In a few words, the framework evaluates the risk of bias in a network meta-analysis first by evaluating the risk of bias in each pairwise comparison that can be made between the interventions in the network, considering both within-study and across-study risk of bias assessments. Then the risk of bias in each of the pairwise comparisons is combined with the contribution that each direct comparison makes to the network estimate, together with any presence of small-study effects and any bias in the unobserved comparisons, so that we reach a final level of risk of bias in the network estimate, and we give a level of either low risk, some concerns, or high risk of bias. For time reasons I'm not going to go into the details of the framework, but I want to
show you what we have developed as an app that semi-automates some of the assessments. This is because, due to the nature of network meta-analysis, such assessments are obviously more labour intensive than for a standard pairwise meta-analysis, so this is the rationale for developing an app that semi-automates some of it. The app is already available, and first I'm going to show you the data for an example network. The data that I'm going to use to demonstrate the app is a network of antidepressants from head-to-head studies, and the network graph is shown here. The first thing the user is prompted to do is obviously load the data; it has to be loaded as a CSV file, and then the user can view the data here. It's also important, as you can see, that there are some studies that do not have the outcome of interest: they have a blank cell here, not missing values. This is important because these studies can be informative about selective outcome reporting bias, which will be assessed in the tool. The next step is to move to the data analysis tab, where the user has to select the parameters for the analyses that are run by the app and required by the tool to do some of the assessments, and then you can start the analysis by pressing the button. Depending on the size of the network, the analysis can take a few minutes, and for this reason I've already run the analysis, and this is essentially what you will get once it is completed. First of all you will get a data summary with the network graph and other characteristics of the network, interventions, and comparisons, as well as the outputs from both frequentist and Bayesian network meta-analysis, which are run with the netmeta and BUGSnet packages respectively, and the output for the Bayesian network meta-regression. But I will move straight to the main outputs of the tool and of the app, which are the tables where we record the assessments.
First we have the pairwise comparison tables. Here all the possible pairwise comparisons are automatically grouped according to whether they have data for the outcome of interest, as shown here in group A; only data for other outcomes, as we can see here in group B, where we have two; or whether the comparisons are totally unobserved, meaning there is no data, no study identified, for those comparisons. Another thing that the app does automatically is calculate the total sample size and the total number of studies in each comparison, and this is important for the first assessment, the within-study assessment of bias, which is commonly known as selective outcome reporting bias. Specifically, this assessment is only done for those comparisons for which we have found extra studies that do not report the outcome of interest but only report other outcomes, because we need to assess whether the reason these studies did not report the outcome is related to the results found in those studies. If the outcome of interest was not reported because we think the authors didn't like the magnitude or the direction of the results, then that is informative selective outcome reporting bias. But the user also needs to take into account the effect that this would have on the total synthesized result, and that's why the sample size is important here. For example, here we see that in comparison six there are two extra studies, and we have judged it as suspected bias favouring duloxetine, because we think that the two extra studies did not report the outcome for the reason I just mentioned, the magnitude or direction of the results, and we think that's enough to affect the synthesized result. But you can also see that in comparison nine there are five extra studies where either we thought that the reason they did not report the outcome was not related to the magnitude or
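The automatic grouping described above, into comparisons with outcome data (group A), only other outcomes (group B), and unobserved comparisons, can be sketched as follows. The records and field names are hypothetical, mimicking the blank-cell convention in the CSV:

```python
# Hypothetical study records; outcome=None mimics a blank cell, i.e. a study
# that was identified but did not report the outcome of interest
studies = [
    {"comparison": ("drugA", "drugB"), "outcome": 0.4},
    {"comparison": ("drugA", "drugB"), "outcome": None},
    {"comparison": ("drugA", "drugC"), "outcome": None},
]
all_pairs = [("drugA", "drugB"), ("drugA", "drugC"), ("drugB", "drugC")]

def classify(pair):
    reported = [s for s in studies
                if s["comparison"] == pair and s["outcome"] is not None]
    extra = [s for s in studies
             if s["comparison"] == pair and s["outcome"] is None]
    if reported:
        return "A"          # has data for the outcome of interest
    if extra:
        return "B"          # only data for other outcomes
    return "unobserved"     # no study identified at all

print({p: classify(p) for p in all_pairs})
# {('drugA', 'drugB'): 'A', ('drugA', 'drugC'): 'B', ('drugB', 'drugC'): 'unobserved'}
```

Group B and the extra studies in group A are the ones that feed the selective outcome reporting assessment, since their missing outcomes may be informative.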
the direction of the results, or the extra sample size was not enough to affect the already quite large sample size, and that's why we judged it as no bias detected. So, as I said, this assessment is only done for the comparisons for which we found extra studies, including those in group B, and obviously it cannot be done for the unobserved comparisons. But the next assessment, the across-study assessment of bias, which is also known as publication bias, can be done for all of the pairwise comparisons. This assessment is done primarily through qualitative considerations, such as whether the reviewers were able to get data from grey literature, or whether there is well-known publication bias for a comparison or in that field. Additionally, if there are comparisons with at least 10 studies, then some quantitative considerations can also be taken into account, such as tests for small-study effects and funnel plots, which are also calculated by the app, as you can see here in the data analysis tab; but as I said, these are only in addition to the qualitative considerations. Once again, the user then has to select no bias or suspected bias, with the relevant direction, for each of the comparisons. Then, to arrive at the overall bias judgement, the user can simply press this button here, which applies an algorithm. The algorithm is actually really simple: it just checks whether there is a suspected bias in either of the previous assessments, in which case it is suspected bias also in the overall judgement. Once these levels for the overall bias are done, we can move on to the main output of interest of the tool, which is the ROB-MEN table, where we record the risk of bias assessment for each of the network estimates. Once again, the network estimates are automatically grouped into mixed, only direct, or only indirect, depending again on the availability of the data. The first thing here is the evaluation of the contribution coming from the evidence: specifically, the app
calculates the contribution matrix automatically, which you can see here and can also download, and which shows how much each direct comparison contributes to each of the network estimates. This contribution is then used in the ROB-MEN table, together with the previous overall bias assessments, to see how the contribution of evidence with suspected bias splits, in the sense of which direction it is going in; again this is done automatically, as you see in the first two columns. What the user has to do is evaluate this contribution and this direction, and here the user has to make some sort of subjective decision. For example, here we decided that a difference of at least 15 percentage points would be enough to count as a substantial contribution. To give you an example, you can see here that if there is at least 15 percent more going in a specific direction, then we go for substantial contribution from bias favouring a specific treatment; if not, then it's no substantial contribution from bias. But there are also cases, for example, let me find one, yes, here, duloxetine versus fluvoxamine, sorry, versus milnacipran, where it's 30 and 36, so the difference is less than 15, and that's why we selected substantial contribution from bias, balanced. The next assessment is the bias assessment for indirect evidence. This is because the unobserved comparisons obviously have no weight in the previous part, the contribution from the evidence, but we had judged the unobserved comparisons for risk of bias in the previous stage, in the previous table, and this can lead to bias if the reason that studies are missing is related to the results found in such studies. So what the app does here is essentially just take the previous assessment for the unobserved comparisons, the overall judgement that we had in the previous table, and copy it into this column here. As you can see, it is greyed out for the direct or mixed estimates; this is because we
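The 15-percentage-point rule used in this demonstration is a subjective choice by the presenters, not a fixed rule of the tool, but it might be expressed along these lines. The helper function and treatment names are illustrative only:

```python
def contribution_judgement(contrib_by_direction, threshold=15):
    """Judge the contribution of evidence with suspected bias.

    contrib_by_direction: percentage contribution from comparisons with
    suspected bias favouring each side, e.g. {"duloxetine": 36, "milnacipran": 30}.
    Sketches the 15-percentage-point rule described in the talk."""
    (t1, c1), (t2, c2) = sorted(contrib_by_direction.items(),
                                key=lambda kv: -kv[1])
    if c1 - c2 >= threshold:
        return f"substantial contribution from bias favouring {t1}"
    if c1 >= threshold:
        # Sizeable biased contribution, but no direction dominates
        return "substantial contribution from bias, balanced"
    return "no substantial contribution from bias"

print(contribution_judgement({"duloxetine": 36, "milnacipran": 30}))
# substantial contribution from bias, balanced
```

This matches the example in the talk: 36 versus 30 differs by less than 15 points, so the biased contribution is judged substantial but balanced.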
have already counted the direct comparisons in the contribution assessment, so there is no need to do that for the mixed estimates; we only need to consider the unobserved comparisons for the indirect estimates, and that's what the app does: it just copies the previous assessment into this column. The final assessment is evaluating whether there is any evidence of small-study effects, and again the app automatically reports the unadjusted estimates from the Bayesian network meta-analysis as well as the adjusted estimates from the network meta-regression, which uses the smallest observed variance as a covariate. What the user has to do is check the point estimates as well as the overlap of the credible intervals to see whether there is any evidence of small-study effects for the network estimates. I can tell you, without going through all 153 estimates, that there was no evidence of small-study effects, because there was a good overlap between the credible intervals and the point estimates were not that different, so we can just set them all as no evidence by pressing this button here. Finally, we can then find our overall risk of bias level, and we can easily do that by pressing this button, which applies the algorithm that we propose in our paper. I don't have time to go through the algorithm, but what I want to say is that if the user does not agree with our algorithm's rules, or wants to use stricter or less strict rules, then you can also change this manually; as you can see here, the app allows you to change it and will just show you a yellow warning in that case. These manual changes can also be made in the previous table. Finally, both tables can be downloaded, which can be useful for reports, for example, as you see here. We are at the end of our demonstration, so I will just show here the link for the app, which as I said is already available to use, and the paper, and I also
want to mention that we are drafting a paper which is going to serve as a manual for the app, so it's going to be a more technical paper. There is also going to be a live webinar organized by the Cochrane training team on the 5th of May; it's going to be longer than 15 minutes, obviously, so there will be more detail about the assessments, and it will also show all the functionality of the app. I also want to mention that ROB-MEN is now integrated in the CINeMA framework and web application, specifically in the reporting bias domain, where the user is prompted to upload the ROB-MEN table. I also want to thank the two people that really helped me with the development and improvement of the app, Theodoros Papakonstantinou and Alex, and finally I'm happy to take any questions at the end of the session; otherwise you can contact me on Twitter or by email for questions or feedback, which is always appreciated. Thank you very much.

Thank you very much Virginia, for your great timekeeping while presenting live, but especially for showing us how to use this tool to critically appraise the studies across this evidence base. So our third speaker in this session is Clareece Nevill, who is a research associate at the University of Leicester. Clareece, over to you.

Cheers. Great, so hi everybody, I'm Clareece Nevill and I work as part of the Complex Reviews Support Unit based at the University of Leicester in the UK, and today I'll be sharing my work around developing a novel multifaceted graphical visualisation for treatment ranking in an interactive network meta-analysis web application.
So, treatment ranking within network meta-analysis is a really powerful tool, but the results can often be misinterpreted or presented inappropriately. As a consequence, we as a team wanted to improve the ranking outputs within our NMA web app, MetaInsight. We therefore had two aims: firstly, to ascertain the current methods and visualisations for treatment ranking within NMA, and secondly, to develop a novel graphical visualisation for MetaInsight. To make sure we're all starting in the same place, I'm going to quickly introduce NMAs. Firstly, the standard method for gathering evidence systematically, whilst also appraising and synthesising the evidence, is called a systematic review. To then obtain a quantitative pooled estimate of the outcome of interest in your review, one can run a meta-analysis to quantitatively compare healthcare interventions or treatments. The next step is to simultaneously compare multiple treatments, which can be done with a network meta-analysis: this is done by forming a network of studies and treatments, and then the NMA uses both direct and indirect information to estimate relative treatment effects; it's through NMAs that treatments can then be ranked. Our first step was conducting a targeted review of the literature since January 2011, looking at papers that introduced, discussed, or compared ranking methodologies or visualisations. This gave us in total 29 academic papers and articles, and within those 29 there were two articles that were particularly useful. As part of their paper, Veroniki and colleagues produced a wonderfully concise summary of common ranking statistics and their primary characteristics, and the paper by Kossmeier and colleagues is a brilliantly detailed record of graphical visualisations currently used within the field of evidence synthesis. So, from these articles, what did we learn about treatment ranking statistics and which ones are important? I'm going to bring forward four key points I found.
Firstly, most ranking statistics are based on Bayesian methods, where multiple simulations are run, thus giving rank probabilities based on the rank distribution outputted from these simulations. Secondly, a popular option that appeared was the surface under the cumulative ranking curve, the SUCRA, and this does what it says on the tin: it gives a single value but incorporates the entire ranking distribution, though naturally it still has limitations. There does exist a frequentist alternative, the P-score; however, some authors have indicated that its interpretation can be a bit more challenging. Whilst simple methods may be easy to understand, such as the probability of being best, they are often more unstable and don't encompass the entire analysis. The opposite is also true, i.e. whilst more complex methods are more rigorous, they can be harder to understand. And finally, interpreting ranking in isolation is not advised: there are many things to consider alongside ranking results, and not doing so can lead to exaggeration of results or misunderstanding. The literature also introduced us to lots of visualisations; there are too many to mention in this presentation, so I'll focus on those that I felt worked well and fitted within the remit of a single outcome, as this is what MetaInsight currently works with. So, first we have rankograms, which are a popular choice with lots of variations; they essentially plot the ranking probabilities for all treatment-rank combinations. They can present the results on the same or separate axes, can use lines or bars, and can present absolute or cumulative probabilities. Cumulative rankograms are also known as SUCRA plots, due to their direct relation to the SUCRA statistic by definition. Generally, rankograms were found to provide informative, balanced summaries of the distribution of ranks; however, some people have stated that comparing treatments can become difficult. An alternative that is less commonly used is a radial plot.
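The relation between cumulative rankograms and the SUCRA is direct: the SUCRA is the average of the cumulative rank probabilities over the first n-1 ranks. A toy calculation with invented rank probabilities (treatment names are placeholders):

```python
# Hypothetical rank probabilities: rank_probs[t][k] = P(treatment t has rank k+1)
rank_probs = {
    "drugA": [0.6, 0.3, 0.1],
    "drugB": [0.3, 0.5, 0.2],
    "placebo": [0.1, 0.2, 0.7],
}

def sucra(probs):
    """Surface under the cumulative ranking curve: the mean of the
    cumulative rank probabilities over ranks 1..n-1."""
    n = len(probs)
    cumulative, total = 0.0, 0.0
    for p in probs[: n - 1]:
        cumulative += p
        total += cumulative
    return total / (n - 1)

print({t: round(sucra(p), 2) for t, p in rank_probs.items()})
# {'drugA': 0.75, 'drugB': 0.55, 'placebo': 0.2}
```

This is why the curve nearest the top left of a cumulative rankogram corresponds to the highest SUCRA: it accumulates probability at the good ranks earliest.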
Generally, radial plots have an outcome measure, for example SUCRA, marked radially around a circle. Some variations use a donut shape and use colour-coded sections to indicate the value. The benefits of this layout include the ability to present concentric circles to consider multiple outcomes or subgroups, and to include other information nested in the middle, such as a network plot. However, a limitation of working on the radial scale is that humans generally struggle to discriminate between radial lengths. Next, the ability to display multiple elements of an analysis in the same space, a multifaceted display, allows researchers to present the larger picture, and this is great for treatment ranking as it allows users to form robust and sensible inferences by drawing conclusions from a combination of sources. However, one does need to be careful not to overload the reader with large amounts of information. And finally, with advances in technology, a natural movement in the field of data is incorporating interactivity. Interactivity allows users to drill down further into the data or analysis as they wish, and can even involve users choosing their own settings; for example, one interactive visualisation we found allows users to specify the relative importance of outcomes regarding treatment choices. So that was just a snippet of the literature that we found, and it provided great direction and knowledge when developing our own graphical tools for treatment ranking within MetaInsight. A few things to mention regarding how we developed these tools. Firstly, we decided to start with just Bayesian analyses due to their popularity in the treatment ranking field. Secondly, designing the tool was very much an iterative process of creating designs, sharing them, getting feedback, and designing again, starting with pencil and paper and moving to mockups in different software. And the final designs were created in R, which is what MetaInsight runs on, with the ggplot2 package.
Now, before sharing what we came up with, let me quickly introduce the exemplar dataset for this presentation. This is the in-built example dataset in MetaInsight for continuous outcomes, comparing pharmacological interventions for the treatment of obesity; the outcome of interest is BMI loss three months from baseline, and the NMA was on 24 studies including five treatments plus a placebo. So, the first plot we're presenting we've called the litmus rankogram. The base is a rankogram, which we chose due to its popularity and easy-to-understand nature. To aid interpretation we decided to plot cumulative probabilities and have all curves on the same axes; with cumulative probabilities one can then say that the nearer the curve is to the top left corner, the better the treatment. It was reported that comparing curves can be difficult, so to further aid comparison we added a litmus strip of SUCRA values, which has two functions: firstly to act as a key through the colours, and secondly to give the SUCRA, which is easier to compare. So for this example one can see that the ranking analysis indicates that rimonabant performs well, as we can see the green colouring, high SUCRA value, and probability curve being near the top left. For the converse reasons, placebo performed the worst. It's harder to discern between the curves for metformin, orlistat and sibutramine, which may seem like a limitation; however, I believe it actually emphasises the point that they performed equally well. And next we have the radial SUCRA plot. We decided to develop a second plot for two reasons: firstly, as the number of treatments increases, a rankogram like before can easily become crowded; and secondly, we really liked the radial plot with the nested network plot that I showed earlier, and so we created this, at the price of losing some of the granularity that the litmus rankogram gives.
So for each treatment, its respective SUCRA value is plotted radially in descending order. We stuck with the SUCRA so as not to overload users with various statistics, and because the SUCRA considers the entire ranking distribution whilst outputting a single value. To keep things consistent between the plots, the same colour scale was used, with the colours of the nodes also indicating the SUCRA value. We are therefore aware that SUCRA is represented in two manners, which could be seen as wasted ink; however, we believe it further aids communication and impact. So we took the plot by cider et al and took it a step further by overlaying the network plot rather than nesting it. Therefore, the size of the nodes represents the number of participants in each treatment arm, and the thickness and presence of connecting lines indicate the number of trials that directly compared the respective treatments; we felt that overlaying the network plot meant less work for the user and more impact. The benefit of including the network plot in this way is easy to see if I unpack this example. So one can see that rimonabant performs the best, due to its high SUCRA and green colour; however, it's actually quite hard to see rimonabant due to its small node size from having few participants in the treatment arms. Furthermore, one can see that it isn't very well connected to the network. These immediate factors should tell the reader that the result for rimonabant should be interpreted with caution. On the other end of the scale, the large red node for placebo immediately shows that there is a good amount of evidence supporting that placebo did not perform well against the other treatments, and that we can be confident of that result. We noted that having the treatment network overlaid could become messy and hard to read for large numbers of treatments or connections. So in such cases we've created a simplified version of the plot, which goes back to the original nested design.
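The node sizes and edge widths described above come straight from the trial data. As an illustrative sketch (in Python rather than the R/ggplot2 used in MetaInsight, with made-up arm-level data):

```python
from collections import Counter

def network_summary(arms):
    """arms: (study, treatment, participants) tuples, one per trial arm.
    Node size = total participants per treatment; edge width = number of
    studies that directly compare each pair of treatments."""
    node_size, studies = Counter(), {}
    for study, trt, n in arms:
        node_size[trt] += n
        studies.setdefault(study, set()).add(trt)
    edge_width = Counter()
    for trts in studies.values():
        ts = sorted(trts)
        for i in range(len(ts)):
            for j in range(i + 1, len(ts)):
                edge_width[(ts[i], ts[j])] += 1
    return node_size, edge_width

arms = [(1, "Placebo", 100), (1, "Rimonabant", 30),
        (2, "Placebo", 80), (2, "Metformin", 75),
        (3, "Placebo", 60), (3, "Metformin", 65)]
nodes, edges = network_summary(arms)
# nodes["Placebo"] -> 240; edges[("Metformin", "Placebo")] -> 2
```

A small, weakly connected node (like rimonabant in the example) signals at a glance that its high ranking rests on limited direct evidence.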
However, we've kept the nodes coloured according to their SUCRA and sized according to the data. As I mentioned earlier, a key message that's present in the literature is that we should avoid presenting rankings in isolation. In this spirit, the final graphical ranking tool was designed to be a ranking panel. So here's the design: the central pane contains the ranking plots that I've presented, as these are the focus, the headline act. Then on the left we present the relative treatment effects. This will aid interpretation of the ranking and reduce potential exaggeration, as an inherent risk of ranking is accidentally putting undue emphasis on treatments performing differently purely because they have different rankings. Then the aim of the right pane is to give users the opportunity to explore and interrogate the data behind the ranking analyses. Currently this is just a network plot, but we have other plans that I'll mention later. Finally, I'm going to show a video illustrating how the ranking panel looks and performs within MetaInsight, our NMA app created in R with the Shiny package. Firstly, we run our Bayesian analysis and then click on tab 3c to access the ranking panel, and this is how the panel initially looks. The user can then switch between the two new ranking plots as they wish, including the simplified radial SUCRA plot. There is also an option to have the plot in a colourblind-friendly scale. All of the plots can be downloaded and the ranking results exported as a CSV file. Furthermore, as is the case throughout MetaInsight, there is functionality to run a sensitivity analysis where you can remove studies and compare the results. So this new treatment ranking panel is currently live in a beta version of MetaInsight, so please do have a look and play around. We do have future plans and ideas, including ones that I alluded to earlier in this talk. Firstly, there are plans to extend the contents of the left and right panes within the ranking panel.
Regarding the left pane, we plan to allow the reader to switch between relative and absolute effects. With the right pane, we very much intend to increase the number of visualizations available to the user. Current plans include incorporating some form of risk-of-bias or GRADE visualization, so users can see the quality of studies being analyzed, and we also anticipate including David Phillippo's bias interval plot, where users can ascertain what level of bias could change the results and thus how robust they are. We have lots of ideas for making the panel more interactive, including options to easily change the reference treatment where applicable, and we plan to look into using the plotly package to enable further information via hovering, although unfortunately it isn't just a simple thing to convert these ggplot objects to plotly objects, as they have multiple layers. And finally, we plan to wrap the newly developed plots into an R package so that other researchers can create them with their own data outside of MetaInsight. So thank you for your time today; I just want to acknowledge Will Stahl-Timmins, the Biostatistics Research Group at the University of Leicester, and the rest of the CRSU group for their input into developing these designs. And here is the link again to the apps, so you can explore for yourself. Thank you. Thank you very much, Clareece, for such an interesting presentation; it's really great to see accessibility in the MetaInsight app as well. You've certainly given Marc Lajeunesse a run for his money with your presentation skills this year, so Marc, you have a challenge to meet ahead of your session tomorrow. So our final speaker in this session today is Tasnim Hamza, who is a doctoral researcher at the University of Bern. Tasnim, over to you. Today I'm going to talk to you about crossnma; it's a new R package to synthesize cross-design evidence and cross-format data using Bayesian network meta-analysis.
So standard network meta-analysis synthesizes aggregate data from randomized clinical trials, because that is easily accessible from the published literature. However, heterogeneity may be present across these trials, and we include participant covariates, or effect modifiers, as aggregate information to explain some of this heterogeneity. Including only mean covariates could, however, induce aggregation bias, so ideally we would like to include the covariates at the individual level from every study; but more typically we only have IPD from a subset of studies and aggregate data from the rest. In terms of study design, most NMAs include only clinical trials in the analysis, because they are typically at lower risk of bias; on the other hand, they are conducted in a restricted population, and this makes their results hard to generalize to the general population. Conversely, non-randomized or observational studies reflect reality better, but they come with a high risk of bias. We might need in some situations, though, to combine these types of evidence while taking into account the biases of each design. And this is what the cross-NMA model is doing. It's a recent extension of the NMA model to synthesize a mixture of data formats, IPD and aggregate data, and different study designs. We built the cross-NMA model by integrating four different approaches, which combine clinical and observational data, into the three-level hierarchical model that synthesizes IPD and aggregate data. How does the three-level hierarchical model work? First of all, we define an individual-level regression model to include participant covariates. Here, beta zero captures the prognostic effect, beta W is the within-study treatment-covariate interaction, and beta B quantifies the interaction between the relative treatment effect and the mean covariate value at the study level.
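Written out, the individual-level regression just described might take the following form for a binary outcome. This is a schematic reconstruction from the talk, not necessarily the exact crossnma parameterization, with $y_{ij}$ the outcome of participant $i$ in study $j$, $x_{ij}$ the covariate, $\bar{x}_j$ its study-level mean, $t_{ij}$ the treatment indicator, and $u_j$ a study intercept:

```latex
\operatorname{logit}\bigl(\Pr(y_{ij} = 1)\bigr)
  = u_j + \beta_0 x_{ij}
  + \bigl[\, \delta_j + \beta_W (x_{ij} - \bar{x}_j) + \beta_B \bar{x}_j \,\bigr] t_{ij}
```

An aggregate-data study only supplies $\bar{x}_j$, so only $\beta_B$ is estimable from it; this is why, in the next step, $\delta_j$ and $\beta_B$ are pooled across both data formats while $\beta_W$ comes from the IPD studies alone.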
And then, for aggregate-data studies, we have access only to mean covariates, so we can only estimate the coefficient beta B. Then we combine the evidence across IPD and AD for the relative treatment effect delta and the covariate coefficient beta B, while beta W is combined only from IPD studies. Now, how do we integrate the four approaches for combining clinical and observational data into this three-level hierarchical model? So the simplest approach is to not differentiate between the two types: fit the individual-level regression model for both the clinical and observational studies, and do the same for the aggregate studies. But we know that each type has a different level of bias, and this model doesn't account for that bias. And this brings me to our second model, which adjusts the relative treatment effects for the bias in each study. To the individual-level and aggregate-level models we add this highlighted part, the bias effect gamma multiplied by the bias indicator of that study, R. The bias indicator R takes values zero or one based on the assessment using risk-of-bias tools. However, we know that judgment using these tools is subjective and carries much uncertainty, and we reflect this uncertainty by assigning a Bernoulli distribution to the bias indicator, which provides the zero or one for R. Now, to estimate the probability of bias in each study, we either assign a beta distribution, where low risk-of-bias studies are given a distribution skewed towards zero and high risk-of-bias studies one skewed towards one, or we could estimate the bias probability from study characteristics through a logistic model. There is another way to reflect study bias: by directly modeling the bias-adjusted relative treatment effects theta, which are estimated by a weighted average of the unadjusted effect delta and the bias-adjusted effect delta plus gamma.
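To make the bias-indicator mechanics concrete, here is a minimal Monte Carlo sketch in Python; the function name and numbers are hypothetical, and the package itself handles this inside the Bayesian model via MCMC rather than like this:

```python
import random

def bias_adjusted_effect(delta, gamma, p_bias, n_draws=20000, seed=1):
    """Average of theta = delta + gamma * R over draws of the bias
    indicator R ~ Bernoulli(p_bias)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        r = 1 if rng.random() < p_bias else 0  # bias indicator R
        total += delta + gamma * r
    return total / n_draws

# A study judged at low risk of bias stays near its unadjusted effect;
# a high-risk study is pulled towards delta + gamma.
low_risk = bias_adjusted_effect(delta=-0.5, gamma=0.3, p_bias=0.1)
high_risk = bias_adjusted_effect(delta=-0.5, gamma=0.3, p_bias=0.9)
```

The Bernoulli draw is what carries the uncertainty of the risk-of-bias judgment into the treatment-effect estimate, instead of fixing R at zero or one.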
In bias-adjustment model one, from the slide before, this theta is either the unadjusted effect, for low risk-of-bias studies, or the bias-adjusted effect, for high-bias studies, but not both. Here, instead, we allow theta to be a weighted average of both parts: for low risk-of-bias studies we give a greater weight to the unadjusted part and a little weight to the bias-adjusted part, and vice versa for studies at high risk of bias. The last approach to combining clinical and observational data is using the observational information as a prior for modeling the clinical evidence. It is a two-step approach. The first step is to conduct a network meta-analysis only on the observational data, using individual data, aggregate data or a mixture of both; in a Bayesian framework we obtain a posterior distribution for each relative treatment effect. The second step is to conduct a network meta-analysis on the clinical data, but as priors we use the posteriors we got from the observational data. Now, some people could argue that observational data usually have a greater sample size compared to clinical trials, and this makes the observational information dominate the estimate. So to control this potentially high influence of observational studies, we down-weight the contribution from these studies by increasing the variance of the posteriors, which makes these distributions flatter. These models are implemented in the crossnma package, which is a suite of tools for performing network meta-analysis and meta-regression with individual participant data, aggregate data, or mixtures of both, and each format can come from clinical trials, observational studies, or a mix of both. Behind the scenes, models are estimated in a Bayesian framework using JAGS in R. The package allows for conducting standard network meta-analysis when we have aggregate data from RCTs; it allows the inclusion of IPD from these RCTs; and it also enables each format to include observational studies. So the workflow in crossnma is as follows.
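The two ideas in this passage, the weighted bias adjustment and the down-weighted observational prior, can be sketched numerically as follows (Python, with hypothetical function names; crossnma does this inside the Bayesian model):

```python
def weighted_bias_adjustment(delta, gamma, w):
    """theta = w * delta + (1 - w) * (delta + gamma); w close to 1 for a
    study at low risk of bias, close to 0 for one at high risk."""
    return w * delta + (1.0 - w) * (delta + gamma)

def downweighted_prior(post_mean, post_sd, weight):
    """Turn the posterior from the observational-only NMA into a prior for
    the RCT analysis, inflating its variance to flatten the distribution;
    weight = 1 keeps the posterior as-is, smaller weights down-weight it."""
    if not 0.0 < weight <= 1.0:
        raise ValueError("weight must be in (0, 1]")
    return post_mean, post_sd / weight ** 0.5

# Down-weighting to 25% quadruples the variance (doubles the SD).
prior_mean, prior_sd = downweighted_prior(0.2, 1.0, 0.25)  # -> (0.2, 2.0)
```

In both cases a single tuning quantity expresses how much the potentially biased source is allowed to influence the final estimate.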
First of all, you start with your datasets, participant data and aggregate data, or a single dataset of just one of them. Both should be in a long format, which means one row per arm for each aggregate-data study, or one row per participant in the IPD. We then call the crossnma.model function to construct a JAGS model and reformat the data. We might visualize the network, and then we run the analysis using the crossnma.run function; once the model is fitted, we can then produce a summary of the results or a league table, or check convergence. So let's apply this to an example. First of all, we have some individual participant data at the top here, so one row per individual, and you can see that in this example we have a binary outcome; and then we have aggregate data below, with one row per arm for each study, with its outcome and covariates. In each dataset you need to have at least these highlighted columns: study ID, outcome, the assigned treatment, and the design of the study (clinical or observational); for aggregate data you additionally need to provide a sample-size column. In combining these together in crossnma.model, first of all you need to indicate the names of your individual participant data and aggregate data. You then need to indicate the column names of these variables, and you then choose which model is to be used to combine treatment effects across studies, either a random- or common-effects model. You set the reference treatment; in this example, relative treatment effects will be evaluated versus A. And finally, you indicate which approach you would use to combine observational and clinical data; here goes one of the four approaches that I talked about before.
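As an illustration of the long format just described, here are hypothetical rows written out in Python (the column names are stand-ins, not the exact names crossnma expects):

```python
ipd = [  # individual participant data: one row per participant
    {"study": 1, "trt": "A", "outcome": 0, "age": 34, "design": "rct"},
    {"study": 1, "trt": "B", "outcome": 1, "age": 41, "design": "rct"},
]
ad = [  # aggregate data: one row per arm per study; sample size n required
    {"study": 2, "trt": "A", "outcome": 12, "n": 50, "age": 45.2, "design": "obs"},
    {"study": 2, "trt": "B", "outcome": 20, "n": 52, "age": 44.8, "design": "obs"},
]

def check_long_format(rows, required):
    """Confirm every row carries the required columns."""
    return all(required.issubset(row) for row in rows)
```

The point is simply that both formats share the core columns (study, treatment, outcome, design), while aggregate rows carry summaries (counts, means, sample sizes) instead of individual values.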
To conduct a network meta-regression, you could set a covariate; here we use age to adjust all relative treatment effects. You could also adjust the relative treatment effects for study bias using bias-adjustment model one, and here there are many more arguments to be set as well; all will be described in the help file and the vignette. You can also plot the network using the netplot function; you can pass different arguments to this function to control the colours, the thickness and many other features, very similar to the netgraph function from the netmeta package. And then you can go away and fit the model using the crossnma.run function; it takes the model we created with crossnma.model, and you need to set the MCMC settings: samples, iterations, burn-in, and so on. Once we have fitted the model, I can then print a summary of the results. So here it's showing us the mean, standard deviation, etc. for each of the parameters: we have the regression parameter b, the relative treatment effects d, and the bias effect, plus the two heterogeneity parameters. It also provides convergence information: the Gelman-Rubin statistic and the effective sample size. You could also produce the trace plot to check convergence of each parameter. And finally, you can create the league table with the crossnma league function; the value in each cell shows the relative treatment effect and the 95 percent credible interval of the treatment on top compared to the treatment on the left. So I really just gave a brief summary today; if you would like to find out more, you can check out our package on its GitHub website. Soon we will submit this package to CRAN with illustrated documentation and a detailed example, and we have also submitted our methods paper. Thank you very much. Thank you very much, Tasnim, for your excellent presentation on crossnma and how it can be used with both individual participant data and aggregate data, and I'm really looking forward
to that paper and its arrival on CRAN. So that's it for this session on NMA; we hope you enjoyed it as much as we did, and thanks so much to all our presenters. We'll see you at the next session, which kicks off with multiverse meta-analysis and will start in just over 30 minutes from now. Thank you.