We're live streaming. Hello everyone. Welcome to ESMARConf 2023 and this workshop on network meta-analysis using the R package netmeta. My name is Viviana Betancourt. This workshop is being live streamed to YouTube and has a group of participants taking part live. A very warm welcome to all of you. If you have any questions for our presenters, feel free to use the ESHackathon Twitter account by commenting on the tweet about this workshop. If you registered for this workshop, you can also ask your questions here on Zoom in the Q&A facility. You can also comment and chat with other participants on our dedicated Slack channel, details of which were sent along with your registration information. We will endeavor to answer all your questions as soon as possible. We would also like to draw your attention to our code of conduct, which is available on the ESMARConf website at www.esmarconf.org. Our workshop presenters today are Guido Schwarzer and Gerta Rücker from the Institute of Medical Biometry and Statistics at the University of Freiburg. Guido and Gerta, over to you. OK, Viviana, thank you for the introduction. Welcome to our workshop on network meta-analysis with our R package netmeta. Gerta is also participating here. You can find the materials, that is, the handout of this presentation as well as some R scripts and the data sets, on Zenodo. Here is the link, and as I've seen, Gerta has already posted this link in the chat as well. We start with a presentation, with some questions in between, and afterwards we have a practical. At the end, we will discuss the results of the practical together, and if there are any remaining open questions, we will go into those as well. So this is an outline of what I will be talking about. I will start with indirect comparisons and specifically the so-called Bucher method. Then I will move on to mixed treatment comparisons, combining direct and indirect evidence.
Then the next step will be the network meta-analysis model, and finally how to do all this using netmeta. The Bucher method was published in the 1990s, about 25 years ago. At that time, the common approach for indirect comparisons was to compare just the active treatment arms of randomized studies. But what you do then is lose all the advantages of randomization: you basically have non-randomized comparisons, and this approach is rather prone to bias. What the authors proposed is, instead of looking just at the active treatment arms, to compare differences of active treatment arms versus placebo in two sets of RCTs, for example comparing A with placebo and B with placebo, and then indirectly estimating the effect of A versus B. What they did in this publication is evaluate differences between direct and indirect evidence. What they did not do is formally combine these two sources of evidence. I would like to start by showing this method for a subset of a network meta-analysis of diabetes treatments, which is part of our R package. Here, nine diabetes treatments plus placebo were evaluated in 26 studies. Among them was one three-arm study comparing three of the 10 treatments; the other 25 were two-arm studies. The outcome was HbA1c, measured as mean change from baseline or mean value at end of treatment, so the blood glucose level, and small values are desirable for this outcome. Let's assume for the moment that we are only interested in two specific treatments, rosiglitazone and metformin. If we look at these 26 studies, we see that two studies compared them directly. However, there are six additional studies comparing rosiglitazone with placebo and four studies comparing metformin with placebo. So the question is: can we use these 10 studies to inform the comparison of rosiglitazone versus metformin?
So, to finally combine direct and indirect evidence, the first step is to look at how this indirect evidence is calculated. This is the small network we are looking at in this network graph; you see the six, four, and two studies, respectively. Here is some notation for this. Basically, what we do is conduct three pairwise meta-analyses: one with the six studies comparing rosiglitazone with placebo, one with the four studies comparing metformin with placebo, and one with the two studies comparing rosiglitazone and metformin. Based on these direct estimates, we can then calculate the indirect estimate comparing rosiglitazone versus metformin by looking at the difference of the differences, which is given here at the bottom. For this calculation, we ignore the two studies with the direct comparison and calculate the indirect estimate from the results of the other two meta-analyses. We can do all this, for example, with the meta package for pairwise meta-analyses, which is loaded here. Here I said that results should be printed with two digits, and that I am not interested in the common effect model; I would only like to see the results for the random effects model. Then I load the data set, which is part of netmeta. I use metagen, the function for generic inverse variance pairwise meta-analyses; the essential arguments are the treatment estimates and their standard errors. Then we say that the summary measure is a mean difference. Study labels are optional, but in forest plots, etc., we would like to see the names of the studies. And here is the data set. Because these are the data of the whole network meta-analysis, not just the 10 studies, I define subgroups, and these subgroups are basically the pairwise comparisons.
So treat1 and treat2, or treat1.long and treat2.long, are the variables which define which treatments are compared. I say that I am not interested in an overall result across subgroups, so no overall estimates should be calculated or shown, no test for subgroup differences, and so on. If I printed this result, I would get all pairwise comparisons within the Senn data set. But here, by using the subset argument, I say that I am just interested in those studies that compared metformin with placebo, rosiglitazone with placebo, or the two directly. And this is the resulting table. Here we again have the 12 studies: four, six, and two. We can see the estimates of the between-study variance in these subgroups; subgroup here means pairwise comparison, as we can see at the front. The estimation here is done using the REML estimator of tau-squared. The direct estimate is given directly in the last row, and the indirect estimate we can calculate by subtracting the first value from the second one. Then we have the direct and indirect estimates, which in this case are quite similar. Instead of allowing separate tau-squared values to be calculated here, we could also assume that tau-squared is the same for every pairwise comparison, which is actually an assumption of the network meta-analysis later on. So in the next step we will calculate a tau-squared value which is assumed to be the same within subgroups. In order to do that, and to get an indirect estimate just for the comparison rosiglitazone versus metformin, we will in the next step ignore the two studies with the direct comparison; otherwise we would get an estimate of tau-squared based not just on these 10 studies, but on all 12. And this is done here.
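The difference-of-differences calculation just described can be sketched in a few lines of base R. The numbers below are hypothetical pooled values, not the actual results from the Senn data set:

```r
# Minimal sketch of the Bucher method (hypothetical pooled mean differences,
# not the exact values from the Senn data set).
md_rosi_plac <- -1.20   # pooled direct estimate: rosiglitazone vs placebo
se_rosi_plac <-  0.10
md_metf_plac <- -1.10   # pooled direct estimate: metformin vs placebo
se_metf_plac <-  0.12

# Difference of differences: indirect estimate rosiglitazone vs metformin
md_indirect <- md_rosi_plac - md_metf_plac

# Under independence (no multi-arm studies), the variances simply add up
se_indirect <- sqrt(se_rosi_plac^2 + se_metf_plac^2)

# 95% confidence interval for the indirect comparison
ci_indirect <- md_indirect + c(-1, 1) * qnorm(0.975) * se_indirect
```

The same arithmetic underlies what the subgroup meta-analysis produces; the variance formula is revisited below when multi-arm studies come in.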
So again, we take our meta-analysis with all pairwise comparisons, define a slightly different subset, and say that we would like to calculate a common tau-squared within subgroups; we then get these results, a tau-squared of 0.139 and slightly different results for the indirect comparison. Obviously, the direct comparison is the one we have seen before. We could also do all this already using netmeta, so here I load the R package. I would like to note that some of the defaults that we defined before with settings.meta are also considered in the network meta-analysis; for more details, I refer to these two help pages. Here we make the Senn data set available, and here is the command for the network meta-analysis, so to speak, of this very small network comparing just rosiglitazone with placebo and metformin with placebo. In the netmeta command, just as for metagen, we have to provide the treatment estimates, the standard errors, and the study labels, and also information on which comparison we are actually looking at; therefore we have to provide these two additional variables. The subset is the same as before, and here we say that we are interested in the REML estimator for tau-squared; by default we would get the DerSimonian-Laird estimator. We also say that our reference group should be placebo, so in the printout we will always see results in comparison to placebo, and we specify the order of treatments we would like to have in the printout. Using netgraph, we get a network graph for this very small network, which is basically the same as we have seen on one of the previous slides. And this is then the standard printout for a network meta-analysis, or in this case for our indirect comparison.
What we see first in the printout is the number of studies we included and the number of pairwise comparisons, which here is the same; I will come back to this point later when we get to the network meta-analysis model. Then we have three treatments, rosiglitazone, metformin, and placebo, and we have two designs: rosiglitazone compared with placebo, and metformin compared with placebo. These are the direct estimates in this model, and what you should see is that we get exactly the same estimate of tau-squared as before in our meta-analysis where we defined subgroups according to the pairwise comparisons. So there is a one-to-one relation between the two methods here. The next step is then to calculate a variance estimator, as we already have the point estimator for the indirect comparison. Under the assumption that all our treatment estimates are independent, which in our case is fulfilled, we can use this rather simple formula going back to the Bucher method. It would not be fulfilled if we had any multi-arm studies, that is, three-arm or four-arm studies comparing more than two treatments; then we would have to use other methods, which will be shown later on. What we get here is a variance estimate for our indirect comparison which is just the sum of the two variances. And here I would just like to show you the results for this very small network using the netleague function, which provides a league table with the results for both the direct and the indirect evidence. In a league table, by default in our package netmeta, the lower triangle shows the network estimates, so these are the results from the network, and the upper triangle shows the estimates from the direct comparisons.
Here you note a dot, which means there are no studies directly comparing rosiglitazone and metformin included in our network meta-analysis here. These are the direct estimates we have seen before, and these are the network estimates. The direct and the network estimates are all the same here because we do not have a loop; each network estimate is based on either direct or indirect evidence alone. The bottom row gives the direct estimates, and this here is the indirect estimate together with its confidence interval. The next step, which was not done in the Bucher publication, is to combine the direct and the indirect evidence. For this very small network, we can combine them rather easily: we combine the direct and the indirect estimates, where the weights are basically the inverse variance weights. Let's now do this for the full network. Here the only thing that changes is the subset argument: we again take into account the two studies that compared rosiglitazone and metformin directly. Then we get this league table, which in this case gives some real network estimates, so to speak, based on both direct and indirect evidence, and accordingly we get slightly different results as well. What we can also see is that in this network including the 12 studies, not the 10, we get a different estimate of tau-squared: this is the value we had before, and this is the value based on the larger network. On the next slide, we see a forest plot generated with this command, showing for our very small example the direct estimates, the indirect estimates, and the mixed treatment estimates.
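The inverse-variance combination of direct and indirect evidence mentioned above can be sketched in base R; the numbers continue the hypothetical example and are not the actual pooled results:

```r
# Sketch: combining a direct and an indirect estimate by inverse-variance
# weighting (hypothetical numbers).
md_direct   <- -0.20; se_direct   <- 0.15   # from the head-to-head studies
md_indirect <- -0.10; se_indirect <- 0.16   # from the Bucher calculation

w_dir <- 1 / se_direct^2
w_ind <- 1 / se_indirect^2

# Mixed (combined) treatment estimate and its standard error
md_mixed <- (w_dir * md_direct + w_ind * md_indirect) / (w_dir + w_ind)
se_mixed <- sqrt(1 / (w_dir + w_ind))
```

Because the weights add, the mixed estimate is always at least as precise as either source alone, which is exactly what the forest plot on the slide shows.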
What we see in general is a nice agreement between the direct and indirect estimates, and also that the mixed treatment estimates are more precise than the individual ones. This is typically the case under the random effects model. There are some situations where this might not be the case, especially if the direct and indirect evidence disagree, but that is a general problem which one would then have to look into in more detail. And this is everything I would like to say on indirect comparisons and mixed treatment comparisons. All of this was based on the triangle printed here in red: these are the two studies comparing the two treatments directly, and the other ones comparing them indirectly via placebo. But as you can see, the total network is larger, so the next step will be to do a network meta-analysis for the full network. The blue area here indicates that there is one three-arm study, comparing acarbose, metformin, and placebo, and for that, the methods I have described so far will not work. But before I move on to the network meta-analysis setting, I would like to ask whether there are any urgent questions up to this point. So far no questions on Slack, Twitter, or any other media, and I cannot see much in the chat; I think we can continue. Then let's move on, now really to network meta-analysis. What are our aims in a network meta-analysis? There could be different ones. So far, we looked at direct and indirect evidence and tried to combine it, and our aim was to get a more precise estimate of treatment differences. Another aim, however, could be to include multi-arm studies. This is not easily possible with what I have presented so far; for that we will need network meta-analysis methods.
Also, we often would like to rank treatments according to their efficacy in order to make recommendations for the future. The assumptions of a network meta-analysis are, first, that studies must be independent. This is typically also assumed in pairwise meta-analysis, even though there are methods for dependent effect sizes and so on; but for the methods presented here, it is the underlying assumption. The second assumption is that our effects are consistent. What is meant by that? I am not talking about consistency in a mathematical sense, but about network consistency, and there are two equivalent descriptions of it. The first: if we look at the sum of direct treatment effects around a closed loop in our network, that sum must be zero. If I go back to this slide, there are several closed loops, and in red one closed loop is highlighted. Under consistency we would expect that if we compare placebo with metformin, metformin with rosiglitazone, and rosiglitazone with placebo, and add them all up, we are back at zero. What we cannot do in this network is evaluate whether, for example, miglitol gives consistent effects in comparison to rosiglitazone, because there is no closed loop; so it is not possible to evaluate this, but nevertheless the underlying assumption of the model is that consistency holds. The equivalent description of network consistency is that the indirect evidence for any comparison does not differ from the direct evidence. Here is an example for three treatments A, B, C: under consistency, the direct effect of A versus B is the same as the indirect effect, that is, the same as the difference of the two other direct effects. There are methods to evaluate this consistency, but I will not go into detail on this today.
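The loop-consistency condition just described can be written down in a couple of lines. The effects below are made up for illustration and chosen to be exactly consistent:

```r
# Sketch: loop consistency for a triangle A-B-C (made-up direct effects).
# Under consistency, the signed effects around a closed loop sum to zero.
d_AB <-  0.5   # direct effect A vs B
d_BC <-  0.3   # direct effect B vs C
d_CA <- -0.8   # direct effect C vs A (= -(d_AB + d_BC) if consistent)

loop_sum <- d_AB + d_BC + d_CA   # 0 under exact consistency

# Equivalent formulation: the indirect estimate of A vs B via C,
# (A vs C) + (C vs B) = -d_CA - d_BC, equals the direct effect d_AB.
ind_AB <- -d_CA - d_BC
```

With real data the loop sum is of course never exactly zero; consistency tests ask whether it differs from zero by more than chance.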
Okay, to introduce the network meta-analysis model, I would like to start with a small example with four treatments, labeled A, B, C, D, and six studies, each providing a single pairwise comparison, so in total we have m = 6 pairwise comparisons. If you look at the network graph for these six studies, you see that there are two studies comparing A with B; the other comparisons each have just one study, and there is no study comparing A with C directly. What we can do is write down the list of pairwise comparisons. As I said, two studies compare A with B; these are the first two in this listing, and so on. What we do is estimate the treatment effects, and the underlying idea, which comes from pairwise meta-analysis, is that the first two studies estimate the effect of A versus B, the third one estimates the effect of B versus C, and so on; the observed effects are these underlying effects plus some random error. Next, we express these study-specific underlying true effects as differences of the four treatment effects at the top: we say the comparison A versus B is just theta_A minus theta_B, which could for example be a difference of mean differences or a difference of log odds. Now let's move this a little bit to the left to get some more space, because the next thing we do is switch from this notation to matrix notation. We introduce a matrix which we multiply with the vector of the unknown true effects of interventions A to D. As you can see, if we multiply this row of the matrix with the vector of treatment effects, we are back at theta_A minus theta_B. We can write this down in matrix notation, and this is the underlying model that we will look at in more detail, the one used in the paper by Gerta Rücker.
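The design matrix for a toy network of this kind can be written out explicitly. The exact set of comparisons below is an assumption for illustration (the slide's network may differ), with each row coding one two-arm comparison as +1/-1 against the treatment vector:

```r
# Sketch of a design matrix X for a toy network with four treatments
# (A, B, C, D) and six two-arm comparisons; the comparisons chosen here
# are illustrative, not necessarily those on the slide.
X <- rbind(
  c( 1, -1,  0,  0),   # study 1: A vs B
  c( 1, -1,  0,  0),   # study 2: A vs B
  c( 0,  1, -1,  0),   # study 3: B vs C
  c( 0,  1,  0, -1),   # study 4: B vs D
  c( 0,  0,  1, -1),   # study 5: C vs D
  c( 1,  0,  0, -1)    # study 6: A vs D
)
colnames(X) <- c("A", "B", "C", "D")

theta <- c(A = 0, B = -1, C = -0.5, D = -2)  # hypothetical true effects

# Each element of X %*% theta is a true pairwise difference,
# e.g. row 1 gives theta_A - theta_B.
delta <- as.vector(X %*% theta)
```

Note that the rows of X sum to zero, so only differences of the thetas are identified; this is why a reference treatment is needed later on.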
In this model we consider all m pairwise comparisons, denoted here by this vector, with corresponding standard errors. We have the design matrix as given before, and all pairwise comparisons are used; a weight matrix is then defined based on the full set of pairwise comparisons. That means for multi-arm studies we include all pairwise comparisons of those studies; I will talk about the implications of this on one of the following slides. As you can see, for the common effects network meta-analysis model we just use a weight matrix based on the inverses of the squared within-study standard errors. Under the random effects model, we use, as in pairwise meta-analysis, weights based on the sum of the within-study variance plus the between-study variance. By default a generalized DerSimonian-Laird estimator is used, but in the example before I already used the REML estimator, so this tau-squared could also be based on REML or ML, maximum likelihood. Then in this model we can obtain our network estimates. This is a vector of the same length as the number of pairwise comparisons, and for that we have to do this matrix multiplication, an operation based on the design matrix defined by the network structure and the weight matrix defined above. This H matrix is also called the hat matrix in regression, and the interpretation is that our network estimates are linear combinations of the observed estimates, with coefficients from the hat matrix. Now that we have the network estimates, we also need information on the standard errors in order to calculate confidence intervals, prediction intervals, and so on. For that we can calculate the covariance matrix, again based on the design matrix and the weight matrix.
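The core computation can be sketched as a weighted least squares fit in base R. This is a simplified sketch under a common-effect model with no multi-arm studies and made-up data for the toy A/B/C/D network; netmeta does this, and considerably more, internally:

```r
# Sketch: weighted least squares network estimates (common-effect model,
# two-arm studies only, hypothetical data for the toy A/B/C/D network).
X <- rbind(
  c( 1, -1,  0,  0), c( 1, -1,  0,  0), c( 0,  1, -1,  0),
  c( 0,  1,  0, -1), c( 0,  0,  1, -1), c( 1,  0,  0, -1)
)
y  <- c(1.1, 0.9, 0.6, 1.9, 1.4, 3.1)   # observed comparisons
se <- c(0.3, 0.4, 0.3, 0.5, 0.4, 0.6)
W  <- diag(1 / se^2)                    # inverse-variance weight matrix

# X has rank 3: effects are only identified as differences, so we fix A
# as reference. Dropping A's column and flipping signs parameterizes the
# basic parameters as (A vs B, A vs C, A vs D).
Xb  <- -X[, -1]
fit <- solve(t(Xb) %*% W %*% Xb, t(Xb) %*% W %*% y)

# Fitted values Xb %*% fit are the network estimates of all six
# comparisons: linear combinations of the observations (hat matrix idea).
net_est <- as.vector(Xb %*% fit)
```

The normal equations hold at the solution, i.e. the weighted residuals are orthogonal to the design, which is a quick internal check on the fit.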
We also get a generalized Q statistic, defined in this way, and a heterogeneity statistic I-squared for our network model. And now to the point about multi-arm studies. As I already said, in the method implemented in netmeta we consider all pairwise comparisons of a multi-arm study, and as we know, a study with p arms has p choose 2 pairwise comparisons but only p minus 1 independent comparisons, or degrees of freedom. We can directly see that only for two-arm studies, comparing two treatments, are the number of pairwise comparisons included in the model and the degrees of freedom the same; otherwise we always have more pairwise comparisons included in the software than degrees of freedom. Why, for example, do we have three degrees of freedom for four treatments? One way to think of it: this is not a network plot for a network meta-analysis but for a single study, and this study compared four treatments. If we know the results of treatment two versus one, four versus one, and three versus one, we can calculate the remaining three pairwise comparisons; accordingly, we only have three degrees of freedom. Therefore the standard approach for multi-arm studies is to choose a study-specific reference treatment and to consider only comparisons with this study-specific reference, which are then called basic parameters; in my example, this would have been treatment one as the reference. The method implemented in netmeta, going back to electrical network theory, instead adjusts the standard errors of multi-arm studies: they are inflated, and as they are inflated, the weights are reduced, so we can include all pairwise comparisons of multi-arm studies in the model. To show this graphically: again, this is the four-arm study, and this is the standard approach.
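The counting argument for multi-arm studies can be sketched in a couple of lines (the effect values are hypothetical):

```r
# Sketch: pairwise comparisons vs degrees of freedom for a p-arm study.
p <- 2:5
n_comparisons <- choose(p, 2)   # p over 2, i.e. p*(p-1)/2
df            <- p - 1          # independent comparisons

# For a four-arm study, the comparisons against arm 1 determine the rest,
# e.g. effect(2 vs 3) = effect(2 vs 1) - effect(3 vs 1):
d21 <- -0.4; d31 <- -0.9; d41 <- -1.5   # hypothetical effects vs arm 1
d23 <- d21 - d31                        # derived, not an extra df
```

Only for p = 2 do the two counts agree (1 comparison, 1 degree of freedom); from three arms on, the comparisons are linearly dependent, which is exactly why the standard errors have to be adjusted.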
So there we cut off three of the six comparisons, while in netmeta, in the network meta-analysis model by Gerta Rücker, we reduce the weights accordingly. This was the information on the model, and now I would like to show how you can use our package netmeta for the analysis. The diabetes data set we have already seen in the part on indirect comparisons and mixed treatment comparisons. I will also look at two more examples, and the reason is that they come in different data formats which are quite often found for network meta-analyses, and I would like to show how you can use netmeta to transform the data into the form needed by the netmeta function. The second example is the quite famous smoking cessation example, comparing four interventions to quit smoking in 24 studies. Here the outcome of interest is smoking cessation, a binary outcome; accordingly, a good treatment would have a larger number of study participants who stopped smoking. The third example is from acute mania: 14 treatments and 47 studies, so this is the biggest network of the three, with several three-arm studies and the rest two-arm studies. Again we have a binary outcome, here response to treatment, and again this is an outcome where we would like to see many events. For this example we have an Excel file available, while the other two data sets are part of netmeta. So first of all, the Senn data set that we used before, but here with some more information on it. What I show you is a subset of three studies: DeFronzo, Lewin, and Willms. As you can see, the first two studies provide one row each, while the last study provides three rows; this is the three-arm study comparing acarbose, metformin, and placebo, the Willms study. Each row corresponds to a single pairwise comparison, in so-called contrast-based format. Here is the mean difference.
So here are the TEs, the treatment estimates in our model, corresponding to the estimated effects, with the underlying assumption that the first one estimates the effect of metformin versus placebo plus some random error, and so on. The standard errors are given in the second column; these are the quantities used in the formulas before. Important things about the data set: first of all, the treatment labels must be identical to identify the treatments. For example, if you had an additional space at the end of "Metformin", then within your network you would have two nodes for metformin: "Metformin" without the extra space and "Metformin " with the space would be treated as separate interventions, because the software does not know that they are basically the same. This is easy to spot: if you look at the network graph and see an intervention twice, there must be some typo or other reason why it does not appear as one node but as two. For multi-arm studies it is important that the study labels are the same, and this is really important. For example, if you had written down not "Willms 1999" three times but "Willms 1999-1", "-2", "-3", then the software would think these are three separate studies and accordingly would treat the results as independent, which they are not. So it is really important that your study labels are identical. The same holds for the treatment labels, but there you would certainly notice that there is a problem, which is not always the case for study labels. One more thing: if you provided only two of the three pairwise comparisons for the Willms study, so basically the standard representation, then the software would stop with an error stating that you did not provide the correct number of pairwise comparisons for a multi-arm study. And then, we have basically already seen this netmeta command before, but here we use it without the subset argument.
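As an aside, the label pitfalls just described are easy to check in base R before running netmeta; the small vector below is a made-up illustration:

```r
# Sketch: spotting inconsistent treatment labels (made-up example).
treat <- c("Metformin", "Metformin ", "Placebo")   # note the trailing space

n_raw     <- length(unique(treat))          # 3 distinct labels -> 3 nodes
n_trimmed <- length(unique(trimws(treat)))  # 2 after trimming whitespace

# Flag labels that carry stray leading/trailing whitespace
stray <- treat[treat != trimws(treat)]
```

If `n_raw` and `n_trimmed` differ, some labels differ only by whitespace and would silently split one intervention into two network nodes.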
So again, what we really have to provide are the first four arguments: the treatment estimates, the standard errors, and the information on the two treatment groups. If you have multi-arm studies, you also have to provide the study labels; if you do not, then netmeta assumes that each pairwise comparison comes from a separate, independent study. One more thing: by default, the first treatment in alphanumeric order, here it would be acarbose, is used as the reference treatment in printouts and plots, but typically we have a clear reference treatment like placebo, and in this command we specify just that. After all this, we come to the printout for this network meta-analysis. We see that we have the 26 studies mentioned on one of the earlier slides, and 28 pairwise comparisons, because one of the studies is the three-arm study with three pairwise comparisons. We have 10 treatments and 15 designs; a design, as I said before, means any unique combination of these 10 treatments. Because we said placebo is our reference treatment, all the results are printed relative to placebo, so we can directly see how well all other treatments work compared to placebo. As you can see from the negative signs, all of them are actually better than placebo at reducing the HbA1c value on average. At the bottom we also get information on the heterogeneity, the tau-squared estimate, the test for overall heterogeneity or inconsistency, and an I-squared value, which is rather large here. What follows now is a printout that you probably would not really want to look at in an application, but here, in order to understand the model, we would like to show it to you. So this is, first of all, the summary command, which prints more detailed results.
With some additional arguments we say that we would only like to see the three studies I talked about before, that we do not want to use a reference group, that we want to see a matrix with all treatment comparisons, and that we would like to abbreviate the treatment names to just four characters. The first bit in this very long printout is the original data used as input to the network meta-analysis. Again we see the studies and the comparisons, and what is important: within the software these entries are ordered by increasing treatment name. So if the original comparison had been metformin versus acarbose, the software switches the two around and accordingly also switches the sign, here from minus 0.2 to 0.2. Otherwise these are the original treatment estimates and the original standard errors. Then it states that these are the adjusted standard errors for multi-arm studies. I just noticed today that this description is quite correct for the common effects model, but for the random effects model it is a little bit confusing, because, as you can see, also for the two-arm studies we have different values compared to the original ones. The reason is that these are the square roots of the sum of the within-study variance plus the between-study variance, so within-study variance plus tau-squared, and the square root of that. But for the multi-arm study, Willms, there is additionally an internal adjustment for the dependency coming from having three pairwise comparisons for a study with only two degrees of freedom. Then on the next slide we get the results for the random effects model. These are actually already the network estimates; you can see this because we always get the same results for comparisons like metformin with placebo, and so on.
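The standard-error inflation for a two-arm study under the random effects model, as described above, amounts to one line of arithmetic (hypothetical within-study standard error; the tau-squared value is the estimate mentioned earlier):

```r
# Sketch: standard error shown for a two-arm study under the random
# effects model = sqrt(within-study variance + tau^2).
se_within <- 0.25    # hypothetical within-study standard error
tau2      <- 0.139   # between-study variance estimate from earlier

se_random <- sqrt(se_within^2 + tau2)
```

This is why even two-arm studies show values different from the original standard errors in that part of the printout; the multi-arm adjustment for the Willms study comes on top of this.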
Then again we get the part we have seen before, and now comes the matrix with all network estimates. For example, for rosiglitazone versus placebo we get an estimate of minus 1.23; placebo versus rosiglitazone is just the other way around, so only the sign changes between the two. These are all network estimates, relying either only on direct, only on indirect, or on mixed evidence. For example, what did I say before? I can't remember. I think I said something like rosiglitazone versus sitagliptin, where we did not have any direct evidence, but nevertheless we get a network estimate. This is then followed by the same matrix for the lower and the upper confidence limits, which I do not print here in detail. Then again we get the information on the heterogeneity, as well as a legend, because we abbreviated the treatment labels. And finally, a forest plot for this example: again we are using the reference that we had before, placebo, and in the plot we get basically the same information as in the printout. Again we can clearly see, even better here, which treatments work better; obviously rosiglitazone is one of the best, as is metformin, for example. Okay, so this was a standard analysis for one network, and the summary printout, as I said, was just to give a little bit more detail on the model. Let's now move on to the next example, the smoking cessation data. If we look at the data, we see that each row in this data set corresponds to a single study, and these are really all the variables in the data set. As you can see, we do not even have study labels, but as it is clear that one row is one study, we do not really need them for the analysis. We would call this a wide arm-based format, because for each treatment arm we have a set of variables.
So for the first treatment arm we have the number of events, the number of participants in that arm, and the treatment label, and the same information for the second and third treatment arms. What we also see is that the first two studies are three-arm studies, because we have information on three arms, and the others are all two-arm studies, because there we have missing values and no information on the third treatment arm. One disadvantage of such a data format is that if we added a study with four treatment arms, we would have to add additional variables to the data set, which would be missing for all the two- and three-arm studies, and so on. What we must do now is transform this wide arm-based format into the format that is needed as input for netmeta, and for that we can use the pairwise function. The pairwise function can be used for various outcome types — binary, continuous, incidence rates, and so on. Here we are using it for a binary outcome, and what we then have to provide — what we always have to provide — is the information on the treatment labels; that is the first argument. For binary outcomes we also have to provide the numbers of events and the sample sizes. We are using this list construct because there can be a varying number of variables in the data set, depending on the study with the largest number of treatment arms. So if we had a five-arm study, we would have to provide information for events one to five, and so on, and for that we use the list construct. Internally, the metabin function is called for a binary outcome. And here we say we are interested in the odds ratio as summary measure. What then happens is that log odds ratios and corresponding standard errors are calculated, and this here is information on some of the variables in a pairwise object.
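The pairwise transformation just described can be sketched like this, using the smokingcessation data set shipped with netmeta (one row per study, one set of columns per arm):

```r
library(netmeta)
data(smokingcessation)
# Wide arm-based format: treat1/2/3, event1/2/3, n1/2/3 per study.
# The list construct collects the per-arm variables; sm = "OR" means
# log odds ratios (TE) and standard errors (seTE) are computed.
p1 <- pairwise(treat = list(treat1, treat2, treat3),
               event = list(event1, event2, event3),
               n = list(n1, n2, n3),
               data = smokingcessation,
               sm = "OR")
head(p1)
```

A three-arm study contributes three rows to the result (one per pairwise comparison), a two-arm study contributes one, which is exactly the structure described next.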
So TE here is the log odds ratio, seTE the corresponding standard error, then the study labels, the treatment labels, and also the information on the numbers of events and so on. What you can see is that the first three rows are for the first study, which was a three-arm study; the next three are for the second study; and the single line here is for a two-arm study comparing A and C. The network meta-analysis can then be conducted by using the pairwise object as basically the single argument of the function. With netgraph we get this default plot. If you look closely, you will see that two numbers are printed atop each other, which is not so nice, but there are additional arguments in netgraph that can be used to move these numbers around. As you can see, this is a fully connected network, and this is the printout for it. A was the reference — I think it was the no-contact condition or something — and you can see that all active interventions have an odds ratio larger than one, which indicates that in general they are better suited to help participants stop smoking. But again, this second one, B, is not significant, so you cannot completely exclude that it is basically as good, or as bad, as A. In this first plot I show you a little bit more information: here we use A and D as the references — A because it was our actual reference group, and D because it was the best performing one. We can see that the results at the top all favor the first intervention, meaning B, C, or D, and the ones at the bottom favor the second one, which means they favor D. So we see that D is better than A, but not strictly better than the two other active interventions.
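The fit and the default network plot just mentioned can be sketched as follows (a hedged sketch; `p1` is the pairwise object from the previous step):

```r
# Fit the network meta-analysis; the pairwise object carries TE, seTE,
# treatment and study labels, so it can be passed as the single argument.
net1 <- netmeta(p1)

netgraph(net1)                       # default network plot
forest(net1, reference.group = "A")  # forest plot against reference A
```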
Then let's get to the third example, the acute mania data set, and here again we have a different data format, which we call a long arm-based format. Here each row of the data set corresponds to a single treatment arm. As you can see, the first two entries are from the first study, then we come to the next study, and at the bottom we see that study 47 is a three-arm study comparing these three interventions. One disadvantage of this data format is that if you have something like a risk-of-bias variable — a variable that is always the same for all treatment arms — you have to replicate the values, because they must be provided that way. This is the one advantage, I would say, of the wide arm-based format: there you would only have to provide one column with the risk-of-bias assessment, which is on the study level. Good. Again we can use pairwise to transform the data. Here the main variables are the number of responders and the number of patients, and in this case the pairwise command is quite easy, because we have just a single column in our data set for the treatment variable, for the events, and for the sample sizes. But what is important for this long arm-based format is that the study labels are also mandatory, because otherwise we would not know which row corresponds to which study. Then this is again just the same as we have seen before, and here is the corresponding network graph. I have been using this rotate argument to move placebo to the right side. I think in the original plot it was somewhere down at the bottom, and then I said I would like to go counterclockwise and move it five positions in that direction, which is done with this rather obscure command. And then this is the corresponding forest plot, and that was it with the three examples. Now just one summary slide.
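The long-format call might look roughly like this. This is a sketch under assumptions: `dat.long` and its column names (`treatment`, `resp`, `n`, `studlab`) are placeholders for the actual mania data set, not its real variable names.

```r
# Long arm-based format: one row per treatment arm. Study labels are
# mandatory here, so pairwise() knows which rows belong to the same study.
p2 <- pairwise(treat = treatment,
               event = resp,       # number of responders per arm
               n = n,              # sample size per arm
               studlab = studlab,  # required for the long format
               data = dat.long,
               sm = "OR")
net2 <- netmeta(p2, reference.group = "Placebo")
```

In contrast to the wide format, no list construct is needed, because each variable is a single column.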
Yeah, I think one can certainly say that over the last decade methods for network meta-analysis have been established and are nowadays in routine use in medical, but also other, applications. The main additional assumption that we have to deal with is network consistency. This is something I did not really talk about today, but evaluating it is one of the essential next steps one has to take. For that I would like to refer you to our paper on netmeta, which has just been published in the Journal of Statistical Software. There we also talk about ranking of treatments, a little bit more on the generation of league tables, and this important bit, the evaluation of inconsistency. The final comment, one we have had for quite some time now, is that the main limitation of netmeta is that we do not have methods for subgroup analysis or for network meta-regression at the moment. You can do a subset analysis — I have done that before; you have seen that with the subset argument — but there is no subgroup argument. One problem with subgroup analysis is that different subgroups can contain different sets of treatments, which makes it a little bit more difficult. Then there is the list of references, but I would say I will leave it at that slide and thank you for your attention. — Guido, there have been some questions on Slack for you, although Gerta has already answered some of them in the chat. The first one was: how much data do we need to do a network meta-analysis, and do we need much more data than for a traditional meta-analysis? — I mean, the first example, with the indirect comparison that I showed you — I would call that a kind of network meta-analysis, and that is really a very small example with, what was it, ten studies, four and six. Obviously, especially in order to evaluate network consistency, we need a network with some loops in it.
If you have a simple star-shaped network, which you come across sometimes — placebo as the natural reference treatment, and all other interventions compared only to placebo — you cannot evaluate network consistency at all, because there are no loops, and if there are no loops there is no way to statistically evaluate whether there is any inconsistency. When you then do these indirect comparisons, you simply have to assume that this basic assumption is fulfilled. Gerta, maybe you could comment — there are some approaches trying to summarize the complexity of networks? — Was that a question? I don't understand; I don't see it. — My question is also there: I know there are some approaches for measuring the complexity of networks, but I cannot really recall them. If you know them, fine; otherwise we can leave it. — I don't understand the context. Was there a question in the chat in that direction? I didn't see one. — The question was: when can you conduct a network meta-analysis? — Yes, the first question was just mentioned, and my response was quite similar to yours. I said that your very first example was a very small example of a network meta-analysis, and it might be that the power of such a small network is not very big, but you can in principle do a network meta-analysis with only three treatments and a small number of studies. What we often see, though, is that small networks are disconnected: you may have one network comparing A to B to C and another network comparing D to E, and they won't grow together, and then we have a different problem. That can happen if we have only a small number of studies. But I still don't understand the relation to that question of complexity. — We can talk about that during the break, Gerta. — I don't understand; maybe I missed some discussion. I don't see it. — It was just my idea, Gerta. We can discuss this later.
Of course there are complexity measures for graphs in general, and I know some of them, but I don't see the relation to the questions that have been brought up. — Okay, so we still have a few open questions. Most of them have been answered by Gerta in the chat. There is still one by Mohamed: how do we resolve issues where the assumption of transitivity is violated across a few studies? — I mean, the most drastic approach would be not to conduct a network meta-analysis at all. Another approach would be network meta-regression, but if you only have a very small number of studies, then that already somewhat contradicts itself. So I'm not sure whether I can give a definite answer there. We could conduct just the pairwise comparisons and look at those, but there is only so much you can do with that approach as well. So yeah, sorry, no clear answer on that. — Okay, and I've seen another open question here: when preparing data for NMA, when we have crossover studies that report data as a parallel trial, one might like to calculate effect sizes and their corresponding adjusted standard errors for these studies by using the correlation, either assumed or back-calculated from studies providing raw data, according to the method described by Elbourne 2002. Is there an option for this in pairwise, or should we stick to correcting the output of pairwise manually prior to the NMA? — There is no possibility to do that in pairwise, so you would have to do that beforehand. The same goes for cluster-randomized studies, for example: you would also have to calculate these standard errors beforehand. — Okay, I am checking the different channels for more questions, but nothing so far. Then I would say we can move on to the practical. My idea would be: if you go to the Zenodo website and download what you find there, among other things there is the practical for you to do, and in particular there is also the file with the R commands that are used in the practical.
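The "calculate it beforehand" step for crossover studies mentioned in the Q&A above could be sketched like this. All numeric values are illustrative placeholders, and the formula is the usual paired-difference standard error with an assumed within-person correlation, in the spirit of the Elbourne approach:

```r
# Standard error of a paired (crossover) mean difference, given the
# per-condition standard deviations, the number of participants, and an
# assumed or back-calculated within-person correlation r.
se_crossover <- function(sd1, sd2, n, r) {
  sqrt((sd1^2 + sd2^2 - 2 * r * sd1 * sd2) / n)
}

# Illustrative values only:
se_crossover(sd1 = 1.2, sd2 = 1.4, n = 30, r = 0.5)
```

For positive correlation this is smaller than the parallel-group standard error `sqrt((sd1^2 + sd2^2) / n)`; the adjusted TE/seTE pairs would then be supplied to netmeta directly.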
And I would say, it's now a little after four. You have 45 minutes or so, until a quarter to five, and then we come back here and discuss the results of the practical. I will be online, and if there are any comments in the chat, I'm happy to assist. Okay, then I'll see you back in about 42 minutes. — Should I just start, Viviana? — I think we can continue with the program. — I see there's been a long discussion about the grid graphics. Yeah, and I really do not have a clear answer on that. When I started to write the meta package, I was using grid, but that was 20 years ago; there was no ggplot2 or anything else. And I did not try to change this in between, because that would be an enormous task. But I don't fully see the problem — I simply don't have that problem myself, so maybe it depends on the hardware. Do you see my window? — Yes. — This is my default window when I produce this forest plot, and it looked quite different from that one, which is probably, as Wolfgang said, because he is using a screen with a much larger resolution, and therefore his default window looks like this. What I would do here is simply change the size of the window; I don't know whether on his side the actual forest plot would then get larger or not. For me, typically the more difficult problem is that I get something like this, where I cannot see the full forest plot. Sometimes you can fix this by resizing the window, but if it is a very wide plot, then sometimes it does not fit even on my laptop screen. What I then typically do is use the pdf command and generate a PDF file. I think I did this in the R scripts for the presentation. So here I'm using the pdf command, and I specify the height and width of the PDF file — this is in inches, as far as I remember. I could also use svg or png or something.
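The device-based workflow just described can be sketched as follows (`net1` is a placeholder for a fitted netmeta object, and the file names are illustrative):

```r
# Write the forest plot to a PDF with an explicit size (in inches),
# instead of relying on the on-screen graphics window:
pdf("forest-net1.pdf", width = 10, height = 6)
forest(net1)
dev.off()

# Bitmap devices use different units, e.g. png() sizes are in pixels:
# png("forest-net1.png", width = 1000, height = 600)
```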
Then I would have to check what values are expected for height and width for a PNG file, for example; I think that is different. Let's have a look at png — so this is in pixels rather than inches, but it is all the same principle; the values would just have to be changed accordingly. For PDF files, what does it say here — the default width and height is seven, and in this case it really is in inches. It would be nice not to use the grid engine but something else, in order to just get the right plot, but I do not see that happening in the near future. Okay, concerning the example, I think there was some discussion about whether one could calculate the pairwise comparisons using escalc in metafor. I am not sure, because I do not use escalc very often, whether you can calculate all pairwise comparisons there or not. If not, then you could not use it as input to netmeta, because netmeta really expects that your data set contains all pairwise comparisons for multi-arm studies. That is important. I think there were not many other problems, so let me close this one. In the chat there was not much else going on either, so maybe I can just briefly mention some points on the example. This was a different example using a continuous outcome. The outcome, I think, was the reduction in off-time, so the time in which the treatment is not working. A negative value here means a larger reduction, so that is good. I cannot even say whether this is in days or hours; I assume it's days, but I had a look into the original publication and did not find it mentioned. Let's assume it is days.
So this would mean that after the intervention, in a certain period, there were on average 2.4 fewer days lost compared to the period before. For placebo the value is obviously much smaller, so negative values here mean that the treatments are effective compared to placebo. Then for these two Lieberman studies, which are the first in the data set, we can see that both are two-arm studies, comparing either pramipexole or ropinirole with placebo. Accordingly, the four entries for the third treatment are all missing. But there is at least one study — exactly one study — that is a three-arm study, and therefore we have to have this information in the data set. And then — where is it? Oh, this was the other one. What we have to do again is transform this into the contrast-based format, that is, into the pairwise comparisons, and that is done with this command. What we see for a continuous outcome is that, again, we need the treatment information and the sample sizes, but here we do not need the numbers of events; instead we need the mean values and the standard deviations. By default the mean difference is used as the summary measure. We could also use the SMD, the standardized mean difference, which we would have to specify as an argument in the pairwise function. After that it is more or less straightforward, the same as before: we run the netmeta command with our pairwise object. In the printout, placebo is used as the reference; if we did not specify this argument, bromocriptine would be the reference group. Again, overall we see a reduction on average, but only one intervention, one drug, is really much better than placebo, and so on. Already in this printout you get information on the number of studies, the number of pairwise comparisons, and so on.
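The continuous-outcome variant of the pairwise call might look roughly like this. This is a sketch under assumptions: `dat.parkinson` and its column names are placeholders, not the real variable names of the practical's data set.

```r
# Continuous outcome in wide arm-based format: means and standard
# deviations per arm instead of event counts.
p3 <- pairwise(treat = list(Treatment1, Treatment2, Treatment3),
               n = list(n1, n2, n3),
               mean = list(y1, y2, y3),
               sd = list(sd1, sd2, sd3),
               data = dat.parkinson,
               sm = "MD")   # mean difference; sm = "SMD" is also possible

# Placebo as reference; without this argument the first treatment in
# alphabetical order would be the reference group.
net3 <- netmeta(p3, reference.group = "Placebo")
```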
This can also be printed from the meta-analysis object, here for the number of treatment arms. That was one question: how many studies have three study arms, how many have two, and so on? We see there is actually one study with three treatments and six studies with two. The designs are available in the list element designs, and here we see just the listing of those. Then with these two commands we get basically the same information in two different ways. The first one says how many studies compared the respective treatments directly with each other, and in the list element A.matrix we also have the information how many studies compared each combination of two treatments. The six values that we see here are the same six values provided in the lower triangle, or equally in the upper triangle. Yeah, and then I just noticed that the network graphs we provided do not look very nice, so for myself I produced slightly different versions. This is the typical circular presentation with one crossing that we cannot avoid, but if we use the iterate approach, we can get a network plot without any crossings. We could also do a 3D plot, but that is probably nothing for a presentation or publication. Then we did a couple of forest plots, and this one fits well into this window. If I really wanted a figure for a presentation, I would use the pdf command and not the export facility. Here, with this command using the leftcols and leftlabs arguments, I add the number of direct comparisons that we have for each of these drugs. What I also do is sort by decreasing treatment efficacy; that is the sortvar argument. And then the last two plots basically take the best-working drug, pramipexole, and use it as the reference.
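The object inspection just described can be sketched like this (a hedged sketch; `net3` is a placeholder for the fitted object, and the element names follow the description above):

```r
# The designs occurring in the network (e.g. "Placebo:Pramipexole"):
net3$designs

# Adjacency-type matrix: number of studies directly comparing each
# pair of treatments; direct comparisons appear in both triangles.
net3$A.matrix
```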
Of these two forest plots, I think the second one is the easier to interpret, because it compares the best treatment with the others, so we see directly which of the treatments are worse than pramipexole — all but cabergoline, which at least could be as effective as pramipexole. Everything I just showed you, you will also find in the solutions on Zenodo, either as a Word document or as a PDF document, with some comments by us. And I'm happy to answer any remaining questions in the one minute we still have. — Okay, I've seen that there are no further questions on Slack, and Twitter is a little bit quiet today. So shall we close the workshop now, or would you like to add something? — Yeah, okay, wonderful. — So thank you so much, Guido; thank you so much, Gerta, for this wonderful workshop. We've learned a lot. And that's it for today's workshop on network meta-analysis using the R package netmeta. We look forward to seeing you all soon at the next session. Have a lovely afternoon, for those in this time zone. — Thank you, Viviana.