Hello, and welcome to ESMARConf 2022 and our last special session for today, Quantitative Synthesis with a Bayesian Lens. As always, this session is being live-streamed to YouTube, and the individual presentations have been pre-recorded and published there as well. Subtitles for those individual talks have been verified and can be auto-translated, and automatic subtitles will be available shortly for the live stream. If you have any questions for our presenters, each presenter has an individual tweet on our Twitter feed, which is @ESHackathon. We will keep an eye on questions and endeavour to answer them at the end of the presentations. We would also like to draw your attention to our code of conduct, which is available on our website at esmarconf.github.io. Our first speaker today is František Bartoš from the University of Amsterdam, and we'll pass it over to František.

I'm František Bartoš, a PhD candidate at the University of Amsterdam, and I would like to tell you about robust Bayesian meta-analysis, a way of combining multiple methods of publication bias adjustment, and about the RoBMA R package that implements the methodology. First, I will tell you about an example that compares registered replication reports and meta-analyses, and I will use it to highlight the problems and challenges of adjusting for publication bias. Then I will tell you a bit more about the different ways of adjusting for publication bias and how we can combine them with Bayesian model averaging, and lastly, I will tell you more about the package implementation and functionality.

So, the example: Kvarven and colleagues (2020) published a paper that looked at 15 different meta-analyses and, for each of them, a registered replication report that tried to replicate the main study from the meta-analysis. Under some mild assumptions, you would expect that the registered replication report should provide the best possible estimate of the true effect size, and that the meta-analytic estimates based on the original studies should converge to it. The original meta-analyses were of different sizes, ranging from about 15 to around 300 studies. If you look at the original effect size estimates based on the published studies, you can see a wide range of effects, and since this is a talk about publication bias, unsurprisingly, you see that the estimates from the registered replication reports were much smaller than the original effect size estimates based on the meta-analyses. This large discrepancy is attributed by many to publication bias, and one way you can use this example is to see how well different publication bias adjustment methods correct for the bias and provide estimates closer to the registered replication report estimates.

Publication bias adjustment is a topic that has been around for many decades, and many different methods have been developed to try to adjust for it. I like to divide them into two camps. One camp contains methods that adjust for a relationship between standard errors and effect sizes, for example trim-and-fill, PET-PEESE, or the endogenous kink. The second group are selection models on p-values, which try to adjust for publication bias operating on p-values, for example the three- and four-parameter selection models (3PSM, 4PSM), AK1 and AK2, p-curve, and p-uniform. I will walk through two of those, PET-PEESE and 3PSM. PET-PEESE is a conditional meta-regression estimator that adjusts for a relationship between effect sizes and standard errors (PET) or squared standard errors (PEESE).
The idea is that if there is no relationship between effect sizes and standard errors, which is what happens when there is no publication selection bias, then if you fit the regression estimator, the intercept should correspond to the true effect size. However, if there is publication bias, then you see a surplus of small studies with large standard errors and overestimated effect sizes, and fitting the PET or PEESE model should provide a much better effect size estimate.

The selection models, on the other hand, extend the traditional random-effects or fixed-effect model, with mean parameter mu and heterogeneity parameter tau, with publication bias weights omega. Here, for example, we use a step weight function that specifies different publication probabilities for different p-value intervals. We can fix the relative publication probability of significant studies to one, and we can estimate the relative publication probabilities of the marginally significant and non-significant studies. As a result, you obtain a different likelihood function: f would be the unweighted likelihood function, while f_w is the weighted likelihood function that takes the different publication probabilities in the different p-value intervals into account.
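In symbols, here is a minimal sketch of the two approaches just described; the notation (y_i for the observed effect sizes, SE_i for their standard errors, p_i for p-values) is mine rather than taken from the slides:

```latex
% PET and PEESE meta-regressions: with no small-study relationship,
% the intercept beta_0 recovers the bias-corrected effect size.
\text{PET:}\quad   y_i = \beta_0 + \beta_1\,\mathrm{SE}_i   + \varepsilon_i,
\qquad
\text{PEESE:}\quad y_i = \beta_0 + \beta_1\,\mathrm{SE}_i^2 + \varepsilon_i.

% Step-function selection model: the unweighted random-effects density f
% is reweighted by the relative publication probability \omega(p_i) of
% the p-value interval that study i falls into, and renormalized:
f_w(y_i \mid \mu, \tau, \omega)
  \;=\; \frac{\omega(p_i)\, f(y_i \mid \mu, \tau)}
             {\int \omega\bigl(p(y)\bigr)\, f(y \mid \mu, \tau)\, \mathrm{d}y}.
```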
Why is this approach interesting? If you look at one million published test statistics on Medline, you can see a very similar shape, showing two very large discontinuities, perhaps not accidentally at alpha = 0.05. So if we apply these two very popular methods to the Kvarven example, we can see the original effect size estimates in red, the registered replication estimates in blue, and then black circles for PET-PEESE and black triangles for 3PSM. In some cases, both methods provide the same estimate; in some cases 3PSM does better than PET-PEESE, and in other cases PET-PEESE does better than 3PSM. The problem is that, a priori, it is hard to tell which of the estimates is better. So the question is: how should you base your inference, especially if the methods disagree in their conclusions?

We argue that you shouldn't base the inference on a single model. Instead, you should use robust Bayesian meta-analysis and Bayesian model averaging to base the inference on multiple models simultaneously. So instead of selecting a single model, you specify all of the models, you fit them, and you base your inference proportionally on how well the different models predict the data. You can then use Bayes factors to quantify the evidence in favor of the presence or absence of either the effect, heterogeneity, or publication bias; you can use prior distributions to regularize the estimates and incorporate prior knowledge; and you can use Bayesian evidence updating, which is independent of the sampling plan.

As an overview, the Bayesian model averaging works something like this. You have different hypotheses about the data that are represented by different demons, and each demon specifies a hypothesis. For example, this demon says the treatment works, so the alternative hypothesis is true; another says there is no effect, so the null hypothesis is true. And you have different assumptions about heterogeneity, the fixed- and random-effects models, and different assumptions about the presence or absence of publication bias. So you specify all of those different hypotheses via the different models in your ensemble.

You feed the models the data, and the models that predict the data best grow, and their voice is heard much more. So if a model predicts the data well, you base the inference on it more strongly. So how do we build the ensemble? To obtain the model-averaged estimate, you can look at the different components: models specify the absence or presence of the effect, so you split the prior probability equally across those two sets of models, then again equally across models assuming presence or absence of heterogeneity, and then publication bias. At the end, you end up with eight different model types specifying all possible combinations of the presence or absence of the effect, heterogeneity, and publication bias, and each of the model types ends up with the same prior probability.

But as I said previously, there are different ways to adjust for publication bias. So in our illustration, this one demon can be represented in many ways, and what we do is, well, it's just demons all the way down: we specify more models that represent this one demon. You can specify, for example, the PET model as one way of adjusting for publication bias, the PEESE model, or different weight functions, for example one-sided selection on significant one-sided p-values, or selection on marginally significant and significant two-sided p-values, and other different weights. Across all of the different publication bias adjustment methods that we specify in robust Bayesian meta-analysis, we use PET and PEESE, the adjustment models that correct for a relationship between effect sizes and standard errors, and then six different weight functions that encode different assumptions about the possible ways publication bias might operate on p-values. Together, those specifications cover approximately the PET, PEESE, 3PSM, 4PSM, AK1, and AK2 models.

If we look back at our example, we can see that in some cases all of the methods still provide the same estimate. In other cases, RoBMA provides an estimate somewhere between PET-PEESE and 3PSM. And in some cases we again obtain an estimate somewhere in between, but one that is not better than either of those methods. That just signifies that we are still doing statistics, not magic, and we cannot provide the correct answer every time. Nonetheless, in simulation studies that are linked at the end of the presentation, you can see that in the majority of cases Bayesian model averaging provides the best results.

To make this methodology available to practitioners, we implemented it in the RoBMA R package. The package runs MCMC estimation via JAGS using the runjags R package, and computes marginal likelihoods with the bridgesampling R package. The main things the RoBMA package provides are the model specification, some plotting and summary functions that I will show you in a second, and additional auxiliary functionality. You can use the package to fit the default ensemble by just supplying the effect sizes and standard errors. For example, here, on the infamous Bem (2011) dataset, you fit the model with a single simple call, and you can use the summary function to obtain the default summary of the model.
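As a rough sketch of that default call (I'm assuming the Bem 2011 example data ship with the package, as in its vignettes; otherwise substitute your own vectors of effect sizes and standard errors):

```r
# Minimal sketch: fit the default RoBMA ensemble (36 models).
# Assumes a Bem2011 example dataset with columns d and se, as used in
# the package vignettes; check the RoBMA documentation to confirm.
library(RoBMA)

data("Bem2011", package = "RoBMA")            # assumed example dataset
fit <- RoBMA(d  = Bem2011$d,                  # standardized effect sizes
             se = Bem2011$se,                 # their standard errors
             seed = 1)                        # reproducible MCMC

summary(fit)                                  # ensemble summary + Bayes factors
```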
Here, in the first summary table, you see information about the whole model ensemble: you specified 36 models, 18 of which assume the presence of the effect, 18 the presence of heterogeneity, and 32 the presence of publication bias. The prior probabilities are equal across the components, and you see the posterior probabilities. You can also quantify the evidence with inclusion Bayes factors, and here you see that there is very weak evidence for the absence of the effect, moderate evidence for the absence of heterogeneity, and strong evidence for the presence of publication bias. And then, of course, you see the model-averaged estimates for the mean and heterogeneity parameters, and then the publication bias weights and the PET and PEESE estimates.

Moreover, the package provides additional summaries. For example, you can look at the summary of individual models, which shows you the prior distributions for the effect, heterogeneity, and publication bias, the prior probabilities of each of the individual models, and the marginal likelihoods, posterior probabilities, and inclusion Bayes factors for each of the models. You can also look at the MCMC diagnostics of the individual models, which show a summary of the MCMC error, minimum effective sample size, and maximum R-hat for each of the models, to verify that the models were fitted properly. And you can also get estimates from the individual models. Here I'm just showing a printout of the last two models that were specified, and you can see the model specification, the parameter estimates, et cetera. So even if you don't want to do Bayesian model averaging and you want to look at the individual models, you can look at the estimates from the individual models.

The package also provides plotting functions. For example, you can plot the model-averaged mean estimate, where the spike corresponds to the probability of the models assuming the absence of the effect, so the effect size is zero, and the slab corresponds to the density from the models assuming the presence of the effect. The functions are also implemented in ggplot2, so if you are a fan of ggplot2, you can use those. Or you can look at the prior and posterior distributions, for example here for the tau estimate assuming the presence of the effect, and many other combinations.

The package allows many different specifications, and you can modify basically everything about the ensemble. For example, you can change the prior distribution for the effect and specify a truncated normal distribution that encodes a hypothesis of small positive effect sizes, with mean zero and standard deviation 0.3, truncated to the interval from zero to infinity. Or you can specify different ways of adjusting for publication bias; here, you can specify only the prior distributions for the PET and PEESE models that adjust for publication bias. If you want to see more specifications and customization, I recommend checking the vignettes of the package that are on CRAN.
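A sketch of that customization; the helper functions and argument names below (prior(), prior_PET(), prior_PEESE(), priors_effect, priors_bias) follow my reading of the RoBMA documentation and should be verified against ?RoBMA on CRAN:

```r
# Sketch: swap in a truncated-normal effect prior (small positive effects)
# and keep only PET and PEESE as the bias-adjustment components.
# Function and argument names are my best reading of the RoBMA docs.
library(RoBMA)

fit_custom <- RoBMA(
  d = Bem2011$d, se = Bem2011$se, seed = 1,
  priors_effect = prior("normal",
                        parameters = list(mean = 0, sd = 0.3),
                        truncation = list(lower = 0, upper = Inf)),
  priors_bias = list(
    prior_PET("Cauchy",   parameters = list(location = 0, scale = 1),
              truncation = list(lower = 0)),
    prior_PEESE("Cauchy", parameters = list(location = 0, scale = 5),
                truncation = list(lower = 0))
  )
)

plot(fit_custom, parameter = "mu", prior = TRUE)  # prior vs. posterior overlay
```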
We also implemented the R package in JASP with a graphical user interface. The JASP implementation allows you to set basically all of those customizations as well: specify the models with different prior distributions, and then create different summaries for inference and different figures. Here you can see again the model-averaged mean effect size estimate and also the model-averaged weight function estimate across all selection models.

So, just to sum up something about robust Bayesian meta-analysis: it can incorporate uncertainty about the selected model with Bayesian model averaging, so you don't have to base inference on any single publication-bias-adjusted model but on all of them, weighted by how well they predict the data. It can provide evidence for either the null or the alternative hypothesis. It has better performance with small sample sizes. It has the capacity to incorporate expert knowledge, and it has the potential for sequential updating of evidence. On the other side, there are some disadvantages: for example, it's slow, it requires MCMC sampling, and it can fail under strong p-hacking.

So, thank you for your attention. I hope you enjoyed the talk. If you want to learn more about the package, you can either look at CRAN, where the package is released, or at my GitHub page, where you can also submit feature requests or bug reports, and you can look at JASP. There are references to the papers we have written that outline the methodology in more detail and say more about the model specification, as well as the simulation studies that we conducted to verify the methodology. Thank you very much, and I'm looking forward to seeing you in the discussion.

Thank you very much, František. We will now hear from Christian Röver from the University Medical Center Göttingen. Over to you, Christian.

Yeah, thank you very much. I'm going to introduce the new meta-regression features that are now available in the bayesmeta R package. I will first introduce the bayesmeta package briefly, then talk about binary covariables, go over the different parameterizations that you can use, show an example with continuous covariables, and briefly sketch other advanced applications that you can approach.

First of all, just briefly, here is the bayesmeta package as it has been up to now. It implements the normal-normal hierarchical model. What you assume here is that you have a number of estimates and a number of standard errors, y_i and sigma_i. The assumption is that the estimates are imprecise measurements, and the standard error quantifies the measurement uncertainty. But the true means are not necessarily the same: they also have a certain amount of variation, or heterogeneity, that's quantified by the heterogeneity parameter tau, and the overall mean is mu. So in the end, we have two parameters, two unknowns, that also require prior specification if you want to use a Bayesian approach: the overall mean mu and the heterogeneity tau. That's the model that has been implemented in the bayesmeta package so far. In the bayesmeta package we are not using MCMC; the calculations are based on a trick, if you want, and you get out the posterior densities and quantiles and so on directly, without having to sample.

Now, the simple normal-normal hierarchical model can be extended to meta-regression as well, and the extension is sketched here. What we have is, again, estimates y_i and standard errors sigma_i. But in addition, we also have covariables, or moderators, or study-level regressors, called x_1 to x_d.
The model looks similar, as you can see in the second equation: the overall mean is not just a single mu parameter; instead, the study-specific means are determined by, or come about as, linear combinations of these moderators. We still have the heterogeneity parameter, and the change in the model means that the open parameters requiring prior specification are still the heterogeneity parameter tau and now also the regression coefficients beta, which, instead of a single overall mean, is a d-dimensional vector of regression coefficients. Calculations work similarly as before, and the new approach is implemented in the bmr() function.

I'll go straight to an example. The example is a systematic review that was performed in pediatric transplantation, and the endpoint is the log odds ratio of acute rejections, so that's the event you would like to prevent with the medication. You can see we have estimates of log odds ratios here, all on the negative side, so the medication seemed to work. We have six studies included: six log odds ratios along with their six associated standard errors. And we can also see that two medications were actually used here, two similar kinds of medication: one is daclizumab, the other is basiliximab. If you want to account for these two medications and estimate individual treatment effects for each, we can do that using meta-regression. What we need to do is set up the covariables as a matrix, and in this case we can simply set it up as we see at the bottom: six rows for the six studies and two columns for the two means that we want to estimate, with zeros and ones encoding which treatment group, which medication, each study belonged to.

Implementing that in the bayesmeta package is pretty quick. First of all, we need to prepare the data, of course, so we derive the log odds ratios and standard errors; that's the first bit. Then we need to specify the regressor matrix, the covariables, and unsurprisingly it's essentially the matrix that we saw on the previous slide: six rows for the six studies and two columns for the two treatments. And then the function call eventually is this two-line call, supplying the effect measures that we generated in the first step and the regressor matrix, the covariables, called capital X. We are also specifying a prior distribution for the heterogeneity here; if we omitted that bit, we would be using a uniform improper prior, which is also okay in certain cases. And note that when we run the analysis, we call the bmr() function and assign the results to this bmr01 object, so we can use the object later on. We can first of all print out the default output, and, I mean, it's a lot of text, but the important bit is that if you're familiar with bayesmeta output so far, this looks similar to the previous output; the difference is that instead of just two parameters, the overall mean and the heterogeneity, we now have in this case two regression coefficients, labeled daclizumab and basiliximab, and we have all the estimates and credible intervals and so on there.
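In symbols, the two models just described are (my notation, matching the talk's y_i, sigma_i, mu, tau):

```latex
% Normal-normal hierarchical model (NNHM):
y_i \mid \theta_i \sim \mathrm{N}(\theta_i, \sigma_i^2),
\qquad
\theta_i \mid \mu, \tau \sim \mathrm{N}(\mu, \tau^2).

% Meta-regression extension: the study-specific mean becomes a linear
% combination of the d study-level covariables x_i with coefficients beta:
\theta_i \mid \beta, \tau \sim \mathrm{N}(x_i^\top \beta, \tau^2).
```

And a rough sketch of the analysis as described; the bmr() argument names follow my reading of the bayesmeta documentation, and the six data values below are placeholders, not the review's numbers:

```r
# Sketch of the two-group meta-regression described in the talk.
# The y/sigma values are placeholders, NOT the actual review data;
# see ?bmr in bayesmeta for the authoritative argument names.
library(bayesmeta)

y     <- c(-0.5, -0.9, -0.4, -0.7, -1.1, -0.6)   # placeholder log-ORs
sigma <- c( 0.4,  0.5,  0.3,  0.6,  0.5,  0.4)   # placeholder SEs

# Regressor matrix: one indicator column per medication.
X <- cbind(daclizumab  = c(1, 1, 1, 0, 0, 0),
           basiliximab = c(0, 0, 0, 1, 1, 1))

bmr01 <- bmr(y = y, sigma = sigma, X = X,
             tau.prior = function(t) dhalfnormal(t, scale = 0.5))
print(bmr01)
```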
We can also, of course, illustrate the results, for example by looking at the marginal posterior densities: the posterior density for the heterogeneity parameter and then for the two regression coefficients, which are labeled according to the regressor matrix that we supplied to the function. We can also generate a forest plot, which is probably the most convenient, or the most familiar, way to illustrate things. In the forest plot we see the regressor matrix reappearing: the first two columns reproduce the covariables that we supplied to the function, and at the bottom we see the two estimates for the two regression coefficients.

From this bmr01 object, to which we assigned the output of the regression analysis, we can in fact retrieve all the posterior densities, quantiles, and so on. We see some examples here: we can compute the 99% posterior quantile for the heterogeneity by accessing the posterior quantile function, and similarly for the beta coefficients we can compute quantiles, we can compute the posterior cumulative distribution function at a certain point, say at zero, and we can also compute credible intervals for the coefficients.

With meta-regression it's often interesting to also infer linear combinations, so you want something like x'beta, a linear combination of some covariable vector x and the coefficients, and then figure out what the posterior distribution of that is. For that, we of course need to specify what our covariables are. One example would be the difference between the daclizumab and basiliximab treatments: we take the difference of the two, so we multiply one by plus one and the other by minus one and then sum them up, and we can implement that by supplying a covariable vector that consists of a minus one and a plus one, and we get out a credible interval for that. It works similarly for predictions, and we can also derive shrinkage intervals for the study-specific effects, these theta_i parameters, which are also sometimes of interest for certain applications.
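A sketch of what those queries might look like; the accessor functions and argument names below mirror the bayesmeta object API as I recall it, so treat them as assumptions to check against the package documentation:

```r
# Sketch: querying the fitted bmr01 object. Accessor and argument names
# mirror bayesmeta's API as I recall it; verify before use.
bmr01$qposterior(tau.p = 0.99)               # 99% posterior quantile for tau
bmr01$pposterior(beta = 0, which.beta = 1)   # posterior CDF of beta_1 at zero

# Posterior for a linear combination x'beta, here the treatment
# difference (basiliximab minus daclizumab):
x.diff <- c(daclizumab = -1, basiliximab = +1)
summary(bmr01, X.mean = rbind(difference = x.diff))
```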
Yeah, so we can use the same coding for the forest plots, which is often convenient. We can again generate a forest plot, and if we supply a set of covariable vectors here, one-zero, zero-one, and then minus-one plus-one, three rows, three covariable vectors if you want, then we can have them displayed in the forest plot as well: we get the estimates of the two treatment effects and the difference between the two treatment effects. Or, if we're just interested in the mere figures themselves, we can use similar arguments for the summary function.

Just briefly: we've used a particular specification of the regressor matrix here, but there are usually alternative setups that are sensible or conceivable. One example is what you would usually get from a model.matrix() call, which would be a simple setting with a single intercept coefficient and an offset parameter, if you want, giving the difference between daclizumab and basiliximab. And there are additional ways of specifying the same regression problem using different regressor matrix setups; they are sketched at the bottom here. In general, they should give you consistent results. Just one note of caution: in case you're using proper priors for the regression coefficients, the way you code the regressor matrix of course makes a difference. But there are ways to translate prior specifications in one setting into prior specifications in a different setting, so again, you can get consistent results there.
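To make the parameterization point concrete, here is a small base-R illustration of two equivalent codings of the same six-study design (a toy factor of mine, not the slide's code):

```r
# Two equivalent codings of the same two-group design (toy illustration).
treat <- factor(c("daclizumab", "daclizumab", "daclizumab",
                  "basiliximab", "basiliximab", "basiliximab"))

X1 <- model.matrix(~ 0 + treat)  # two columns: one mean per treatment
X2 <- model.matrix(~ treat)      # intercept + treatment difference (offset)

# Both span the same column space, so with improper uniform coefficient
# priors they yield consistent inferences; with proper priors, the prior
# must be translated between the two parameterizations.
```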
So far I've only talked about this one example where we had just binary covariables. I just wanted to show that you can also get what you might be thinking of when you think of regression in general: we can also do regression analysis including regression lines and so on. This is one example application involving 35 studies, where we have a number of covariables, including early versus late onset of the treatment and then the dose of the treatment. We can fit a model to these 35 studies including these study-level covariables, in this case using four coefficients; other settings are conceivable again. In the end, we can get out these regression lines as well. In this case, it looks like we have an increasing treatment effect with increasing dose in one of the groups, and not so much of an effect in the other group, in the case of late treatment onset.

Just a little bit of an outlook on other things that you may be able to do. We've seen already that looking at the difference between the daclizumab and basiliximab treatments was in fact a so-called indirect comparison, because the studies were all looking at treatment versus placebo, and the comparison of basiliximab versus daclizumab was not implemented in any of the actual studies, but still we can try to figure out the difference between the two treatments. And if we can do that, then quite generally, if you're doing meta-regression, the model can be used, at least to some degree, for network meta-analysis as well. The restriction is that the data need to be estimates of contrasts, in our case these log odds ratios; we can only handle two-arm trials; and we have a single common heterogeneity parameter. But at least to some extent, we can also do network meta-analysis via meta-regression. And finally, the output from the model also includes the marginal likelihood, and that's interesting because it means that as soon as we fit different models, we can also compute Bayes factors, which means we can do interesting things like model selection, variable selection, or model averaging and so on.

So, to sum up: meta-regression is an extension of simple meta-analysis. My guess is that the most popular practical application may in the end be subgroup analysis, comparing subgroups of the studies, just like we did in the first example we looked at. There's of course a wider range of applications, including continuous covariables, network meta-analysis, model selection, model averaging, and so on. A little note of caution again: different parameterizations should give you consistent results, but in some cases, especially if you want to use informative or proper prior distributions for the regression coefficients, you need to be careful to properly translate between the different parameterizations. The new bayesmeta package version is available on CRAN since this week, and if you have any questions, I'll be happy to answer them either in the following discussion or also by email.

Thank you very much, Christian. I'd just like to invite our two presenters to the panel, and we are also very lucky to be joined by two special guests: Matt Grainger and Gavin Stewart will join the panel, and I'll open up the floor to questions from Matt and Gavin.

Hello. Go on then, Gavin. Okay, great talks, lads. Really, really enjoyed them. Thank you very much. My first question is about the model-averaging publication bias package; I thought it was an amazing package, really nice. I wondered whether or not you'd be able to extend that to nested treatments, and also to think about network meta-analysis as well, where you've got that problem that you've got multiple treatments and so you've got multiple peaks. It just seemed that maybe Bayesian model averaging might be an interesting approach to look at that. I mean, people have tried doing things in netmeta, trying to look at all of the different treatments simultaneously with publication bias. Anyway, I just wondered what your thoughts were on those, the nesting and then the multiple treatments.

Yeah, well, that's a very good question, and thank you for the praise. Looking at multiple dependent outcomes, or different measures from single studies, is a thing that I've been working on for half a year now, and it's a tough problem to crack, to be honest. And it depends on what type of models you want to specify.
If you just want to look at PET-PEESE, the meta-regressions that adjust for the relationship with the standard error or squared standard error, that's quite simple, because you can still marginalize things out or just use a multivariate normal distribution for the dependent variables. But with the selection models, if you really want to model them properly, then you need a multivariate weighted normal distribution, and the number of computations you need to get all the proper probabilities just starts exploding. I think I got up to three or four outcomes from a single study. So if you have the nesting and you have three estimates, that's still possible; if you go beyond that, you are not able to evaluate it. So I'm looking at some approximations, but nothing that I would be really happy with yet. But that's a great extension to do in the future, and I'm working on it.

So I've got a bit more of a general question, really. A lot of our audience at the conference probably won't know much about Bayesian meta-analysis. So can you both maybe give us an idea of why one would use Bayesian meta-analysis over a frequentist approach? What do you think the advantages are? Are there any pitfalls that people need to look out for? Just give people an idea of why you think it's a good idea or a bad idea in certain circumstances. I'll hand it to Christian first.

Thanks, yeah. I guess one advantage is that many of the frequentist approaches rely on large sample sizes and asymptotics, and that's not the case for the Bayesian methods. They actually work well also for small sample sizes and few studies, like the example with six studies, or also three studies or two studies, without at least any technical problems. Yeah, I mean, of course, the obvious disadvantage is that you need to spend thought on the prior specifications that you want to use. But we've been also working on that, trying to compile the obvious choices there, or some hints on how you can start thinking about what might be sensible priors, especially for this heterogeneity parameter, which is not so obvious when you first think about it.

Fantastic. Yeah, maybe I can build on that. I think that small sample sizes, or a small number of studies, is the obvious problem for frequentist methods, and as Christian mentioned, the Bayesian methods, thanks to the prior distributions, allow you to get proper estimates in that regard. But they also offer additional abilities for the meta-analyst to incorporate knowledge that's already in the literature. For example, last year we published a paper in Statistics in Medicine where we described different informed prior distributions based on the published trials in the Cochrane Database of Systematic Reviews. So, for example, for the heterogeneity parameter tau, you can get an informed prior distribution that's based on previous meta-analyses on similar topics and similar treatments. Also, in JASP we now have a new module that allows you to look at different trials, combine them, and get estimates from them, so you can again use this previous knowledge to specify your prior distribution. The same then goes for the effect size parameter. Furthermore, this allows you to create informed tests.
So you are not testing against some unspecified alternative, as you do in the frequentist setting, but you can actually evaluate evidence for a point null, or some peri-null if you don't believe in point nulls, versus some informed alternative that corresponds, for example, to the typical treatment effect, or the treatment effect you would expect, or the one your funders might be interested in. Those tests then have much higher informativeness, which I think is very important for statistical inference.

Yeah, could I add to this, Matt, as well? I think the three fundamentals for me are these. First, a lot of the time with meta-analysis we're not actually trying to say this is the answer, this is the effect; we're trying to express the uncertainty around it. And if you do your normal meta-analysis, particularly, like the lads were saying, when you've got only a few studies, that estimate of tau is sitting there in your model pretending that you know exactly what it is. You haven't got a clue what it is. And if you put a little bit of uncertainty on that, the credible intervals just explode. So suddenly, with a little analysis where you've got your four studies and you think you know what's going on, if you do it in a Bayesian framework you can express the uncertainty much more realistically. So sometimes it helps you express uncertainty. Second, sometimes you get the shrinkage effects, where it helps you reduce the uncertainty by exploiting the exchangeability between the studies, and Christian's written a lot on that kind of stuff; you should probably ask him to chat more about that. The third element is that it feeds into decision models. If you think about network meta-analysis, for example, there are all kinds of different ways of doing that in frequentist frameworks and in Bayesian frameworks, and everybody argues about it for the sake of it and all the rest of it, whether or not the estimators are biased, and we all have all our technical arguments. That's kind of irrelevant to me. The reason why I like the Bayesian approach for that is that it gives me the information that, as a decision maker, I would want. It gives me the probabilities of one treatment being better or not; I can get the cumulative ranking curve for the probabilities, and I can get that out of a Bayesian framework. So it feeds directly into the decision-making process in a way that's much more intuitive than the frequentist models. So if I was just having that straightforward should-it-be-Bayesian-or-should-it-be-frequentist conversation, those would be the three things I'd be thinking about.

Fabulous. Christian, would you like to elaborate on the second point, where it was highlighted that you've written about it? Maybe just briefly. So, in Bayesian models in general, you've got this nice feature that you can sequentially update your information, and that's also sometimes handy in meta-analysis. You can do a meta-analysis of your, I don't know, ten previous studies and then from that derive a prior for your eleventh, your future study, which is helpful sometimes, for the analysis itself or also for planning the new study and so on. So I guess that would be another advantage, yeah. And that goes under this term of shrinkage estimation.

And also, from the philosophical point of view, meta-analysis is not really a proper analysis in the frequentist sense, because there is no sampling plan a priori.
So you cannot really compute a p-value, because there was no sampling plan for how the studies were to be conducted in the past. You don't know what the sampling distribution is, so you really need likelihood-based methods to evaluate the evidence properly. But that's not as practical as the other advantages that we mentioned before.

František, I'll just say we have a team chat, and there was a lot of love for your demons. Folks are loving your demons. It was a really lovely illustration to illustrate the point. Any further questions from the panel?

So I've got another question. This one's for Christian, and it's again about extensions to handling nested data and handling multiple treatments. I'm guessing that nested data you could handle with your meta-regression framework just by specifying the nesting as covariates, but if you move into multi-arm NMA, you're going to have problems because of fitting the multivariate distribution. Is that understanding right? You could extend it quite easily to the nesting, but doing multi-arm NMA with bayesmeta is going to be really tricky.

Yeah, I guess I know too little about the nesting problem in particular. But the computational trick underlying the bayesmeta function, or the bayesmeta package, in the simple example is essentially based on the fact that you only have two parameters, right? You've got the overall mean and you've got the heterogeneity, and conditional on a particular heterogeneity value, everything is normally distributed, and you can then numerically marginalize over this heterogeneity parameter. And that still works in the case of meta-regression, because instead of the single overall mean parameter that is conditionally normally distributed, you have something multivariate normally distributed, and that still works. But I guess it would not be feasible to extend the model to include additional variance parameters or something like that. So I guess anything that's more or less meta-regression might still work, but otherwise you would probably need to switch to MCMC or something.

Brilliant talks, brilliant packages. I'm going to start using them more. Thank you very much, guys. If we don't have any further questions, I'd love to round it up here. No pressure. If there are any further questions, please do go ahead. Otherwise, that's it for this session, and we hope that you enjoyed it as much as we did. Thanks very much to František and Christian, and also to Matt and Gavin for joining us for the panel, and we'll see you at the next session.