Hi everyone, thank you for joining us for the next talk. I'm excited to introduce the next speaker, Christian Röver. He's a research associate in the Department of Medical Statistics at the University Medical Center Göttingen in Germany. His current research focus is on Bayesian methods for meta-analysis and their implementation in R, and he will be presenting today on Bayesian random-effects meta-analysis using bayesmeta. — Okay, thanks for the kind introduction. I'm going to talk about the bayesmeta package for Bayesian random-effects meta-analysis. I'll first briefly introduce the problem, then show how it's implemented in R and how you can use the bayesmeta package, and then close with some outlook and future plans. The problem, first of all, is the generic meta-analysis problem. Suppose you have a number of studies that you found in a systematic review, in this example case six studies, and each of these studies quotes an estimate of some number, in this case an estimated log-odds ratio, indicated by the red dots here. Along with the estimates we also have standard errors, indicated by the horizontal intervals here, the whiskers. What we want to do in the end is to somehow combine these numbers and figure out a combined estimate, which is shown at the bottom here. What we need to consider in addition is a nuisance parameter, the between-trial heterogeneity, which you don't see in the picture, only quoted in the bottom left here; we'll see how that looks in practice in the following picture. So this is the common random-effects model that is often applied in meta-analysis, and how it works you can see in the first line here.
We have our estimates y_i and the standard errors sigma_i; that's the data going into the analysis. We're assuming that the y_i are measurements of some parameter theta_i, so for the i-th study, the i-th quoted measurement that we have. Of course this is a noisy measurement of the underlying true value, and the amount of offset from the true value is quantified by the standard error, the sigma_i parameter. Then, across the different studies included in our meta-analysis, these theta_i parameters, the true values measured by each single estimate, are not necessarily completely identical; they are only similar. So these again have a normal distribution, centered around some overall mean parameter mu, with extra variability given by tau squared. If you want to write it down in a single line, which technically means integrating out the theta_i parameters, you get the second line here: you can think of the model as your estimates being noisy measurements of the mu parameter, the overall mean, where the amount of offset in your empirical data is determined first of all by the measurement error, the standard error of your measurement, and on top of that by the extra variance given by the heterogeneity. So mu and tau are the two parameters in the model; mu is usually of primary interest, and tau is mostly a nuisance parameter. We'll see later that sometimes you also need to worry about the theta_i parameters. This so-called normal-normal hierarchical model is quite commonly used.
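Written out, the two stages of the model and its marginalized "single line" form are:

```latex
y_i \mid \theta_i \;\sim\; \mathrm{N}(\theta_i,\, \sigma_i^2),
\qquad
\theta_i \mid \mu, \tau \;\sim\; \mathrm{N}(\mu,\, \tau^2),
\qquad i = 1, \ldots, k,
% and, integrating out the theta_i:
\qquad
y_i \mid \mu, \tau \;\sim\; \mathrm{N}(\mu,\; \sigma_i^2 + \tau^2).
```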
It's useful for many endpoints, essentially anything that you can measure and quote a standard error along with. In the example we had logarithmic odds ratios, but you can also use it for hazard ratios, mean differences, all sorts of numbers. In the following we will take the Bayesian approach to inference, which means we also need to worry about prior specification; that's required, but not terribly complicated in this case. On the computational side it's usually slightly more involved than what you may be used to from frequentist methods: it involves a lot of integration, which you can often approach using Monte Carlo methods, but in this particular case we'll be using numerical computation, so all numerical integration. Going back to the very first slide, we saw this overall estimate at the bottom of the plot, shown as the red diamond. Where does it come from? In a Bayesian estimation, it comes from a posterior distribution of this log-odds ratio, and in order to figure out, say, the median and the 95% credible interval, you need to integrate. And this is the marginal distribution, marginalizing over the heterogeneity, so there's again another integral. All this integration is essentially the computation that you need to do, and that's what the package will do for you. So, the bayesmeta package in brief: it does Bayesian meta-analyses, it does all the integration for you, and it provides access to all the integrals that you may be interested in. To some extent it builds upon the metafor and forestplot packages for pre-processing the data and nicely illustrating the results at the end.
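As a rough illustration of what that marginalization involves, here is a toy sketch in base R with made-up numbers. This is not the package's actual implementation (bayesmeta uses a more careful semi-analytical integration scheme); it just shows the integral being computed: the heterogeneity tau is integrated out numerically to obtain the marginal posterior density of mu.

```r
# Toy sketch: marginal posterior of mu under the normal-normal hierarchical
# model, improper uniform prior on mu, half-normal(0.5) prior on tau.
y     <- c(-0.5, -1.2, 0.1)    # hypothetical log-odds ratios
sigma <- c( 0.4,  0.5, 0.6)    # hypothetical standard errors

tau.prior <- function(t) 2 * dnorm(t, mean = 0, sd = 0.5)  # half-normal(0.5)

# unnormalized joint posterior of (mu, tau), theta_i already integrated out:
joint <- function(mu, tau)
  tau.prior(tau) * prod(dnorm(y, mean = mu, sd = sqrt(sigma^2 + tau^2)))

# marginal posterior of mu: numerically integrate tau out
mu.marginal <- function(mu)
  integrate(function(t) sapply(t, function(tt) joint(mu, tt)),
            lower = 0, upper = Inf)$value

mu.grid <- seq(-3, 1, length.out = 201)
dens    <- sapply(mu.grid, mu.marginal)
dens    <- dens / (sum(dens) * diff(mu.grid)[1])  # normalize on the grid
```

The same kind of integral then gives the posterior median and credible interval for mu.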
What's pretty common is that you perform the analysis and get returned some object, which is essentially a list object. That's the same, I guess, for linear models: you apply the lm function and get back a list object containing the data itself, the estimates and so on. It's similar here. What's not quite so common is that a number of these list elements are functions that you can use, and which give you access to the integrals in the end. We'll walk through it a little bit using an example, looking at this pediatric transplantation dataset. It includes six studies giving log-odds ratios along with their standard errors. In the example, they were looking at acute rejection reactions in pediatric transplantation. The acute rejections are the events that you want to prevent, so if you take the medication you want to have reduced odds of acute rejections, and hence you're looking for small log-odds ratios here; that's what you would like to see if the medication works. For the analysis itself, for this run here, we can use an uninformative, improper uniform prior for the effect mu, the overall log-odds ratio; that's also the default specification if you don't specify anything. For the heterogeneity, you may need to spend a little more thought; for the present analysis we'll be using a half-normal prior with scale parameter 0.5. Why a half-normal prior with scale 0.5? That's a topic for another presentation in itself, so I'll just refer to the reference here. Now to the actual example: to run the analysis, we first load the package and the example dataset.
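In code, the whole sequence looks roughly like this. It's a sketch following the package's documented interface and the CrinsEtAl2014 example data that ships with bayesmeta; the escalc column names should be double-checked against the dataset's help page:

```r
library("bayesmeta")
library("metafor")

data("CrinsEtAl2014")   # six pediatric liver transplantation studies

# derive log-odds ratios (yi) and their variances (vi) from the 2-by-2 counts:
es <- escalc(measure = "OR",
             ai = exp.AR.events,  n1i = exp.total,
             ci = cont.AR.events, n2i = cont.total,
             slab = publication, data = CrinsEtAl2014)

# Bayesian random-effects meta-analysis: improper uniform prior on mu
# (the default) and a half-normal prior with scale 0.5 on tau:
bma <- bayesmeta(y = es$yi, sigma = sqrt(es$vi), labels = es$slab,
                 tau.prior = function(t) dhalfnormal(t, scale = 0.5))
bma   # print a summary of the analysis
```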
The example dataset contains the plain counts, the plain 2-by-2 tables for each single study, and we need to derive the log-odds ratios and standard errors from them. That's done using the escalc function from the metafor package. The point where it's getting interesting is at the bottom here: that's where we're applying the bayesmeta function, assigning the result to an object that we call bma. All we need to provide is an argument y for the estimates, an argument sigma for the standard errors, and also the tau.prior argument for the prior specification, the prior for the heterogeneity. You can see that what we need to specify here is the prior density function. It looks a bit clumsy in this function call, but on the other hand it means we are extremely flexible: any prior density that you can write down, or that you can implement, you can plug in here and use for the analysis. In fact, we don't even need to specify y and sigma explicitly here; we can also just provide the escalc object that we defined at the top to the bayesmeta function. When we print out the bma object, you get the usual output: a little bit of summary, the estimates, the input data and so on, and that's probably sufficient for some applications already. The interesting bits come when you access the individual list elements of the returned bayesmeta object. It's a list object, and you can access the individual slots in that list; for example, you've got a summary element, which is pretty common, giving you the summary estimates for the parameters in the model.
So for the heterogeneity and for the overall mean you get the posterior mean, median, 95% credible interval and so on. But then you also have other slots that contain functions: for example, dposterior, pposterior and qposterior for the posterior density, cumulative distribution function and quantile function, a function to compute credible intervals, and so on. You can just call these functions and get out numbers you may be interested in, beyond the plain summary statistics you're given already. Let's look at an example. Say you want a plot of the posterior density for the effect parameter: you specify a vector x ranging from −3 to 0.5 and then plot x versus the dposterior function applied to x, which you can pull straight from the bma object returned by the bayesmeta function. And so you've got this plot of the posterior density. It looks pretty much like a normal distribution here, but it's not; it's somewhat close to normal in this case, but it can also be skewed and look pretty non-normal. We can also access the other functions and figure out, for example, the posterior median, the 0.5 quantile, or a credible interval, and add that to the plot, or use it for whatever other purpose. We can also draw a forest plot to summarize the output. That's pretty similar to what we saw on one of the first slides already; what we see here in addition, by default, are these little grey lines and also the bar at the bottom. So we have, in addition, a prediction interval, most interestingly, and we also see the shrinkage estimates.
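A sketch of how these accessor functions are used, assuming a fitted object `bma` as returned by `bayesmeta()`; argument names are as I recall them from the package documentation and should be verified against `help("bayesmeta")`:

```r
# posterior density of the effect mu on a grid from -3 to 0.5:
x <- seq(-3, 0.5, length.out = 200)
plot(x, bma$dposterior(mu = x), type = "l",
     xlab = "effect (log-OR)", ylab = "posterior density")

bma$qposterior(mu.p = 0.5)           # posterior median of mu
bma$post.interval(mu.level = 0.95)   # 95% credible interval for mu

# prediction (a future study's effect) and shrinkage (e.g. study no. 1):
bma$qposterior(theta.p = c(0.025, 0.5, 0.975), predict = TRUE)
bma$qposterior(theta.p = c(0.025, 0.5, 0.975), individual = 1)

forestplot(bma)   # forest plot incl. prediction interval and shrinkage
```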
That's an individual, additional estimate for each single study here: the estimate of the study-specific theta_i parameter, the study-specific mean that is measured by that particular study. That's quite useful for some non-standard or advanced applications. If we recall the original model, we said that each single estimate was measuring some theta_i parameter, and these theta_i parameters are normally distributed, with variance according to the heterogeneity, around the true mean. From that we can derive a prediction of a new, or say a future, study's mean, which may be useful for example in study design: say you have done six studies in the past, you want to plan a seventh study, and you want to know what sorts of effects, or also what sorts of placebo rates and so on, to expect. Then you can use the meta-analysis to derive a prediction and use that for study design. The shrinkage estimates are also often of interest. In our case we have six studies, so six different theta_i parameters, and for each of these we can again access posterior densities, quantiles and so on, using the same functions as before; we just need to specify the extra prediction argument, or the individual argument for the individual estimates. That's useful if, say, you want to use a meta-analysis to derive a prior for the analysis of a future study. So there are some advanced kinds of analyses that you can do using these approaches. Now, quite briefly, some other features included in the package and its functionality. For the overall effect, we can also specify prior distributions.
That works via an additional argument, the mu.prior argument that you specify in the function call, and it's restricted to normal priors for the prior information on the overall effect. We are quite flexible with the prior distributions for the heterogeneity: we can specify all sorts of functions for the tau.prior argument, and we can also use additional shortcuts, so we can ask for, say, a uniform prior, an improper prior, or the Jeffreys prior and so on. In addition to the output we've seen already, you can also ask for Bayes factors to be returned; for example, it may be of interest to see the Bayes factor for zero heterogeneity, or the Bayes factor for a zero effect. If you're interested in p-values, which of course you don't quite get from a Bayesian analysis, you can compute posterior predictive p-values, and for that you have the pppvalue function. That's a bit numerically demanding because it again uses Monte Carlo integration, but you can figure out these posterior predictive p-values for a number of different hypotheses that you can specify. Quite often it's useful to quantify the influence that the individual estimates have on the overall outcome, by giving weights that quantify the contribution of individual studies; that's included in the output as well. You can produce additional plots like funnel plots, or trace plots showing the behaviour of the conditional mean and standard deviation of the individual studies as you vary the heterogeneity. And there are some additional functions, again a bit advanced, that are useful for figuring out sensible prior distributions for the effect or the heterogeneity.
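Roughly, in code, a few of these options look as follows. This is a sketch: it assumes the `es` object with the derived log-odds ratios from before, and the argument and element names (`mu.prior.mean`, `mu.prior.sd`, the `"Jeffreys"` shortcut, `bayesfactor`, `weights`) are as documented for the package and worth checking against `help("bayesmeta")`:

```r
# normal prior on the effect mu (hypothetical mean 0, sd 4),
# Jeffreys prior on the heterogeneity tau via a character shortcut:
bma2 <- bayesmeta(y = es$yi, sigma = sqrt(es$vi), labels = es$slab,
                  mu.prior.mean = 0, mu.prior.sd = 4,
                  tau.prior = "Jeffreys")

bma2$bayesfactor   # Bayes factors for the hypotheses tau = 0 and mu = 0
bma2$weights       # per-study contributions to the overall estimate
```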
For figuring out such prior distributions, you can compute unit-information standard deviations and also effective sample sizes. That's useful when you're performing a meta-analysis in order to use it as the prior in a future meta-analysis, and then want to quantify its weight or contribution in terms of an effective sample size. So, to summarize: Bayesian methods are useful for many meta-analysis applications, especially if you're looking at few studies, where other methods tend to behave funny sometimes, and also in applications where you want to use prior information in the analysis, or where you want to do the analysis in order to formulate a prior distribution for some other analysis. With the bayesmeta package you get quick and nicely reproducible computations, which makes things easier as compared to MCMC methods: you can do quick sensitivity checks and easily run simulations. You get detailed output with all the details of the analysis, and you can access all the integrals that you may be interested in, hopefully. You get access to prediction and shrinkage intervals and so on, and you can even figure out marginal likelihoods, which is useful for things like Bayes factors and then model selection or model averaging applications. Currently we are thinking about whether it's possible to extend the same methods to meta-regression, which would be quite interesting and hopefully useful. And finally, I have some references. — Thank you so much, Christian, that was a wonderful presentation. Unfortunately we don't have time for questions now, but please feel free to post them in the chat or reach out to the speaker. We have a short break coming up, and we will be back at 3:45. In the meantime, the spatial chat is open, so you can follow the green button at the bottom of the screen during the break and chat with others.
And we will see you back after the break, again at 3:45. Thank you. Thank you, Christian. Bye.