Hello and welcome everyone. I'm going to give a brief introduction to the new meta-regression features that are now implemented in the bayesmeta R package. I'll first say a few words about simple meta-analysis in general and the bayesmeta package, and then I'll move on to the case of covariables that are available at the study level. I'll briefly talk about different parameterizations, about several covariables, and about binary and continuous covariables. I'll point out some more advanced applications and close with some conclusions. So far, the bayesmeta package implemented the simple normal-normal hierarchical model, which is sketched on this slide. What we have is a number of estimates, called y_i here, along with their standard errors sigma_i. The assumption is that each estimate measures a true parameter theta_i with some uncertainty given by the standard error. These true values are not necessarily identical across all the studies; they also have some variation associated with them, given by the heterogeneity variance tau squared. So the parameters we have in the end are, first of all, the overall mean mu and the heterogeneity tau. Sometimes you're also interested in the study-specific means, the theta_i parameters, for shrinkage estimation. That's the model implemented in the bayesmeta() function, and the way it's done uses a numerical trick; it's not based on MCMC. That's the neat thing: you have direct access to posterior densities, posterior distribution functions, and so on, and it gives you nicely reproducible and quick results. Extending to the case of meta-regression, the setup is similar: we still have a number of estimates and standard errors. In addition, we have a number, d, of covariables available at the study level.
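The two models just described can be written out as follows; this is a sketch matching the notation used on the slides (y_i, sigma_i, theta_i, mu, tau):

```latex
\begin{align*}
  y_i \mid \theta_i &\sim \mathrm{N}(\theta_i,\, \sigma_i^2)
    && \text{(measurement level)}\\
  \theta_i &\sim \mathrm{N}(\mu,\, \tau^2)
    && \text{(study level, simple NNHM)}
\end{align*}
% For meta-regression, the single overall mean \mu is replaced by a linear
% combination of d study-level covariables x_i with coefficients \beta:
\begin{align*}
  \theta_i &\sim \mathrm{N}\!\bigl(x_i^{\top}\beta,\, \tau^2\bigr),
  \qquad x_i, \beta \in \mathbb{R}^d .
\end{align*}
```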
And the model still looks similar: we still have the study-specific means theta_i and their measurement uncertainty given by sigma_i. But the mean of these study-specific parameters is now given by a linear combination of the covariables times the unknown coefficients. For the model, this means that instead of the single overall intercept or overall mean parameter mu, we now have a number of coefficients to be estimated, and then again the heterogeneity and the study-specific means. The model is similar, and the computational approach still works. That's now implemented in the so-called bmr() function. And I'll go straight to an example. It includes six studies: we've got six estimates of a log odds ratio, along with their standard errors quantifying the uncertainty. In addition, we have this treatment covariable: the six studies are based on two different treatments, one is daclizumab, the other one is basiliximab. We can account for that in the analysis if we want to estimate two individual means for these two treatments. In order to do that, we need to code this information for the analysis, and we do that by putting it into this regressor matrix, capital X. In this case it has two columns for the two parameters that we're after, coding the two group assignments, and it has six rows corresponding to the six studies, with zeros and ones indicating which study belongs to which treatment. The actual implementation then looks like this: we first load the package, we load the data set, and then we compute the log odds ratios and their standard errors. In addition, we need to specify the regressor matrix, the matrix that we've seen on the previous slide, now implemented in R with two columns and six rows for the two parameters and six studies.
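As a minimal sketch of this setup in plain R — the event counts and the treatment assignment below are made up purely for illustration, not the data from the talk's example:

```r
# Hypothetical sketch: log odds ratios, standard errors, and the 6 x 2
# regressor matrix X coding the two treatment groups.
# (All counts and group assignments are invented for illustration.)

# events / totals in the treatment and control arm of each of 6 studies:
exp.events  <- c( 4,  7,  5, 10,  6,  8)
exp.total   <- c(30, 40, 25, 50, 35, 45)
ctrl.events <- c(11, 15,  9, 18, 12, 14)
ctrl.total  <- c(30, 40, 25, 50, 35, 45)

# log odds ratios and their standard errors (standard 2x2-table formulas):
y <- log((exp.events / (exp.total - exp.events)) /
         (ctrl.events / (ctrl.total - ctrl.events)))
sigma <- sqrt(1 / exp.events + 1 / (exp.total - exp.events) +
              1 / ctrl.events + 1 / (ctrl.total - ctrl.events))

# treatment covariable and corresponding regressor matrix
# (zeros and ones indicating each study's treatment group):
treat <- c("basiliximab", "daclizumab", "daclizumab",
           "basiliximab", "daclizumab", "basiliximab")
X <- cbind("basiliximab" = as.numeric(treat == "basiliximab"),
           "daclizumab"  = as.numeric(treat == "daclizumab"))
```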
And to perform the analysis, we just need to call this bmr() function. We assign the result to this bmr01 object, and we supply the data, the estimates and standard errors, as well as the capital X, the regressor matrix. In addition, we also specify a prior for the heterogeneity. We could also omit that; in that case the default of a uniform heterogeneity prior is used. Executing this command, we get this result, the default printout of the analysis output. The exact details are not so relevant; the important thing is that if you're familiar with the previous bayesmeta output, this looks very similar. It's just that now we have three parameters in this particular case: we still have estimates for the heterogeneity, and now we also have estimates quoted for the two regression parameters, the two beta coefficients corresponding to the basiliximab and daclizumab treatments. We can also illustrate the results graphically; for example, we can look at the marginal posterior densities for the heterogeneity, for beta 1, the one regression parameter, and for beta 2, the other regression parameter. I guess the more common way to illustrate things is using a forest plot, and we can do that as well. The forest plot looks slightly different from what you may be used to from the bayesmeta package so far. We have the usual setup: the estimates and their standard errors are quoted, and they are illustrated on the right as well. In addition, in the left half we now also have the regressor matrix reproduced, so we can see which study corresponded to which regressor matrix settings. And at the bottom, we don't just have a single overall estimate; we've got two parameters to be estimated, one of which is the basiliximab coefficient.
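A sketch of that call, assuming the bayesmeta package (the version including bmr()) is installed and that vectors y and sigma and matrix X have been set up as before; the half-normal prior with scale 0.5 is just an example choice, not the prior used in the talk:

```r
library("bayesmeta")

# fit the meta-regression; "tau.prior" may be omitted, in which case
# the default uniform heterogeneity prior is used:
bmr01 <- bmr(y = y, sigma = sigma, X = X,
             tau.prior = function(t) dhalfnormal(t, scale = 0.5))

# default printout: heterogeneity estimate plus the two beta coefficients
print(bmr01)
```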
The other one is the daclizumab coefficient, that is, the other group mean. We've got the estimates quantified here and also illustrated in the plot on the right. From the bmr() function's output, which we assigned to this bmr01 object, we can access more detailed information if we want: on the parameters, on shrinkage estimates, and so on. For that, a number of functions are included in this output object. To look at posterior densities, posterior cumulative distribution functions, quantile functions, and so on, we can use these dposterior, pposterior and qposterior functions; the naming is similar to what you may be used to from other probability distributions in R. So for example, we can look at the posterior 99% quantile of the tau parameter and get that number from the output. Similarly, we can look for quantiles of the beta parameters, we just need to specify which one we're looking for, or we can also compute posterior cumulative distribution functions, and so on. Now, the difference to the simple meta-analysis is that in a meta-regression you're quite often also interested in linear combinations of the regression coefficients. We can access these as well, and for that, again, a set of functions is available. What we need to do in addition is specify the covariable vector that we're after. For example, we've got the basiliximab and the daclizumab treatment effects, and one obvious question might be: is there a difference, or how large is the difference between these two group means? We can implement that by noting that the difference between the two coefficients is just plus one times the one coefficient plus minus one times the other coefficient.
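As a minimal sketch of such a contrast in plain R — the coefficient values here are hypothetical, just to show the arithmetic of the linear combination:

```r
# two hypothetical posterior means for the group coefficients (log-OR scale):
beta <- c("basiliximab" = -2.2, "daclizumab" = -1.3)

# contrast (covariable) vector coding "daclizumab minus basiliximab":
x.contrast <- c("basiliximab" = -1, "daclizumab" = +1)

# the linear combination x'beta gives the difference in group means:
difference <- sum(x.contrast * beta)
difference  # approximately 0.9
```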
So that would be daclizumab minus basiliximab. We supply this contrast vector, this coefficient vector x, and then we can get the posterior density, the distribution function, or, in this case, a credible interval for this difference in group means. Similarly, we can look at the prediction interval, in this case for one of the groups, which also includes the heterogeneity, to make a prediction for a future study, for example. And we can also look at shrinkage intervals if we're interested in the study-specific effects, by specifying which effect we're after, either by its index or by its name. I guess specifying these contrasts or covariable vectors is quite convenient, and you can also do that for the forest plot, to illustrate them graphically. This shows essentially the forest plot that we've seen previously, but now we have supplied a number of contrast vectors: the ones that we've seen previously, (1, 0) and (0, 1) for the two individual group means, but we can also include the difference in the same plot. At the bottom we can again see the corresponding coefficient settings, and we get an estimate of the effect in each of the two groups and also an estimate of the difference between the two groups, simply by supplying the corresponding coefficient vectors. You've got a similar functionality for the summary function as well, so you can also get the estimates out from there, if you're not only interested in the plot but also in the actual numbers. Just a general remark on the setup of the regression matrix: the regressor or covariable matrix is usually not unique at all, so there are different ways to code the same regression problem. One example would be the default that is often used in regression applications in R, using an intercept-and-offset coding.
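A sketch of such a forest plot call, assuming a fitted bmr01 object as above; the argument name "X.mean" for supplying contrast vectors is my understanding of the interface, so please check the package documentation:

```r
# show the two group means and their difference in one forest plot
# by supplying one contrast (coefficient) vector per row:
forestplot(bmr01,
           X.mean = rbind("basiliximab" = c(1,  0),
                          "daclizumab"  = c(0,  1),
                          "difference"  = c(-1, 1)))
```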
And there are other examples, but as a general remark: these should lead to consistent results. One note of caution: in case you're using proper priors for the regression parameters, the beta parameters, then if you want to switch from one parameterization to another, that needs to be accounted for. That's a difference from frequentist analysis in general. So far we've only looked at binary covariables; here is just one brief example also including a continuous covariable. In this case it's a meta-analysis including 35 studies, where each study was using a different onset of the medication and a different dose, and we can see how that affects the efficacy of the treatment. We can code this in terms of four coefficients; there are, again, of course, different parameterizations possible. In this case, we have a four-column matrix for these regressors. And we can have a quick look at the output: we can model continuous covariables along with binary covariables, so we've got two groups here and a continuous covariable. It looks like in one group we have an effect of dose, while if the onset is late, we don't actually see a substantial effect of increasing or decreasing the dose. We've already seen, briefly, that we can figure out the contrast between basiliximab and daclizumab, the two treatments. Technically, that is a so-called indirect comparison, which means that we are, to some extent, getting into the domain of network meta-analysis models. And we can, in fact, analyze some network meta-analysis problems here. There are just some restrictions: we need to be looking at contrast estimates from the individual studies, we can only work with two-armed trials, and we have a single common heterogeneity parameter. Another extension is that from the bmr() output, we can also get the marginal likelihood.
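One possible coding of such a four-coefficient setup can be sketched as follows; the onset groups and dose values below are made up for illustration (only a few rows of the 35 are shown), and the actual example may be parameterized differently:

```r
# Hypothetical regressor matrix combining a binary covariable
# ("early"/"late" onset) with a continuous covariable (dose), using
# one intercept and one dose slope per onset group: four coefficients.
onset <- c("early", "late", "early", "late")   # assumed group labels
dose  <- c(2.5, 5.0, 10.0, 5.0)                # assumed doses

X <- cbind("intercept.early" = as.numeric(onset == "early"),
           "intercept.late"  = as.numeric(onset == "late"),
           "slope.early"     = (onset == "early") * dose,
           "slope.late"      = (onset == "late")  * dose)
```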
And that, of course, is interesting because it means we can compute Bayes factors, which is often useful for model selection, for variable selection, and also for model averaging applications. So just to sum up briefly: we've seen the extension from simple meta-analysis to meta-regression. I guess the most popular or most common application is going to be meta-analysis including subgroups of studies, looking at the means in the different groups, or at whether there actually is a difference between the groups. But there is, of course, a wide range of applications, including continuous covariables, network meta-analysis, model selection, and so on. Just a brief note of caution: if you're switching between parameterizations, unlike in frequentist models, you may need to account for that in the prior specification in case you're using an informative prior for the regression coefficients. The new bayesmeta package version should meanwhile be available on CRAN. And I'm happy to answer questions or comments now or later. Thank you very much.