Hello everyone, my name is František Bartoš, I'm a PhD candidate at the University of Amsterdam, and I welcome you to this short tutorial on the RoBMA R package, a publication bias adjustment method my colleagues and I have been developing over the years. The first question you may ask is: what does RoBMA stand for, and why the hell do I say "Rob" instead of "R"? Thanks for the first question: RoBMA stands for Robust Bayesian meta-analysis. We named the method as such since "Bayesian model-averaged publication-bias-adjusted meta-analysis", or BMAPBAMA, would have been way too much of a mouthful. To answer the second question: I am originally from Czechia, we pronounce our Rs like that, and some habits stick. So in this tutorial I will outline the goal and philosophy of the robust Bayesian meta-analysis methodology, how it differs from other publication bias adjustment methods, and how you can easily apply it with the R package.

Starting with the philosophy behind RoBMA, I will draw on two dictums that my supervisor uses in his introduction to Bayesian statistics. The first dictum is "never assert absolutely." In other words, the point is to be mindful of our assumptions: if we set them in stone and hardcode them into our models, no amount of data can ever change our mind. The second dictum is "average across what you don't know." Assuming that we followed the first dictum and did not assert absolutely, so we kept our mind open about different possible assumptions, how should we proceed to actually draw inference? The answer is simple: we just need to average across what we don't know, that is, across the uncertainty in our assumptions. In summary, and translating this to the meta-analytic domain, the goal of RoBMA is to entertain different possible assumptions about the data and draw inference in a way that acknowledges this uncertainty.
Some of the assumptions about the data are often discussed and are already included in standard meta-analytic models, such as the assumptions about the presence versus absence of the effect and the presence versus absence of additional between-study heterogeneity, so I will jump directly into the last domain: assumptions about publication bias. During this conference, you may have already heard, or will still hear, several brilliant researchers describing different methods for publication bias adjustment. RoBMA builds upon some of those methods, or some of their versions, and combines them together in accordance with the two main principles I mentioned previously. So I will just quickly summarize some of those other methods that we use in RoBMA, and I like to divide them into two main categories: models adjusting for the relationship between effect sizes and standard errors, such as trim-and-fill, PET-PEESE, and the endogenous kink model, and models adjusting for selection on p-values, such as 3PSM and 4PSM, AK1 and AK2, or special cases such as p-curve and p-uniform. Within the RoBMA framework we incorporate PET-PEESE, which conditionally adjusts for the relationship between effect sizes and standard errors (or standard errors squared), and PSM-style selection models, which adjust for different publication probabilities in different p-value intervals, such as marginally significant or non-significant p-values.

As I said previously, RoBMA differs from the remaining publication bias adjustment methods in its ability to combine inference from multiple models. We use Bayesian model averaging, which allows us to combine models representing different assumptions about the data-generating process and weight their results according to their posterior probabilities, a point which I will illustrate in the next slides. We use Bayes factors to quantify the evidence in favor of the presence or absence of the effect, heterogeneity, and publication bias.
This represents a major advantage over standard frequentist methods, which can only reject the null hypothesis. We use prior distributions to regularize the estimates or to incorporate prior knowledge, which helps with convergence, especially for the publication bias adjustment models, which often suffer from poor convergence. And lastly, Bayesian evidence updating is independent of the sampling plan, which is especially hard to justify in meta-analysis, where we don't control the sampling of the studies: the studies were already conducted and we are just collecting them from the literature.

To illustrate Bayesian model averaging, I will use this slide that we often use to describe it. You can think about Bayesian model averaging as a way of specifying different assumptions about the data, and here each assumption is represented by a different demon. So in the meta-analytic setting, we specify models that represent our assumptions about the presence or absence of the effect; similarly, models representing our assumptions about the presence or absence of heterogeneity; and last but not least, publication bias, so we specify additional model types representing those assumptions as well. Once we have specified all of our assumptions, we have the different models. We feed our models with data, and the models that predict the data the best grow and grow. And once a model grows, it also speaks more loudly, so we can hear it more clearly and base our inference on it more strongly. So here, in this case, you can see that the model representing the assumption of the presence of the effect is now the largest one, so we would be more inclined to conclude that the effect is present. We can also visualize this in a different way: we can think about splitting the model space and creating different models based on our assumptions.
So in order to obtain our model-averaged estimate, we can specify models that represent the absence or presence of the effect, so we split our model space into those two categories. Then we split the model space again according to the absence or presence of heterogeneity. And then we split it once more according to the absence or presence of publication bias. So in the end we end up with eight different model types representing the combinations of those assumptions. But as I said previously, there are also different possible assumptions about the presence of publication bias, so the demon that represents our assumption of the presence of publication bias is just an overall umbrella demon. How do we deal with this issue and specify the different models? Well, we can split this one model into multiple models that assume different publication bias processes. For example, we can specify the PET model, which assumes a relationship between effect sizes and their standard errors, or the PEESE model, which assumes a relationship between effect sizes and standard errors squared. Or we can specify different selection models, such as one-sided selection on significant p-values or two-sided selection on significant and marginally significant p-values. There are of course many more assumptions about the publication selection process that we could make. In the default RoBMA ensemble we use the following eight publication bias model types: the PET and PEESE publication bias adjustments, plus six additional weight functions that combine assumptions about different p-value thresholds and whether the selection is one-sided or two-sided. So that was the background about the methodology, and now I will move forward to the package.
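To make the model-averaging arithmetic above concrete, here is a minimal base-R sketch of how posterior model probabilities and an inclusion Bayes factor follow from prior model probabilities and marginal likelihoods. The numbers are made up for illustration; this is not RoBMA's internal code.

```r
# Toy sketch: eight model types from crossing effect x heterogeneity x bias.
prior_prob <- rep(1 / 8, 8)  # equal prior model probabilities
marg_lik   <- c(0.2, 0.5, 0.1, 0.4, 0.3, 0.9, 0.2, 0.6)  # hypothetical marginal likelihoods

# Bayes' rule over models: posterior model probabilities.
post_prob <- prior_prob * marg_lik / sum(prior_prob * marg_lik)

# Suppose models 5-8 assume the presence of publication bias.
bias       <- c(FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, TRUE, TRUE)
post_incl  <- sum(post_prob[bias])
prior_incl <- sum(prior_prob[bias])

# Inclusion Bayes factor: change from prior to posterior inclusion odds.
BF_incl <- (post_incl / (1 - post_incl)) / (prior_incl / (1 - prior_incl))
```

The same logic extends to the effect and heterogeneity components: sum the posterior probabilities of the models that share an assumption and compare the posterior odds to the prior odds.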
The package uses MCMC estimation with JAGS, run via the runjags R package in the background, to estimate the parameters of the models. Then we use marginal likelihood computation, implemented via the bridgesampling R package, to compute the marginal likelihoods and to proceed with the updating of the model probabilities. The two main features of the package are the prior() function, which creates a class that allows us to specify different assumptions about the data-generating processes via prior distributions (and we can of course summarize those objects with the print and plot functions), and the RoBMA() fitting function itself, which allows us to specify the different model ensembles based on combinations of our assumptions and also does the whole model averaging in the background, so we as users don't really have to worry about it. To specify the different hypotheses about the presence or absence of the effect, heterogeneity, and publication bias, you can use the priors_effect and priors_effect_null arguments, and similarly for heterogeneity and publication bias. The resulting object fitted with the RoBMA() function can then be further interrogated with the summary, print, and plot functions, and I will now quickly illustrate the usage of those functions and the output that we can obtain.

So, how would you use the package? You need to open RStudio, load the package, and, assuming that you have a data set that already has the effect sizes and standard errors computed, you can just use the RoBMA() function and pass the effect sizes and standard errors into the corresponding arguments. By default, RoBMA estimates the RoBMA-PSMA ensemble that is outlined in one of the papers I will show at the end of the slides. This ensemble consists of 36 different models: 18 models assume the absence of the effect, 18 models assume the absence of heterogeneity, and 32 models assume the presence of publication bias.
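Assuming a hypothetical data frame `my_data` with precomputed effect sizes and standard errors, a default fit would look roughly like this — a sketch of the interface as described above, not a complete workflow:

```r
library(RoBMA)

# my_data is a hypothetical data frame with columns d (effect sizes)
# and se (standard errors); by default RoBMA() fits the 36-model
# RoBMA-PSMA ensemble and does the model averaging in the background.
fit <- RoBMA(d = my_data$d, se = my_data$se, seed = 1)

summary(fit)  # component probabilities and model-averaged estimates
```

Setting a seed makes the MCMC estimation reproducible, which is useful since the ensemble takes a while to fit.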
And the summary function allows us to print a summary of the fitted object. Here we see the prior probabilities that are set for the different assumptions, the resulting posterior probabilities of the different model components once the model was updated with the data, and the corresponding inclusion Bayes factors, which quantify our change in beliefs from prior to posterior; in other words, the evidence the data provided with respect to our models. So, for example, here we see an inclusion Bayes factor of 0.48 for the presence of the effect. We can turn it around and use 1 over 0.48, which is approximately 2, which means that the data were about two times more likely under models assuming the absence of the effect than under models assuming the presence of the effect. Similarly for heterogeneity: we can see the data were more likely under models assuming the absence of heterogeneity than under models assuming its presence. And lastly, for publication bias, here we see the data were more than 16 times more likely under models assuming the presence of publication bias, so we could summarize that as strong evidence for the presence of publication bias.

In the second part of the summary table, we can see the model-averaged estimates. This table summarizes the estimates averaged across all of the specified models, so including models assuming the presence and absence of the effect, heterogeneity, and publication bias. So overall, taking the complete uncertainty of the model space into account, we obtain a model-averaged mean estimate of 0.032, which is basically a zero effect, with the corresponding credible interval ranging from −0.051 to 0.218, and some more estimates for the heterogeneity parameter tau, the different publication bias probabilities omega, and the PET and PEESE coefficients. The package also provides different visualizations, but before the visualizations, a few more summaries.
For example, we can specify a type argument and print a summary of all of the fitted models, including their prior probabilities, their marginal likelihoods, their posterior probabilities, and their inclusion Bayes factors, which quantify which models were supported by the data the most. Here we see that model 8 and model 9 describe the data the best, and the data were 12 times more likely under the models assuming PET-style adjustment with no effect and no heterogeneity than under the remaining models. We can also look at the diagnostics, which is important since the package uses MCMC estimation: we can see the MCMC error, the minimum effective sample size, and the maximum R-hat for each of the specified models. If we want, we can also look into summaries of the individual estimated models. Here the individual argument allows us to create a summary of each individual model, which produces a long print of all 36 models; here is just a snippet of the last three models, where we can see the complete specification of each model, including the prior distributions and the posterior parameter estimates for each of the specified parameters.

Moreover, once we fit the model, we can also create visual summaries. The default plot function just displays the posterior distribution of the model-averaged effect estimate, but we can modify this function to provide more informative summaries or different visualizations. For example, we can create a ggplot object that can be further modified, or we can plot different parameters. For example, we can plot the heterogeneity parameter, overlay it with the prior distribution, and plot only the model-averaged estimate across models assuming the presence of heterogeneity, which is specified by the conditional argument. We can also use the traditional plotting arguments such as limits, labels, etc.
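The summaries and plots just described can be sketched roughly as follows; the argument names are as I recall them from the RoBMA package, and `fit` stands for a previously fitted RoBMA object:

```r
# Different views of the fitted ensemble via the type argument.
summary(fit, type = "models")       # all models: prior/posterior probabilities, inclusion BFs
summary(fit, type = "diagnostics")  # MCMC error, minimum ESS, maximum R-hat per model
summary(fit, type = "individual")   # long per-model print of priors and posterior estimates

# Visual summaries.
plot(fit, parameter = "mu")                     # posterior of the model-averaged effect
plot(fit, parameter = "tau", prior = TRUE,      # heterogeneity with the prior overlaid,
     conditional = TRUE, plot_type = "ggplot")  # conditional on heterogeneity models, as ggplot
```

The `plot_type = "ggplot"` option returns a ggplot2 object, so the usual layers (labels, limits, themes) can be added afterwards.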
The main advantage, and in my opinion the main feature, of the RoBMA package is that you can completely modify the models that you want to fit. For example, you can specify a one-sided hypothesis test with a much tighter prior on the effect sizes. You do this by setting the priors_effect argument in the RoBMA() call, where you specify a prior object: here a normal distribution with mean 0 and standard deviation 0.3, truncated from 0 to infinity, so all the prior density is assigned only to positive values. Or you can specify only particular publication bias adjustments; for example, here we specify only the PET- and PEESE-style models. And you can specify this for both the null components, assuming the absence of the effect, heterogeneity, or publication bias, and for the alternative components, assuming their presence. And you can of course combine all of those together.

And if you want to use RoBMA in teaching meta-analysis, but your students are not skilled enough with R and you don't want to scare them away, we have also implemented RoBMA in JASP, together with additional publication bias adjustment methods. So you can use the JASP graphical user interface to run all of these analyses and to specify all of these parameters with your mouse, just point and click. Here are just a few screenshots from JASP that visualize the corresponding summaries and output.

Before I end the talk, I will provide a quick summary of the RoBMA methodology. The main advantages are that RoBMA can incorporate the uncertainty we would otherwise ignore by selecting a single model, via Bayesian model averaging; it can provide evidence for either the null or the alternative hypothesis; it shows better performance, especially with small sample sizes; it has the capacity to incorporate expert knowledge; and it has the potential for sequential updating of evidence. There are, however, also some disadvantages.
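A hedged sketch of such a customized ensemble, with a one-sided effect prior and only PET- and PEESE-style bias adjustment; prior_PET() and prior_PEESE() are the helper constructors as I recall them from the package, and `my_data` is again a hypothetical data frame, so adjust to the package's current API:

```r
library(RoBMA)

fit_custom <- RoBMA(
  d = my_data$d, se = my_data$se, seed = 1,  # my_data is hypothetical
  # one-sided alternative: normal(0, 0.3) truncated to positive values,
  # so all prior density lies on positive effect sizes
  priors_effect = prior("normal",
                        parameters = list(mean = 0, sd = 0.3),
                        truncation = list(lower = 0, upper = Inf)),
  # keep only the PET and PEESE publication bias adjustments
  priors_bias = list(
    prior_PET("Cauchy",   parameters = list(0, 1), truncation = list(lower = 0)),
    prior_PEESE("Cauchy", parameters = list(0, 5), truncation = list(lower = 0))
  )
)
```

The corresponding `priors_*_null` arguments work the same way for the components assuming the absence of the effect, heterogeneity, or publication bias.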
First, it's relatively slow, especially in comparison to its frequentist counterparts, because it requires MCMC sampling, and it can fail under strong p-hacking. If you want to learn more about the methodology, I would recommend some of the papers we have written on it, or our tutorials. And thank you very much for tuning in, and have a nice rest of the conference.