Hello everyone. My name is Juan Claramunt, and I am the maintainer of the R package metacart. metacart integrates classification and regression trees (CART) into meta-analysis: it produces classification and regression trees to analyze the potential moderators that could explain the heterogeneity of the results obtained by multiple studies. Today we are going to do a small tutorial on how to use metacart.

First we load the package. metacart offers three datasets. Two of them are simulated datasets that we are not going to use today, and there is also a real dataset, which we will use in today's examples. Let's see what this dataset contains. It contains 106 studies about motivation-enhancing behavior change techniques. Here, g is the effect size of each of these studies, vi is the sampling variance of the effect size in each study, and then we have five indicator variables that record whether each behavior change technique is applied or not.

The package metacart contains two main functions. FEmrt performs the fixed effect model, which can be applied to studies that are identical or almost identical. The second main function is REmrt, the random effects model, which is applied to studies that use different populations, or similar populations whose results we want to generalize.

Let's start with an example for the fixed effect model. First we have the formula: the effect size predicted by the indicator variables. Then we include the sampling variance of the effect size, the data, and a pruning parameter, because as we are growing trees we have to prune the tree at the end to avoid overfitting.

Apart from these arguments there are others. For example, we can select only certain rows, that is, certain studies. In this first example we select only rows 5 to 50, using the subset argument. We can also select the studies that fulfill a condition, in this case, for example, only those whose effect size is above 0.5. If you look at the results, we obtain completely different trees: this one is a small tree, while the previous one is a much larger tree.

The next parameter we can change is the pruning parameter c. To prune the tree we use what we call the c-standard-error rule, and this is the c used in that rule. If we run the function with c equal to 0 and with c equal to 0.5, we also obtain two completely different trees: with 0.5 we obtain a small tree, while with 0 we obtain a much larger tree.

Finally, another option of the FEmrt function is the control argument. This control argument is quite common in classification and regression trees; in fact, we take it from the package rpart, which is one of the most used classification and regression tree packages. Here we can set the number of cross-validations we want to do, the minbucket, the minsplit, and the complexity parameter: that is, how many studies we need in the terminal nodes, how many studies we need in a node so that we can split it, and some other parameters that are commonly used when growing classification and regression trees. A sketch of all these calls is shown below.
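Here is a minimal sketch of the calls just described, assuming the dat.BCT2009 dataset shipped with the package and moderator columns named T1, T2, T4, and T25 as in the package examples; please check the exact column names and argument defaults against the metacart documentation.

    library(metacart)
    data(dat.BCT2009)  # 106 studies on behavior change techniques

    # Fixed effect meta-tree on the full dataset, default pruning
    fe1 <- FEmrt(g ~ T1 + T2 + T4 + T25, vi = vi, data = dat.BCT2009, c = 1)

    # Only rows 5 to 50, via the subset argument
    fe2 <- FEmrt(g ~ T1 + T2 + T4 + T25, vi = vi, data = dat.BCT2009,
                 c = 1, subset = 5:50)

    # Only the studies whose effect size is above 0.5
    fe3 <- FEmrt(g ~ T1 + T2 + T4 + T25, vi = vi, data = dat.BCT2009,
                 c = 1, subset = g > 0.5)

    # c = 0 prunes less and therefore keeps a larger tree
    fe4 <- FEmrt(g ~ T1 + T2 + T4 + T25, vi = vi, data = dat.BCT2009, c = 0)

    # Tree-growing settings passed through an rpart-style control list
    fe5 <- FEmrt(g ~ T1 + T2 + T4 + T25, vi = vi, data = dat.BCT2009, c = 1,
                 control = rpart::rpart.control(xval = 10, minbucket = 5,
                                                minsplit = 10, cp = 1e-04))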
We also provide three output functions. First, the print function, which is what you saw before: it gives some brief information about the dataset and the call we made, a small summary (the moderators here are T1 and T4), and a basic scheme of the tree. So we have the root node, which is split into two nodes, and the second node is split into two nodes again. But we will see this much better with the summary and the plot functions.

With summary we again have the information about the study, the call, and the number of moderators. It also says that the tree has three terminal nodes, which we will see in the plot. Then we have the results of the permutation test. We use a permutation test because the Q statistic follows a chi-square distribution, which is usually significant when the sample size is large enough, so we use the permutation test to protect against type I error. In this case we observe that it is significant, which means that there is heterogeneity between the studies that the moderators can explain.

Finally, we have a small summary table. Here we have the three terminal nodes: the sample size of each terminal node, the Q statistic within each terminal node, the effect size in each terminal node, the standard error of that effect size, the test statistic we compute in each terminal node and whether it is significant or not, and the confidence interval with its lower and upper bounds.

We can see these results in the plot as well. We have the sample size of each of the nodes, then the moderator, then the condition (yes or no), and again the split. Below it we have another small plot, where we see whether the effect size is far from zero; in this case all three are significant, since we do not see any overlap with the reference line.

The second main function is the one for the random effects model, which is slightly different. We run something very similar: the same formula, and the same vi, data, and c. The options here are slightly different, though. The first one is exactly the same, the pruning parameter c, which is again used for the c-standard-error rule, and again we observe that different trees result. In this function, however, the control parameters are directly incorporated into the function, so they are not passed via rpart's control function. Here we have the maximum number of leaves; the minsplit, that is, the minimum number of studies that we need in a node so that we can split it; the convergence parameter; the minbucket, which is the minimum number of studies that must be in each of the terminal nodes; and the number of cross-validations. So in this case we can decide these parameters for the tree within the function itself.

Also, in this function there is the lookahead parameter, which is a Boolean: if it is TRUE we perform the look-ahead algorithm, and if it is FALSE we do not. Every time we split, we look for the best split point and the best split variable, in this case a split moderator; however, this greedy search might end in a local minimum of the loss function. With the look-ahead algorithm we check for the best variable and best splitting point not only in the current step but also one or even two steps ahead. That tries to prevent ending in a local minimum and instead find the global minimum of the objective function. In this case we keep it as FALSE, because the look-ahead algorithm is computationally expensive and this is a small tutorial. Again we have three output functions, similar to the previous ones; a sketch of the model call and these output calls follows.
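A minimal sketch of the random effects call just described, with illustrative values for the control arguments; the argument names maxL, minsplit, cp, minbucket, xval, and lookahead follow the package documentation, but please verify them and their defaults against your installed version.

    # Random effects meta-tree; the control settings are arguments of
    # the function itself rather than an rpart.control list
    set.seed(12345)  # the cross-validation step involves randomness
    re <- REmrt(g ~ T1 + T2 + T4 + T25, vi = vi, data = dat.BCT2009,
                c = 0.5,
                maxL = 5L,          # maximum number of leaves
                minsplit = 6L,      # min. studies in a node to split it
                cp = 1e-04,         # convergence parameter
                minbucket = 3L,     # min. studies in a terminal node
                xval = 10,          # number of cross-validations
                lookahead = FALSE)  # TRUE is more thorough but much slower

    # The same three output functions as for the fixed effect tree
    print(re)
    summary(re)
    plot(re)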
With print we see which the moderators are, again the call, the number of studies, and a small summary with the Q-between, that is, the Q statistic between the left and right nodes in each split, and tau-squared, which is an important statistic here because it measures the dispersion of the true effect sizes between studies.

Then we have again a summary, which is similar to the previous one: we have the sample size in each of the terminal nodes, the effect size, the standard error of the effect size, then the test statistic that we compute and its p-value, and then the lower and upper bounds of the confidence interval. We also have the permutation test again, which is again significant, I would say, because its p-value is 0.002.

Last but not least, the plot of this model. We observe here that we have more terminal nodes and more moderators. The advantage of using classification and regression trees is that we can observe interactions between these moderators: at first sight this group seems to be characterized by a single condition, but to arrive at this group we need T1 different from 0, T4 equal to 0, and T2 equal to 0. So there are interactions between the moderators that we can discover with metacart. Again we have the other small plot; we see that for this second terminal node there is an overlap with the horizontal line, and that is because this statistic is not significantly different from 0, while the other ones are significantly different from 0.

And that is all; this is the tutorial on how to use metacart. If you have any questions you can let me know: you can find our emails in the metacart documentation, which is available on CRAN, and we will be glad to help you with this package. Thank you very much and see you soon.