My name is Tanja Burgard. I am the Acting Head of the Research Synthesis Unit at the Leibniz Institute for Psychology and responsible for a platform for open and cumulative meta-analyses called PsychOpen CAMA. I will present this platform to you now. It is a test version that will be released within the next few weeks. For further questions on the project, you can contact me via email or talk to me after the presentation.

Why do we need a platform for open CAMAs (community-augmented meta-analyses) at all? Whereas the results in a printed meta-analysis are fixed, access to the data allows the replication of the analyses; moreover, subjective decisions, such as the choice of the statistical model and the moderator variables included in the model, can be varied to check the robustness of the results. The collected data can furthermore be reused for further analyses, for example when a researcher is interested only in a certain subgroup included in the data. Especially in very active research areas, the evidence of a meta-analysis may become outdated quickly. Access to the data set and a thorough documentation of the methodology make it easier to update a meta-analysis.

The basic concept of a community-augmented meta-analysis system, which has already been implemented by other initiatives such as MetaBUS or MetaLab, combines meta-analytic data in a repository with a graphical user interface offering meta-analytic outputs. Researchers can provide data to implement their meta-analyses on the platform or to extend existing meta-analyses. At the same time, researchers can use the functionalities of the graphical user interface to get a quick overview of the evidence on a topic and to replicate a meta-analysis.

Most CAMAs have relied on R Shiny architectures up to now. To make the architecture more robust and scalable, and to meet the needs of an infrastructure institute like ours, PsychOpen CAMA is a PHP web application. Requests from the user are translated and sent to an OpenCPU server.
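The request flow just described can be illustrated with a small sketch. OpenCPU generally exposes the R functions of an installed package over HTTP at URLs of the form `/ocpu/library/{package}/R/{function}`; note that the server, package, and function names below (`camaData`, `fit_random_effects`) are purely hypothetical placeholders, not the platform's actual API.

```python
def opencpu_call_url(base_url, package, func, output="json"):
    """Build the OpenCPU endpoint at which an R function can be called
    via an HTTP POST with form-encoded arguments."""
    return f"{base_url}/ocpu/library/{package}/R/{func}/{output}"

# Hypothetical example: a PHP front end could POST the user's selections
# to an endpoint like this and receive the meta-analytic output as JSON,
# e.g. requests.post(url, data={"dataset": "sex_diff_psychopathy"}).
url = opencpu_call_url("https://opencpu.example.org", "camaData", "fit_random_effects")
```

This endpoint pattern is what makes the PHP-plus-OpenCPU split workable: the web application never embeds R itself, it only translates user actions into HTTP calls.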
On this server, we have a self-maintained R package comprising standardized CAMA data as well as the meta-analytic functions for the web application. The data have to be standardized with regard to naming and data structure for interoperability with the functions of the package. The computations are executed on the R server, and the corresponding outputs are returned to the user via the graphical interface.

Now I will give you a quick demonstration of the look and feel of the platform, using our test version that is almost ready for release. First, we select a dataset in the domain of personality. The documentation tells us that the original publication is a master's thesis on the Dark Triad of personality. The dataset that we will look at now concerns sex differences in psychopathy, measured with Hedges' g. We can filter for subgroups and sort the data table by each variable. Thus we can easily find out that the data include effect sizes from 2002 to 2018.

In the data exploration, we get an overview of the distribution of the effect sizes of interest and of various moderator variables. Choosing publication year as a numeric and publication status as a categorical moderator, we get a grouped scatterplot. There are only a few effect sizes before 2010, and unpublished results are included from 2010 on. With a growing number of effects, the variation of the outcomes also increases. Here we have violin plots for different subgroups. Many studies in the field were conducted in Jonathan's laboratory, and the results from this lab are more coherent than the results of other labs.

Next, we go to the basic analyses to fit meta-regression models. We choose the random-effects model with two moderators, number of items and laboratory. A growing number of items increases the sex difference in psychopathy significantly, whereas the laboratory has no relevant effect on the sex differences. The model with two moderators accounts for about 9% of the heterogeneity in effect sizes.
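The random-effects model behind such outputs can be sketched in a few lines. PsychOpen CAMA runs these computations in R on the server; the following self-contained Python sketch of the standard DerSimonian-Laird estimator is only meant to illustrate the kind of computation involved, not the platform's actual implementation, and the effect sizes in the usage example are made up.

```python
import math

def dersimonian_laird(effects, variances):
    """Pooled random-effects estimate using the DerSimonian-Laird
    estimator for the between-study variance tau^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]                              # inverse-variance weights
    sw = sum(w)
    mu_fe = sum(wi * y for wi, y in zip(w, effects)) / sw         # fixed-effect mean
    q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, effects))   # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                            # truncated at zero
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    mu_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return mu_re, se, tau2

# Three hypothetical Hedges' g values with their sampling variances:
mu, se, tau2 = dersimonian_laird([0.3, 0.5, 0.4], [0.04, 0.04, 0.04])
```

Meta-regression with moderators, as shown in the demonstration, extends this by replacing the pooled mean with a weighted regression on the moderator variables, using the same kind of weights.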
Now we want to look at potential publication bias. The contour-enhanced funnel plot takes into account the statistical significance of the results. There are only a few studies with non-significant results, and there is no asymmetry suggesting missing studies due to publication bias. With a p-curve, the evidential value of published findings is assessed by looking only at the distribution of significant p-values. The p-curve resulting here is the blue one. There is no indication of p-hacking, and the power estimate based on the p-curve analysis is 99%.

Finally, we want to estimate the power of a prospective new study with a sample size of 50 and a significance level of 5%. The estimate from the meta-analysis is assumed as the true effect size and is represented as the curve in the middle. Based on this information, a study with 50 participants would detect a true effect with a probability of 89%. For 80% power, which is a common threshold, a sample size of 39 would be needed.

There are still challenges and developments needed to increase the benefit of PsychOpen CAMA. On the side of data acquisition, we can use data from pre-registered studies collected in our laboratory PsychLab if the studies are eligible for an already existing meta-analysis. For meta-analyses published in one of the PsychOpen journals, authors could be asked to share the meta-analytic data. To assist data submission by users, we want to use our own submission tool within PsychArchives. To make data reuse for further analyses easier, we will connect PsychOpen CAMA to PsychNotebook, a JupyterLab-based notebook for psychology that will also be released soon. Users will be able to work with the meta-analytic CAMA data in a free R environment within their own accounts. As we can see, as a central research infrastructure institute for psychology, we and our users can greatly benefit from synergy effects with other tools and services. Thank you for your attention.
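The intuition behind the p-curve can be shown with a deliberately simplified sketch: if a set of findings has evidential value, significant p-values should pile up at the very small end (right skew), whereas p-hacking tends to produce a pile-up just below .05. The actual p-curve test is considerably more involved; this sketch, with made-up p-values, only computes the share of significant p-values below .025 as a crude skew indicator.

```python
def pcurve_sketch(p_values, alpha=0.05):
    """Crude right-skew indicator: the share of significant p-values
    that fall in the lower half of the significance range."""
    sig = [p for p in p_values if p < alpha]
    if not sig:
        return 0, None
    low = sum(1 for p in sig if p < alpha / 2)  # very small p-values
    return len(sig), low / len(sig)

# Hypothetical p-values: four are significant, three of those below .025,
# i.e. a right-skewed pattern consistent with evidential value.
n_sig, skew = pcurve_sketch([0.001, 0.003, 0.010, 0.040, 0.200])
```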
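The prospective power calculation at the end can be approximated with a standard normal approximation for a two-sided two-sample test: with n participants per group and a standardized effect d, the noncentrality is roughly d·sqrt(n/2). The effect size used in the demonstration is not stated in the talk, so d = 0.5 below is a hypothetical placeholder; the 89% and n = 39 figures come from the platform itself, not from this sketch.

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for a
    standardized mean difference d (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)      # critical value
    ncp = d * (n_per_group / 2) ** 0.5           # noncentrality parameter
    phi = NormalDist().cdf
    return (1 - phi(z - ncp)) + phi(-z - ncp)    # both rejection regions

# With the hypothetical d = 0.5, the classic rule of thumb of roughly
# 64 participants per group gives about 80% power.
p64 = power_two_sample(0.5, 64)
```

Solving the inverse problem, i.e. the smallest n reaching a target power such as 80%, is what the platform does when it reports the required sample size.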