Hello everyone, I'm very happy to be part of this exciting session on user interfaces for meta-analysis. I was asked to talk a little about an open-access guide for new meta-analysts that we developed, called Doing Meta-Analysis with R. I'll mainly be talking about the motivation, concept and features of this guide. The guide itself is not a graphical user interface in the strictest sense of the word, but I think there are still many commonalities. In particular, I also wanted to discuss this guide with respect to concepts and personal learnings that I hope are relevant for many people who develop R-based meta-analysis tools, especially if they are geared towards beginners. In the end, I will also present a few other projects that I like and that I think are really helpful, particularly for novices. Let's start with our motivation. I think that many people would agree that R packages and R-based tools arguably provide the most comprehensive, state-of-the-art ecosystem for research synthesis and meta-analysis that we have to date. I also believe that this entire ecosystem has an enormous potential to increase the overall quality of meta-analytic research. However, this of course rests on the assumption that applied researchers across disciplines know that these tools exist and how to apply them properly. Therefore, I think it's really important to acknowledge a few real-life barriers that may keep intended end users away from tools in R, for example packages, that may actually be very helpful for them. First, for better or worse, most meta-analyses are conducted by applied researchers, often by early-career researchers or PhD students, and not always by statisticians and evidence synthesis experts. Secondly, mastering R requires continued practice. It has a steep learning curve, and it is seldom used in some disciplines. Meta-analysis itself is also not part of every statistics curriculum.
This wealth of available tools in R is also to some extent a double-edged sword: it makes it harder to navigate this vast ecosystem, especially if you're a beginner, in the sense of "where do I start?". Another real-life barrier is that many, if not most, researchers operate under considerable time pressure. This of course makes it much harder, and also more quote-unquote costly, to adopt new methods. This may be particularly true for R, which requires quite some time and frustration tolerance to become proficient in. All of these barriers may potentially limit access to software that we develop for or with R, and thus I think they are relevant for GUIs and R-based tools in general. Our goal, therefore, was to develop a guide geared towards individuals without prior knowledge of meta-analysis or programming, or both, while trying to pay particular respect to motivational, practical and cultural barriers. Another goal was to make that information accessible to as many people as possible, for free. In terms of the instructional design, a big inspiration for us was the work of Greg Wilson, in particular his book Teaching Tech Together. For several years, Greg was the head of instructor training at RStudio, and I think there's really a lot to learn from him in terms of developing effective learning tools in whatever shape or form. I wanted to briefly highlight two of his concepts that we tried to implement when developing this guide: mental models, and aspects of motivation and especially demotivation. The first concept that I found really helpful is mental models. Greg Wilson defines them as a simplified cognitive representation of the most important parts of some problem domain that is good enough to enable problem solving. It's important that these mental models are not static. They are assumed to progress from novice to expert level, so people develop, reorganize and expand their mental model to solve problems and handle exceptions in practice.
The goal we tried to achieve, therefore, was to allow learners to build a mental model for meta-analysis using R. We call this a conceptual understanding in the guide, meaning that a basic mental model is conveyed, which can be used to handle problems in practice and, importantly, also to integrate new expertise, learn new skills and find new tools for continued learning. To illustrate this idea of mental models a little, I conducted an experiment. I gave student assistants who were working with me a few minutes to come up with their own personal mental model of meta-analysis for illustration. The first model, which you can see here, is the one by Alisa, who at that time had just started working with us and had not conducted a meta-analysis before. The crucial thing you can see here, in my view, is that, being a beginner, Alisa already has a mental model, which connects meta-analysis to other domains of her knowledge. Of course, this mental model is not sufficient yet to conduct a meta-analysis in practice. I think we see the difference quite well when we look at the second mental model, drawn by Lena, who has gathered quite some knowledge on meta-analysis over the last few years. And it's quite obvious that this mental model is much larger; it has many more nodes. But importantly, it is also densely connected, which means that it becomes easier to solve problems and accommodate exceptions because there are simply more links. We also thought about motivation and demotivation of learners. Since we assumed that most people opening a guide for meta-analysis would be to some extent motivated, the primary question was: how do we not demotivate learners? And there's evidence that this is actually easier to do than one may think. There's research, for example, showing that minor environmental cues can deter learners, particularly when they're under a quote-unquote stereotype threat.
It's also important to acknowledge that a large proportion, some studies say up to 80%, of graduate students in non-mathematical fields experience statistics anxiety, which can also be a motivational barrier. Another factor are self-conceptions and preconceptions. For example, there's a study showing that many students, but also instructors, actually falsely believe that programming skills are bimodal: there are people who know how to do it or can learn it, and the others can't. We tried to address these issues by building the guide around authentic tasks, meaning that contents are geared towards the direct implementation of knowledge using real-life examples, and by starting very early with relatively easy hands-on exercises to generate so-called early wins. Such early wins are known to be a really good way to boost motivation, much better than verbal encouragement, because they generate tangible evidence for learners that they can achieve something. Another thing that we did was to avoid mathematical notation and jargon unless it is properly introduced, and to start with terminology that is used in practice and then correct it if necessary. Now to the actual contents and implementation of the guide. Here you can see the core components. We start with an introduction to meta-analysis, which includes some historical background, common pitfalls, and importantly the problem specification and study search. This is not the focus of the guide, but we thought it is important to include these points, especially since they have a substantial impact on the analysis later on. The next part is about R and really starts with the absolute basics: how to install R, what the difference between R and RStudio is, how to import data, which is also a big barrier for first-time R users, and also how to manipulate data in R, with a direct focus on meta-analytic datasets. We then discuss the core parts that we consider essential, or at least highly relevant, for every meta-analysis.
We then use that as a basis to shed light on a few more advanced but frequently used methods, and then have a section devoted to helpful tools in the R meta-analysis ecosystem. This also includes tools for reporting, by the way, or for improving the reproducibility of analyses in R. The guide lives on an open-access website, which we built using bookdown with a really nice, freely available Bootstrap 4 theme, as well as a few additional packages and some minor HTML and CSS tweaks. The guide also introduces a companion R package called dmetar, which contains all the datasets used in the guide and also a few minor helper and wrapper functions. We also release the entire code and material used to build the guide in an open GitHub repository, with the intention to help others adapt or repurpose its contents more easily, for example for their own teaching. I'm always very happy to hear about instructors using the guide as part of their own courses. To the right you can see two very cool examples in which the guide was used as a basis and then adapted for a course. In terms of future plans, I just wanted to briefly mention that what I really like about this technical setup is that it allows us to treat the guide as a sort of living document: to regularly integrate new research and tools, add new chapters, and incorporate feedback fairly easily. Below you can also see a few things that are on our to-do list and that we plan to add to the guide as soon as our time allows. Something I'm particularly interested in, actually, are ways to make the guide more accessible in the real sense of the word, so to introduce functionality, for example, for people with visual impairments or neurodivergent individuals. If you happen to know a good source on how to do this, feel free to let us know. This would be really interesting for us. I also wanted to briefly mention the limitations of the guide and the conceptual approach that I just sketched out.
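To give you a feel for the kind of hands-on workflow the guide teaches, here is a minimal sketch of pooling precalculated effect sizes with the meta package. The data frame, its column names (author, TE, seTE) and the numbers in it are invented purely for illustration; in the guide itself, the example datasets come from the companion dmetar package.

```r
# Minimal random-effects pooling sketch using the {meta} package.
# The data frame below is made up for illustration; the guide's own
# examples use datasets shipped with the companion {dmetar} package.
library(meta)

dat <- data.frame(
  author = c("Study A", "Study B", "Study C"),
  TE     = c(0.35, 0.52, 0.18),  # standardized mean differences
  seTE   = c(0.12, 0.15, 0.10)   # their standard errors
)

# Pool the effects; metagen() works with precalculated effect sizes
m <- metagen(
  TE = TE, seTE = seTE, studlab = author,
  data = dat, sm = "SMD"
)

summary(m)  # pooled effect plus heterogeneity measures (tau^2, I^2)
forest(m)   # forest plot of individual and pooled effects
```

For raw outcome data rather than precalculated effect sizes, meta also offers functions such as metabin() for binary and metacont() for continuous outcomes.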
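Since the technical setup might be of interest, this is roughly what building such a bookdown website looks like. The file name "index.Rmd" is the conventional entry point and is an assumption here; the actual repository layout of the guide may differ.

```r
# Sketch of rendering a bookdown website with the Bootstrap-4-based
# theme that bookdown provides. Assumes the {bookdown} package is
# installed and the chapters live as .Rmd files next to index.Rmd.
library(bookdown)

# Render all chapters into a single HTML website
render_book("index.Rmd", output_format = bs4_book())
```

Because the whole site is rebuilt from plain R Markdown sources, updating a chapter or adding a new one is just an edit plus a re-render, which is what makes the living-document approach practical.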
I think one important risk is contributing to what Box calls statistical "cookbookery". I mean, the guide merely tries to be an entry point for meta-analysts, but it's quite clear that continued learning and self-reflection are required to become an expert. The hands-on nature of the guide may potentially also lure people into following a sort of cargo-cult-science approach to meta-analysis, where you simply follow certain rules or procedures because others do so, not out of an actual understanding of the subject matter. The early-win strategy I mentioned previously may also be associated with certain risks, because it may potentially lead to a false confidence that meta-analytic methods are always easy or trivial, or to neglecting their assumptions. I therefore think it's really important to keep this trade-off in mind, particularly because it's relevant for many types of GUIs or tools, since they essentially always aim to simplify things for users in some way or to some extent. Lastly, I'd like to briefly present other tools that I think are really cool and potentially helpful for new meta-analysts. In terms of GUIs and Shiny apps, it's really amazing to see the number of freely available tools developed over the course of the last years. For meta-analysis in general, there's, of course, MAVIS, which has been around for quite some time now, but there are also other options that are based primarily on the meta package, for example. For network meta-analysis, there are tools like gemtc or MetaInsight, and there are now also really cool apps for diagnostic test accuracy studies and for meta-analytic SEM. In terms of more specialized apps, I wanted to mention the Meta-Showdown Explorer, which allows us to explore the impact of factors such as publication bias, heterogeneity, questionable research practices, etc. on the false positive rate of meta-analyses, which I find really, really interesting.
There are also tools focusing on publication bias, such as p-uniform or the P-Curve app. There's the metapower app that I discovered recently, which allows you to calculate the a priori power of a meta-analysis, and there are also, of course, really helpful tools now for risk of bias assessments, in particular the really gorgeous and highly recommended robvis app. So that's all for now. Thank you so much for your attention, and if you happen to have any questions or comments, I'd love to hear them.