Hi and welcome to ESMARConf 2022. You're watching special session three on graphical user interfaces. We've got a really exciting set of talks for you today. As always, this session is being live streamed to YouTube, and the individual presentations have been prerecorded and published there as well. Subtitles have been verified and can be auto-translated for those individual talks; automatic subtitles will be available shortly for this live stream. If you've got any questions for our presenters, you can ask them by replying to each presenter's individual tweet from the ESHackathon Twitter account — you can see them in our feed. Presenters may have time after their talks to answer some of those questions, or at the end of the session if time allows, and we'll endeavour to answer all questions soon after the event. We'd also like to draw your attention to the code of conduct, which is available on the ESMARConf website at esmarconf.github.io.

So, moving on to our first talk of the day: it's Mathias Harrer from the Technical University of Munich. Over to you, Mathias.

Hello, everyone. Very happy to be part of this exciting session on user interfaces for meta-analysis. I was asked to talk a little about an open-access guide for new meta-analysts that we developed, called Doing Meta-Analysis with R. I'll mainly be talking about the motivation, concept, and features of this guide. The guide itself is not a graphical user interface in the strictest sense of the word, but I think there are still many commonalities. In particular, I also wanted to discuss this guide with respect to concepts and personal learnings that I hope are relevant for many people who develop R-based meta-analysis tools, especially if they are geared towards beginners. In the end, I will also present a few other projects that I like and that I think are really helpful for novices.

Let's start with our motivation. I think many people would agree that R packages and R-based tools arguably provide the most comprehensive, state-of-the-art ecosystem for research synthesis and meta-analysis that we have to date. I also believe that this entire ecosystem has an enormous potential to increase the overall quality of meta-analytic research. However, this of course rests on the assumption that applied researchers across disciplines know that these tools exist and how to apply them properly. Therefore, I think it's really important to acknowledge a few real-life barriers that may keep intended end users away from tools in R — for example, packages that may actually be very helpful for them. First, for better or worse, most meta-analyses are conducted by applied researchers, often by early-career researchers or PhD students, and not always by statisticians and evidence synthesis experts. Secondly, mastering R requires continued practice. R has a steep learning curve, but it is seldom used in some disciplines, and meta-analysis itself is also not part of every stats course or curriculum in general. This wealth of available tools in R is also, to some extent, a double-edged sword: it makes it harder to navigate this vast ecosystem, especially if you're a beginner, in the sense of "where do I start?". And another real-life barrier is that many, if not most, researchers operate under considerable time pressure, and this of course makes it much harder and also more, quote-unquote, "costly" to adopt new methods. This may be particularly true for R, which requires quite some time and frustration tolerance to become proficient in.
All of these barriers may potentially limit access to software that we develop in or using R, and thus I think they are relevant for GUIs and R-based tools in general. Our goal, therefore, was to develop a guide geared towards individuals without prior knowledge of meta-analysis or R programming, or both, while trying to pay particular respect to motivational, practical, and cultural barriers. Another goal was to make that information accessible to as many people as possible, for free.

In terms of the instructional design, a big inspiration for us was the work of Greg Wilson, in particular his book Teaching Tech Together. For several years, Greg was the head of instructor training at RStudio, and I think there's really a lot to learn from him in terms of developing effective learning tools in whatever shape or form. I wanted to briefly highlight two of his concepts that we tried to implement when developing this guide: mental models, and aspects of motivation — and especially demotivation.

The first concept that I found really helpful is mental models. Greg Wilson defines them as a simplified cognitive representation of the most important parts of some problem domain that is good enough to enable problem-solving. It's important that these mental models are not static: they are assumed to progress from novice to expert level. So people develop, reorganize, and expand their mental model to solve problems and handle exceptions in practice. The goal we tried to achieve, therefore, was to allow learners to build a mental model of meta-analysis using R. We call this a "conceptual understanding" in the guide, meaning that a basic mental model is conveyed, which can be used to handle problems in practice and, importantly, can also be used to integrate new expertise, learn new skills, and find new tools for continued learning.

To illustrate this idea of mental models a little, I conducted an experiment: I gave student assistants who were working with me a few minutes to come up with their own personal mental model of meta-analysis, for illustration. The first mental model, which you can see here, is the one by Alisa, who at that time had just started working with us and had not conducted a meta-analysis before. The crucial thing you can see here, in my view, is that even as a beginner, Alisa already has a mental model which connects meta-analysis to other domains of her knowledge. Of course, this mental model is not sufficient yet to conduct a meta-analysis in practice. I think we see the difference quite well when we look at the second mental model by Luna, who has gathered quite some knowledge on meta-analysis over the last few years. It's quite obvious that this mental model is much larger — it has many more nodes. But importantly, it is also densely connected, which means that it becomes easier to solve problems and accommodate exceptions, because there are simply more links.

We also thought about motivation and demotivation of learners. Since we assumed that most people opening a guide on meta-analysis would already be motivated to some extent, the primary question was: how do we not demotivate learners? And there's evidence that this is actually easier to do than one may think. There's research, for example, showing that minor environmental cues can deter learners, particularly when they're under, quote-unquote, "stereotype threat".
It's also important to acknowledge that a large proportion — some studies say up to 80% — of graduate students in non-mathematical fields experience statistics anxiety, which can also be a motivational barrier. Another issue are self- and preconceptions. For example, there's a study showing that many students, but also instructors, falsely believe that programming skills are bimodal — that there are people who know how to do it or can learn it, and others who can't. We tried to address these issues by building the guide around authentic tasks, meaning that contents are geared towards the direct implementation of knowledge using real-life examples, and by starting very early with relatively easy hands-on exercises to generate so-called early wins. Such early wins are known to be a really good way to boost motivation — much better than, say, just verbal encouragement — because they generate tangible evidence for learners that they can achieve something. Another thing that we did was to avoid mathematical notation and jargon unless it is properly introduced, and to start with terminology that is used in practice and then correct it if necessary.

Now to the actual contents and implementation of the guide. Here you can see the core components. We start with an introduction to meta-analysis, which includes some historical background, common pitfalls, and, importantly, the problem specification and study search. This is not the focus of the guide, but we thought it was important to include these points, especially since they can have a substantial impact on the analysis. The next part is about R and really starts with the absolute basics: how to install R, what the difference between R and RStudio is, how to import data — which is also a big barrier for first-time R users — and how to manipulate data in R, with a focus, of course, on meta-analytic datasets. We then discuss the core parts that we consider essential, or at least highly relevant, for every meta-analysis, use that as a basis to shed light on a few more advanced but frequently used methods, and then have a section devoted to helpful tools in the R meta-analysis ecosystem. This also includes tools for reporting, by the way, and for improving the reproducibility of the analysis.

The guide lives on an open-access website, which we built using bookdown with this really nice three-column Bootstrap 4 theme, as well as a few additional packages and some HTML and CSS tweaks. The guide also introduces a companion R package called dmetar, which contains all the datasets used in the guide as well as a few minor helper and wrapper functions. We also released the entire code and material used to build the guide in an open GitHub repository, with the intention of helping others adapt or repurpose its contents more easily, for example for their own teaching. I'm always very happy to hear about instructors using the guide as part of their own courses, and to the right you can see two very cool examples in which the guide was used as a basis and then adapted for a course. In terms of future plans, I just wanted to briefly mention that what I really like about this technical setup is that it allows us to treat the guide as a sort of living document — to regularly integrate new research and tools, add new chapters — and it allows us to incorporate feedback fairly easily. Below you can also see a few things that are on our to-do list and that we plan to add to the guide as soon as time allows.
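To make the companion package concrete, here is a minimal getting-started sketch. The GitHub repository path below is the one I believe is current, and ThirdWave is, to my knowledge, one of the example datasets bundled with the guide — treat both as assumptions and check the guide itself if they have changed:

```r
# Minimal sketch: installing the guide's companion package from GitHub
# and pooling one of its bundled example datasets with meta::metagen().
install.packages("devtools")                      # if not yet installed
devtools::install_github("MathiasHarrer/dmetar")  # assumed repo path

library(dmetar)
library(meta)

data("ThirdWave")   # example dataset from the guide (name assumed)
head(ThirdWave)     # study labels, effect sizes (TE), standard errors (seTE)

# Pool the pre-calculated effect sizes in a random-effects model.
m <- metagen(TE = TE, seTE = seTE, studlab = Author,
             data = ThirdWave, sm = "SMD")
summary(m)
```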
Something I'm particularly interested in, actually, are ways to make the guide more accessible in the real sense of the word — to introduce functionality, for example, for people with visual impairments or neurodivergent individuals. If you happen to know a good source on how to do this, feel free to let us know; this is something we would be really interested in.

I also wanted to briefly mention the limitations of the guide and of the conceptual approach that I just sketched out. I think one important risk is contributing to what Fox called statistical "cookbookery". I mean, the guide merely tries to be an entry point for meta-analysts, but it's quite clear that continued learning and self-reflection are required to become an expert. The hands-on nature of the guide may potentially also lure people into a, sort of, cargo-cult-science-like approach to meta-analysis, where you simply follow certain rules or procedures because others do so, but not out of an actual understanding of the subject matter. The early-win strategy I mentioned previously may also be associated with certain risks, because it may potentially lead to false confidence that meta-analytic methods are always easy or trivial, or to neglecting their assumptions. I therefore think it's really important to keep this trade-off in mind, particularly because it's relevant for many types of GUIs or tools, since they essentially always aim to simplify things for users in some way or to some extent.

And last, I'd like to also briefly present other tools that I think are really cool and potentially helpful for new meta-analysts. In terms of GUIs and Shiny apps, it's really amazing to see the number of freely available tools developed over the course of the last years. For meta-analysis in general, there's of course MAVIS, which has been around for quite some time now, but there are also other options that are based, for example, primarily on the meta package. For network meta-analysis, there are tools like gemtc or MetaInsight. And there are also now apps for diagnostic test accuracy studies, which have been really cool, and for meta-analytic SEM. In terms of more specialized apps, I wanted to mention the Meta-Showdown explorer, which allows you to explore the impact of factors such as publication bias, heterogeneity, questionable research practices, et cetera, on the false-positive rate of meta-analyses, and which is really, really interesting. There are also tools focusing on publication bias, such as the p-uniform and p-curve apps. There's the metapower app that I discovered recently, which allows you to calculate the a priori power of a meta-analysis. And there are also, of course, really helpful tools now for risk-of-bias assessments, in particular the really gorgeous and highly recommended robvis app. So that's all for now. Thank you so much for your attention, and if you happen to have any questions or comments, I'd love to hear them.

Thanks very much, Mathias. So we'll move on straight away to our next talk, which is by Eirini Karyotaki, and she is from VU Amsterdam. So over to you, Eirini.

Hi, everyone. I'm very happy to be here with you today and to have the opportunity to present the Metapsy initiative, which aims at improving access to meta-analytic evidence on psychotherapy for mental health problems. I'll start with the example of depression. Depression affects hundreds of millions of people worldwide, so it's a common and debilitating mental health problem.
As you can imagine, it has attracted the attention of many researchers, and hundreds of randomized controlled trials have been conducted on the effects of treatments for depression. One of the most well-established treatments for depression is psychotherapy. And when we're talking about psychotherapy, we mean several different things: several different types of psychotherapy, like cognitive behavioral therapy or psychodynamic or problem-solving therapy and behavioral activation and so on. We're also talking about different treatment formats — for example, group psychotherapy or individual psychotherapy or digital interventions and so on and so forth. Apart from that, all these trials have been conducted on different populations, so different target groups — let's say women with postpartum depression, or people with comorbid depression and other mental health problems, or physical problems like HIV, for example, or cancer. So if we take into account all these differences among the trials that have been conducted so far, and also the fact that many of these trials have inconclusive or contradicting evidence, then we understand why meta-analytic efforts are important — why it is important to synthesize all these results and make sense out of them.

In this context, we have developed a meta-analytic database of all psychotherapies for depression. To develop this database, we searched the bibliographic databases PubMed, Embase, PsycINFO, and the Cochrane Library. We started this work a long time ago, and we tried to identify any RCT — any randomized controlled trial — on psychotherapy compared to any other condition. And when I'm saying any other condition, I mean another psychotherapy, another format or a version of the same therapy, or pharmacotherapy, or a control condition such as waiting list or care as usual or any other type of control condition. In these searches, we also included combined treatment, which is psychotherapy combined with antidepressants. And we focused on studies including individuals who either had depressive symptoms or met a diagnosis of depression. Of course, we had exclusion criteria, and among them were maintenance and relapse prevention studies, and studies including only a part of participants with depression — so studies that were conducted, for example, on anxiety and depression together were excluded. We really focused on studies that included participants with depression, and that's why we needed studies to include a specific eligibility criterion for depression; if they didn't include such a criterion, they were excluded. We also excluded psychotherapies that did not target depression — so, for example, CBT for insomnia — and stepped-care and collaborative-care studies, which often include multiple components, where it is very hard to tell whether the psychotherapy caused the effect or the different components that are usually included in collaborative-care studies.

So, as I told you, we started this a long time ago, but we have maintained it through annual searches: every year, we conduct another search to identify new studies on psychotherapy for depression. Throughout these searches, we have screened more than 21,000 titles and abstracts, of which we have screened 3,586 papers in full text. This process is done by two reviewers independently every time, and whenever we do not agree on inclusion, we reach agreement through discussion. Currently, the database includes 824 papers, and this is based on the update of 2021.
The latest update is still ongoing — we're still searching for the right papers to include. However, based on the previous update, we have 824 studies, and of these, 763 are RCTs with adults and 61 are RCTs with children and adolescents. I will focus a little bit on the database we have on adults. We have managed to categorize several comparisons, including psychotherapy versus pharmacotherapy, combined treatment versus pharmacotherapy alone, combined treatment versus psychotherapy alone, combined treatment versus placebo, psychotherapy versus control conditions — which is our biggest selection of studies, including 390 studies — also psychotherapy versus psychotherapy, studies on inpatients, unguided interventions, studies on different treatment formats, cognitive bias modification studies, and other comparisons.

After we identify the studies we want to include in the database, we always extract the data from these studies, and this process is also done by two reviewers independently. We have extracted data related to effect size calculation — so means and standard deviations and numbers of participants per group — but also information related to the treatment, such as number of sessions, duration, the type of treatment, the type of control condition, the duration of the assessment, the target group, data that are needed for the risk-of-bias assessment, but also some sociodemographic information, such as the proportion of females in the sample, et cetera.

As you can imagine, this is a massive endeavor. So at some point we thought that there is no need for other research groups to do the same thing; we thought it would be more helpful if we uploaded everything online and made it accessible to other research groups, or anyone else who wants access to these data. And so we did: we have created a publicly available online repository of our trials. We first started with one group of trials, the one on psychotherapy against control conditions, which is the biggest selection of studies we have, as I told you before. So we uploaded all our data online, and the most exciting part is that if you visit our website, which you can access using the URL metapsy.org, then you can simply run a meta-analysis by following some simple steps. If you run these meta-analyses, you will realize that you have access to all the data we have extracted, and by following some simple steps, you can perform, let's say, a meta-analysis on a subset of these studies. Let's assume that you want a particular year or a particular target group: you can simply select this year, select this target group, and run the analysis, which is very fascinating, as you can find all the results you need. You can find a summary of the main effects, or you can generate plots like forest plots, which are a nice representation of the results of a meta-analysis. You can run analyses of publication bias or risk-of-bias assessments, only by following some simple steps. And let's say that you want to do a more complicated meta-analysis — let's assume you want to do a network meta-analysis, for example. You can simply download all the data we have extracted and run this meta-analysis yourself. We have uploaded everything you might need in order to do something like that: the protocol of our database, instructions on how to conduct a meta-analysis, several tutorials. So everything you might need is there.
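To sketch what that download-and-analyze route can look like in R: below is a minimal example using the metafor package. The file name and column names are hypothetical placeholders — the actual variable names are documented with the database — but the overall workflow (compute a standardized mean difference per comparison, then fit a random-effects model) is the standard one:

```r
# Minimal sketch: a random-effects meta-analysis on a downloaded dataset.
# "metapsy_depression.csv" and the column names below are hypothetical
# placeholders; check the Metapsy documentation for the real ones.
library(metafor)

dat <- read.csv("metapsy_depression.csv")

# Compute standardized mean differences (Hedges' g) from the per-arm
# means, standard deviations, and sample sizes.
dat <- escalc(measure = "SMD",
              m1i = mean_arm1, sd1i = sd_arm1, n1i = n_arm1,
              m2i = mean_arm2, sd2i = sd_arm2, n2i = n_arm2,
              data = dat)

# Fit a random-effects model and inspect the pooled effect.
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)
forest(res)   # forest plot of the included comparisons
```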
And you can simply run a meta-analysis using the data we have been collecting. Of course, you might be interested in all the details about the implementation of this database — how you can run a meta-analysis, and what exactly we have done so that you can run these meta-analyses. You can ask all these questions to Mathias Harrer, who is here today, and I'm sure that he will be more than happy to answer them. From my side, I would like to talk to you a little bit about our plans for the future. I told you that we have uploaded only the data from the comparison of psychotherapy versus control conditions, but our biggest dream is to include many more datasets and to have many more datasets publicly available. We have been working on several other datasets, but we have also been collaborating with groups who have done exactly the same systematic effort in other mental disorders. So in the near future, we would also like to include datasets on anxiety, PTSD, insomnia, suicide prevention, obsessive-compulsive disorder, eating disorders, psychosis, borderline personality disorder, and grief. So stay tuned — more will come in the future. Also, our group has been collecting individual patient data from the trials on depression. These individual patient data concern the primary datasets of the trials. We have conducted several meta-analyses based on these data, and another possibility is that we also upload the results of these individual patient data meta-analyses using the Shiny app. We really hope that this database will be helpful for clinicians, researchers, patients, and other stakeholders who want to get a better understanding of the effects of psychotherapy for mental health problems. Of course, there is a whole team behind this effort, and this team is growing because the databases are growing, and we really hope that we will achieve our dream and upload the data from all the other databases in the near future. Thank you very much for your attention.

Thanks so much, Eirini. We'll move on to our next presenter now, who is Loretta Gasparini from the Murdoch Children's Research Institute. So Loretta, over to you.

Welcome to MetaLab: interactive tools for conducting and exploring community-augmented meta-analyses in developmental psychology. I'm presenting today on behalf of the MetaLab consortium. MetaLab is an interactive platform that hosts community-augmented meta-analyses in the field of developmental psychology. This includes topics spanning language and cognitive development in babies and children. In this presentation, I'll be giving a tour of the tools we have for contributing to and exploring MetaLab datasets.

Meta-analyses are incredibly valuable in providing a synthesized overview of the body of evidence on a topic of research. This synthesis can be used to inform future research: meta-analytic data can expose research gaps, identify which methods or experimental conditions yield more robust effects, and be used to estimate power. This means that there's value in a meta-analytic dataset beyond just what the authors have scoped to report in their publication. Researchers who are planning new experiments will have targeted questions about the nature of existing evidence on a topic that the meta-analysis has the potential to answer. But even if the meta-analytic dataset is made public, it requires some familiarity with the data structure, knowledge of statistical software, and meta-analytic methods to run new analyses and create plots.
Realistically, how many researchers are going to put in that effort to select their methods or sample size? Considering how costly meta-analyses are to conduct, we really want to make sure that researchers can take full advantage of the value that they provide. To make exploring the data much quicker and more accessible, we've created a visualization tool and a power analysis tool. Meta-analyses are also basically outdated as soon as a new study emerges. A community-augmented meta-analysis is one where anyone from the research community, not just the authors of the original meta-analysis, can contribute new data. While it's a nice idea that the community can contribute new data, it does require effort to facilitate that people actually do, and one barrier to this is that new data must be compatible with the existing data structure. A step we've taken at MetaLab to make this technically more accessible is providing a data validator in a graphical user interface, which informs you whether your dataset complies with the MetaLab structure. Today, I'll walk you through these user interfaces on the MetaLab website, deployed as Shiny apps.

This is the visualization tool. We start off by selecting a dataset — let's pick infant-directed speech preference. This topic looks at the extent to which babies prefer listening to people who talk in infant-directed speech, or what we might call "baby talk", versus talking normally like they would to other adults. Infant-directed speech has some different properties compared to adult-directed speech in intonation, pitch, and speed. Now, there's no evidence that infant-directed speech hinders language development, nor that it's necessary for successful language development, but researchers are interested in questions like whether infant-directed speech has properties that are valuable or desirable to babies, and in the cross-cultural differences in how it's used.

First, we're just looking at overall effect sizes here. We have a forest plot of all the effect sizes from each experiment, a scatter plot of effect sizes by age, a funnel plot for appraising publication bias, and a violin plot of effect size density. And here we see a summary of the meta-analytic model. This shows that across all experiments, the overall effect size is about 0.6 for babies preferring infant-directed speech over adult-directed speech, and the 95% confidence interval spans about 0.4 to 0.8. And we see the actual model output here.

Now let's select some moderators. By looking at the scatter plot's linear fit, we see that the effect seems to decrease with age, taking us close to a null effect of no preference either way by about 18 months of age. So now let's specifically select the moderator of age. When we do so and check the meta-analytic model, we actually see no effect of age. So maybe this linear trend that we see in the plot is just from fewer studies having investigated older babies, and maybe a few outliers here. So it looks like we really need more data to see if there's a real moderating effect of age. It might also matter whose voice the infant is hearing in these experiments. So here we select the moderator of speaker, and this shows us that most research has used voices of unfamiliar women. We might want more experiments that use recordings of the child's mother to see if there's a difference there, even though we can imagine that there are some methodological challenges in doing so compared to having recordings of an unfamiliar person.
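For R users following along: a moderator analysis like the ones just shown boils down to a meta-regression. Here is a minimal sketch with metafor; the column names (d_calc, d_var_calc, mean_age) are my assumptions about the dataset export for illustration, not confirmed field names:

```r
# Minimal sketch of a moderator (meta-regression) analysis with metafor.
# Column names are assumed; the real MetaLab field names are documented
# on the MetaLab website.
library(metafor)

dat <- read.csv("ids_preference.csv")   # hypothetical export of the dataset

# Random-effects model without moderators: the overall IDS preference.
overall <- rma(yi = d_calc, vi = d_var_calc, data = dat)
summary(overall)

# Add mean age (in days) as a moderator; a non-significant slope here
# would match the "no effect of age" result described above.
by_age <- rma(yi = d_calc, vi = d_var_calc, mods = ~ mean_age, data = dat)
summary(by_age)
```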
Almost no experiments, by the way, have looked at men's voices. The model shows no significant effect of speaker type, but there are large confidence intervals. So overall, the preference for infant-directed speech seems quite robust regardless of these moderators, but we've seen where future research might be helpful.

Now we'll move on to the power analysis tool. Again, we select the dataset that we want, so let's keep going with infant-directed speech preference. Straight away, we see that the overall effect size is 0.61, which requires 21 babies for 80% power at the level of p < .05. Or we can see, in the plot below, the power we would get for any sample size — say, if we wanted to achieve 90% power instead. Now let's add in some moderators. Maybe, based on our exploration of the visualizations before, we want to test older babies — let's say we're testing 18-month-olds. Now the estimated effect is smaller, so we would need 45 babies if we want a good chance of detecting a non-null effect. Let's see if any methodological choices might increase effect sizes so that we could test fewer babies. Eye tracking is where we measure babies' looking times at a screen as a proxy for their interest in the voices that they're hearing. This method is expected to yield a much smaller effect, so instead we'd need 73 babies. Behavioral methods involve babies turning their head to look at where the voice is coming from, and in contrast to eye tracking, this yields much larger effect sizes, pretty close to the overall effects we saw before. So we could test 42 babies for sufficient power using this method. It looks like a behavioral method is the way to go.

So now let's say I've conducted the experiment on infant-directed speech preference, testing 42 18-month-olds using a behavioral paradigm and recordings of the mother's and father's voices. Now I have new data to contribute to MetaLab. Here I've copied MetaLab's template of their data structure and entered the details of my experiments. This includes experimental details, details about the participants, like their native language and their age, and the results of my studies — so the means and standard deviations of the infant-directed and adult-directed conditions. Now I can use MetaLab's data validator to check whether I've entered the details correctly. Response mode, exposure phase, and mean age haven't been entered correctly. Let's go back to the spreadsheet and see where the problems are. Let's start with response mode and exposure phase. Okay, so it looks like I've used Australian spelling conventions to spell "behaviour" and "familiarisation", but these columns only accept pre-specified entries, and we use US spelling conventions. So I'll just make those changes. Now I go back to the validator and click Validate again. Great — we can see immediately that that has been fixed. Now let's see what's wrong with mean age. Okay, so I've added "days" in after the age, but this column only accepts numerical values, so this isn't necessary or even permissible: all values in this column need to be in days, with no additional characters. Now the data validator says that we're all good to go. I can contact MetaLab to let them know that I have new data, and they can add it to the dataset for me. Then the visualization and power analysis tools will also include those new data points, even though they weren't included in the original meta-analysis.
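As a quick aside on the power numbers mentioned above: treating an infant experiment as a one-sample test against no preference, an effect of d = 0.61 at 80% power lands at roughly 21 participants under a normal approximation. A sketch with the pwr package — which may disagree with the tool by a participant or two, depending on which approximation each uses:

```r
# Quick sketch: sample size for 80% power at d = 0.61, alpha = .05,
# treating the experiment as a one-sample comparison against no
# preference. MetaLab's tool may use a slightly different
# approximation, so expect small discrepancies (~21-23 babies).
library(pwr)

pwr.t.test(d = 0.61, power = 0.80, sig.level = 0.05,
           type = "one.sample", alternative = "two.sided")

# Power across a range of sample sizes, as in the tool's plot:
n <- 5:60
power_curve <- sapply(n, function(k)
  pwr.t.test(n = k, d = 0.61, sig.level = 0.05,
             type = "one.sample")$power)
plot(n, power_curve, type = "l", xlab = "Sample size", ylab = "Power")
```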
So here I've given an example of someone wanting to add data points to an existing MetaLab dataset, but the data validator works in exactly the same way for someone wanting to add a whole new meta-analytic dataset on a new topic and make sure that the data is all in the right structure before it's added to MetaLab for the first time. So, as you can see, these tools are quick and easy for users. They can be used to explore meta-analytic datasets, which can inform future studies, and they can help someone add a new dataset or new data points to MetaLab. These new data points then feed back into the visualization and power analysis tools, which means that the most current evidence is available and synthesized for a given topic. For people interested in using tools like these in their own areas of research: MetaLab has had one spin-off database for community-augmented meta-analyses in a different field of research, voice patterns in neuropsychiatric disorders. MetaVoice is based on the ideas, code, and structure of MetaLab, and it has the same visualization and power analysis tools. So the concepts and infrastructure I've presented today could be leveraged in your own area of research. The links to the MetaLab website and the GitHub repo are included on each slide here, so if you're interested in learning more, check those out — and we always welcome collaborations. If you're interested to learn more about MetaLab, please also check out our ESMARConf 2021 presentation, which was on our in-development MetaLab R package. And we also have a preprint of a tutorial on how to conduct transparent, reproducible meta-analyses using the MetaLab framework. I just want to thank the MetaLab team, all authors of and contributors to the meta-analyses, and the 45,000 babies and children who participated in the original studies, and their parents. Thanks for listening, and please get in touch with any questions.

Great, thank you so much, Loretta. Really lovely to see someone — or a team — who were here at the last ESMARConf as well, and to see the evolution of your work. That's brilliant, thank you so much. Our next speaker today is Kaitlyn Hair, from the University of Edinburgh. Over to you, Kaitlyn.

Hi everyone. Today we're going to be talking about R Shiny and why you should turn your R scripts into interactive web applications. First of all, what actually is R Shiny? Shiny is an R package which allows you to build interactive web applications without any other knowledge of web development. A Shiny app is always maintained by a computer running R, and this can be either locally on your PC or on a hosting platform like shinyapps.io. The benefit of using a hosting platform is that your application can be shared with anyone, as long as they have an internet connection and can access that website. Shiny apps have two main components. They have a user interface, which is what you see when you go onto the application and what the user interacts with. And they also have a server, which is instructions for what the computer should do in the background — the processing side of things. When a user inputs something within the user interface, it gets processed in the server, and it usually leads to some sort of change in the output on the application. I wanted to show you under the hood of a really simple Shiny app, just so that you understand a bit more about what's going on within these elements. So let me share this video here.
This is a simple Shiny app where the user inputs a region of interest — America, Europe, Asia — and then, on the right-hand side, you can see a bar plot showing the number of telephones within that region over time. Here you can see the title panel, which corresponds to the title of the app. You can see the sidebar layout and, importantly, you can see the selectInput option here, which generates the dropdown list that the user can select an option from. You can see that the choices for this dropdown list come from the column names of the dataset WorldPhones, which is generally already loaded in RStudio — it comes from the datasets package. Then we've also got a break in the text, and some help text is written as well, just to describe to the user where the data comes from. On the right-hand side, we have the plot output, which is the bar plot showing the number of telephones, and this relates to the server.R code that we have here. So what's happening is that the plotOutput "phonePlot" in the UI corresponds to this here in the server code, which renders a plot using the WorldPhones dataset again, but slices it to only include the column which has been selected in that selectInput option, and then outputs the relevant bar plot. This is a really simple example of how reactivity works in Shiny, where a user input generates a specific output. This application, and many, many more, are available in the RStudio Shiny gallery — there are lots of really simple applications and more complicated applications on there, and most of them have the code available for you to play around with.

So now that you understand a little bit more about what's going on behind a Shiny app: why would you actually want to learn to build one? Is it worth the effort involved in learning how to code in Shiny? Here I wanted to present some examples from our research group and how we've used Shiny in our own work. First of all, if you conduct large systematic reviews and meta-analyses, you'll know that you typically generate quite a lot of data within a review, and sometimes, in a publication, you can't really put all that data in there, and you can't really visualize it all in the way that you might want to. Also, once you come to publish your research, you ideally want to be sharing the data somewhere so that it's accessible for other people to reuse. And rather than just putting it in CSV files in a repository, we also wanted to share it in another way that made it more engaging and accessible for the research community. So this is an example from a review of animal models of neuropathic pain, where researchers induce neuropathic pain in animals and then try to treat it using various treatments, as a way of modeling neuropathic pain. You can see a couple of donut plots here. We've got the number of publications which use different models to induce neuropathic pain — so 52 publications use paclitaxel, which is a chemotherapy drug, to induce neuropathic pain in the animals, and you can see different numbers here. Then we've got the same for treatments, and the same for outcome measures, where you can see the number of papers that use different outcome measures to measure the pain. Then some interactivity is built in here, because you can actually select a certain model of interest.
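Before moving on, and for reference: here is a single-file reconstruction of that telephones app, closely following the standard Shiny tutorial example (the version in the video was split across ui.R and server.R, but the logic is the same):

```r
# Minimal single-file sketch of the telephones app described above,
# modeled on the standard Shiny tutorial example. WorldPhones ships
# with base R's datasets package; its values are in thousands of phones.
library(shiny)

ui <- fluidPage(
  titlePanel("Telephones by region"),
  sidebarLayout(
    sidebarPanel(
      # Dropdown whose choices are the dataset's column names (regions)
      selectInput("region", "Region:", choices = colnames(WorldPhones)),
      hr(),
      helpText("Data from AT&T (1961) The World's Telephones.")
    ),
    mainPanel(
      plotOutput("phonePlot")   # placeholder filled in by the server
    )
  )
)

server <- function(input, output) {
  # Re-runs whenever input$region changes: slice the selected column
  # and draw a bar plot of telephones per year for that region.
  output$phonePlot <- renderPlot({
    barplot(WorldPhones[, input$region] * 1000,
            main = input$region,
            ylab = "Number of telephones",
            xlab = "Year")
  })
}

shinyApp(ui = ui, server = server)
```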
Coming back to the pain app: if I was a researcher interested in paclitaxel models, maybe I would be filtering to look just at that. This is a selectInput box, similar to the one we've seen in the previous Shiny app, and here I've changed the output so it's looking only at the paclitaxel studies: you can see which treatments have been tested in those studies and which outcomes have been measured as well. Another feature I wanted to highlight in this application is that it allows users to download data of interest. You can select a model of interest — let's select paclitaxel again — you can select all the drugs of interest and all the outcomes, and you might want to only look at publications that are more recent, so you could use a slider to filter. Then you can download the data of interest and use it for your own review projects, or whatever you want to use the data for.

Another use case for making a Shiny app is that it can make your evidence synthesis tool accessible — and this is probably pretty relevant for this conference as well. I created a tool in R to deduplicate citations across different databases that were obtained during systematic searching. And because I developed it in R, I realized quite quickly that it was difficult to actually share with people unless they knew R as well; quite often I would end up having to run it for them, which isn't really that productive either. So, to save everyone time, I created a Shiny application to do this for them. I'll show you this now. It's quite a simple application. Basically, users can upload a file from their systematic search, go to the deduplication tab, and click to deduplicate their references. This is one that's already been clicked, as you can see, so the tool has already removed a certain number of references, and there's also an option for manual deduplication here, which you can do in the app. Once you're happy with your unique dataset, you can then download it, and there are a few different options for exactly what you want to download. This is quite a simple app, actually: it's really just got an input, the app does something in the background — it processes the data and deduplicates it — and then, finally, the user can download the resulting dataset. So it's really quite user-friendly.

And finally, as I got more experience with making Shiny apps, we worked with some collaborators to start building what we call systematic online living evidence summaries, which is sort of flipping the idea of systematic reviews a little bit. Essentially, we're creating Shiny apps which house all of the citations for a really wide-ranging field — so, for example, Alzheimer's disease, or in this case, the effects of pesticides on human health, animals, and ecosystems. You take all of that citation data, you can apply some automated tools to it using R as well, and then you can display all that data visually on a user interface in Shiny. And it means that collaborators can then go on and download the data that they want to. So this is another example, and it's really more to highlight what Shiny can do. This is another Shiny app — I won't go through all of it because I don't really have the time — but this is one of the main functions. This is a database here, a data table showing all of the citations. You can actually search this database using keywords and Boolean AND/OR options, a bit like PubMed but a bit more simple.
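Both of the download features just mentioned rest on the same Shiny primitives. Here is a minimal sketch of the filter-and-download pattern — the citations data frame and its Year column are hypothetical placeholders, not names from the actual apps:

```r
# Minimal sketch of the filter-and-download pattern described above:
# a year-range slider plus a download button. The data frame
# "citations" and its "Year" column are hypothetical.
library(shiny)

ui <- fluidPage(
  sliderInput("years", "Publication year:",
              min = 1990, max = 2022, value = c(2015, 2022), sep = ""),
  downloadButton("dl", "Download filtered data")
)

server <- function(input, output) {
  # Reactive subset of the data, driven by the slider.
  filtered <- reactive({
    subset(citations,
           Year >= input$years[1] & Year <= input$years[2])
  })

  # Serves the current subset as a CSV when the button is clicked.
  output$dl <- downloadHandler(
    filename = function() "filtered_citations.csv",
    content = function(file) write.csv(filtered(), file, row.names = FALSE)
  )
}

# shinyApp(ui, server)  # 'citations' must exist in the app environment
```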
And then you can click to filter studies by lots of different options: where do you want the research to come from, and do you want it to be in animals or humans? There are lots of different options there, including what year you want it to be published in. And then you can download the data in different export formats as well. On this page, I just wanted to highlight some of the visualization side of Shiny: this is a map visualization showing where the citations come from across the world, and how many papers were published in different countries. And finally, I also wanted to show this page, because this app was a work in progress: there were actually reviewers working on annotating data in the background as well, and to keep track of lots of different reviewers and how much work they've done, we built this leaderboard functionality, where we can see all the different reviewers and how many publications they've reviewed. So, moving back to this: using Shiny really enabled us to create a resource for our collaborators, and to visualize a lot of data, summarize it, and gain insights from it.

I also wanted to highlight some of the key advantages of using Shiny. First of all, it is geared towards those who have no web development experience: if you already know R, it's probably going to be quite easy for you to pick up Shiny, at least initially. Once you create your first Shiny app, it's really easy to host it online and share it with other people. It's actually free to do this, at least with quite simple apps, on shinyapps.io, but there are lots of other options for sharing it online as well. I also like Shiny because it scales in complexity, from beginner applications using all of the ready-made functions available, like adding buttons and tables and sliders — all those options are built into the Shiny package, and the reactivity at the start is quite intuitive and simple: if you click this input, this is the output that you'll get. Moving on to more intermediate uses of Shiny, you can add on packages for more user interface customisation, and you can make the reactive programming side of it really quite complicated, so there's a lot of engaging content: a user clicks a certain point on a plot, maybe another plot is generated — you can do a lot to make it really engaging. And finally, if you want to produce more advanced, production-ready applications, you can use modularisation, where you take chunks of elements that you quite often use in your applications — for example, maybe you always generate that sort of map of papers per country — and you put that into a module, and then use it in other applications as a chunk of code; it just makes it easier to generate more applications along similar lines as well. And you can also integrate HTML and JavaScript code in there. This is particularly useful if there's some little element that you want to add or change within the website that there's not an R translation for yet.

And finally, I wanted to share some top tips to get started with Shiny. Once you've installed the Shiny package and loaded it from the library, you can get started in about 10 seconds. All you have to do is create a Shiny web app from the New File option and give it a name. Once you do that, you'll be left with a script that already runs a web application, so you can just click Run App straight away and it will produce an app in a new window.
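In case you're curious what that generated script contains: at the time of writing, RStudio's template is a small histogram demo along these lines (reproduced roughly from memory, so treat it as a sketch rather than the exact file):

```r
# A sketch of the default app RStudio generates for a new Shiny web app
# (the "Old Faithful" histogram demo with a slider for the bin count).
library(shiny)

ui <- fluidPage(
  titlePanel("Old Faithful Geyser Data"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("bins", "Number of bins:", min = 1, max = 50, value = 30)
    ),
    mainPanel(
      plotOutput("distPlot")
    )
  )
)

server <- function(input, output) {
  output$distPlot <- renderPlot({
    x    <- faithful[, 2]   # eruption waiting times (built-in dataset)
    bins <- seq(min(x), max(x), length.out = input$bins + 1)
    hist(x, breaks = bins, col = "darkgray", border = "white")
  })
}

shinyApp(ui = ui, server = server)
```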
And then from there, you can modify it and run it again, and modify it and run it again, and sort of get to know what each part of the Shiny app is doing. The Shiny package also comes with 11 examples for you to run, and you can modify those too. I really found that probably the most useful thing when I was learning Shiny was to modify other people's applications first, before starting to write my own, and to test out all the different input and output options and really take the time to understand how reactivity works in Shiny — how do we go from a user input to an output on the application? Once you've mastered those fundamental aspects of Shiny, you can then move on to customizing the layout and appearance, and there are lots of packages which let you do that and really change the appearance of a Shiny app. I also just wanted to recommend the book Mastering Shiny by Hadley Wickham. It's available as a bookdown book online, and it's just a really useful reference guide for everything in Shiny, from really beginner-level applications to more advanced functionality as well. That's the end of my talk. Thank you all for listening, and I'm really happy to take any questions about Shiny. I really hope that I've inspired you to create your own Shiny app and to use it in your research.

Thanks so much, Kaitlyn. That really was a very inspiring talk for someone who's just got into Shiny in the last 12 months — I am really amazed at how professional you can make websites look with Shiny, although that might be debatable if you've seen my work. And I think it's really important to make things accessible to people who, for whatever reason, can't code in R. So I'm afraid we've run out of time there, but our presenters are all available online, ready to answer your questions. So do please engage with them: reply to the Twitter thread with their name in it, or comment on the YouTube video that you're watching now. Thanks very much for tuning in, and that's all for our special sessions of presentations today. We still have our workshop later on, in just under an hour. We'll see you soon. Thank you so much to our presenters, and thanks to you.