I want to introduce our next speaker, Jay Patel, and they'll be talking about mapping and evaluating evidence for application: philosophical, methodological, and interaction design foundations. Can we start now? Good.

Thank you, I'm excited to be here, and simultaneously regretting having the Tex-Mex pasta option for lunch. This research is from the University of Maryland's College of Information Studies, in the Human-Computer Interaction Lab, where my doctoral advisor, Dr. Joel Chan, conducts research on knowledge synthesis and creativity. And fittingly, the National Academy of Sciences' stated mission is providing independent, objective advice to the nation on matters related to science and technology. That's also my mission. What we're doing is exploring ways to map and evaluate scholarly evidence, writ large, for application. And in this talk, we hope to bridge a few islands of scholarship that may otherwise remain unconnected.

Please note that although I will mention some products and services created by startups and various other organizations, I do not have any scholarly, financial, or social conflicts of interest to declare. My intentions are simply to catalog different options thoroughly, analyze them rigorously, and try to redesign them to be more usable.

So let's begin with a puzzle. As a student, and as a scientist and scholar who likes to keep up with science news, health news, and economic news, I'm puzzled by a few different problems, like the disconnect between research and practice in education. Why is it that we have the same debates about reading and math instruction? Why do they recur? Why am I unable to teach more effectively myself, especially given that my own research background is in educational psychology and technology? Why the gap? Why do businesses and academic institutions fail to adapt to the complex and changing nature of the workplace? And on a more personal level, how should I optimize my own work at the psychological, social, and organizational levels?

Now, answers to these questions are usually found in intervention research, though sometimes we look to correlational research when the concern is risk: in and out of my own home, what's risky, and how risky is it? This list is long, but for now I only want to convey the ubiquity of complex decision making. Nearly every outcome that we care about, health-related, work-related, and societal, is available for us to study as researchers, but typically the weight of these decisions passes us by. Every few years, elections can surface this disconnect between the mountain of research that's available to us and our inability to analyze and synthesize it. These are just a few examples that matter to me, and I encourage you to find examples from your own interests and your own work experience.

I like to crudely sort scientific research into categories: research concerning causal explanations, that is, interventions; risk assessment and mitigation, where there are few or no intervention studies; and descriptive research, like biodiversity databases that catalog observations. These are the examples I think about. And in daily life, if you try to synthesize an accurate and up-to-date answer yourself, you'll find you can't. It isn't very feasible, for many reasons, but two of those reasons are critical. The first is the exponential growth in the number of publications.
And in my view, this is about to burst the aging pipelines of the scholarly infrastructure. Currently, about 2 to 3 million papers are published annually, and readability, as we know, is declining according to several measures, partly due to an increase in jargon, but to other variables as well. So even assuming a very impressive reading rate of 2 to 5 papers a day, we simply can't keep up with even the primary research topic of concern to us, much less make the interdisciplinary connections we need to break through.

Our options so far have been largely static, prose-heavy reports. These can be detailed, sometimes credible, and occasionally filled with actionable plans, and they might specify what to do under diverse conditions. But real progress is needed here. So our research explores newer, emerging synthesis modes that are highly detailed, meaning they include plenty of information; credible, meaning rigorous and thoroughly peer reviewed; and actionable, meaning we know how to apply the recommendations to specific contexts, and when, if ever, they will be applicable. I want that for my own work and my own day-to-day living. I want to make research-based decisions wherever I am, whatever I do. And I want decision-makers, policymakers, business leaders, university deans, to have that information at their fingertips too, so they can have a greater positive impact in their organizations. What to do? Well, we're going to review a few creative options today and fuse them into a solution. So this is a roadmap.

OK, so maybe we can look for some inspiration, starting with an example like this one. This is by UNICEF, working in collaboration with the Campbell Collaboration, the systematic review organization. It's a digital interactive visualization called an evidence map, which shows independent variables on the y-axis and dependent variables on the x-axis. In each cell of this grid, a bubble represents a collection of studies of low, medium, or high confidence or quality, shown through the different colors, and the size of each bubble is simply the number of studies. So rather than conducting separate research projects or querying a large text-focused search engine, this at least gives us a visual overview of a larger literature space, which can then be investigated by clicking, zooming, scrolling, and panning through the various research summaries and interventions. Imagine you're the policymaker or the staffer who benefits from this more inviting starting place.
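To make that encoding concrete, here is a minimal sketch of such a bubble-grid evidence map in Python with matplotlib. Everything in it, the interventions, outcomes, study counts, and confidence ratings, is an invented placeholder rather than data from the UNICEF/Campbell map.

```python
# Minimal sketch of an evidence-gap map: interventions on the y-axis,
# outcomes on the x-axis, bubble size = number of studies, bubble color
# = confidence rating. All data below are invented for illustration.
import matplotlib.pyplot as plt

interventions = ["Cash transfers", "School feeding", "Parenting programs"]
outcomes = ["Attendance", "Learning", "Well-being"]

# (intervention index, outcome index, number of studies, confidence)
cells = [
    (0, 0, 12, "high"), (0, 1, 5, "medium"), (1, 0, 8, "low"),
    (1, 2, 3, "medium"), (2, 1, 7, "high"), (2, 2, 2, "low"),
]
confidence_color = {"low": "#d95f02", "medium": "#7570b3", "high": "#1b9e77"}

fig, ax = plt.subplots()
for row, col, n_studies, conf in cells:
    # Bubble area encodes study count; color encodes confidence.
    ax.scatter(col, row, s=n_studies * 60, c=confidence_color[conf], alpha=0.8)
    ax.annotate(str(n_studies), (col, row), ha="center", va="center")

ax.set_xticks(range(len(outcomes)))
ax.set_xticklabels(outcomes)
ax.set_yticks(range(len(interventions)))
ax.set_yticklabels(interventions)
ax.set_xlabel("Outcome (dependent variable)")
ax.set_ylabel("Intervention (independent variable)")
ax.set_title("Evidence-gap map (illustrative data)")
plt.show()
```

The point of the encoding is that each cell answers "how much evidence is there, and how trustworthy is it?" at a glance, before any drill-down into individual summaries.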
Or consider evidence databases like examine.com, which reviews nutrition, fitness, and wellness studies for laypeople, practitioners, doctors, and professionals. The reviews are independent and filled with plain-language summaries as prose. But beyond the prose, there are also database components near the middle that are much more condensed. They're filled with independent variables, like the effect of a supplement; outcome variables; and high-level summaries, like the quality of the evidence, shown as a hierarchical grade, and how much evidence there is, the quantity and effect sizes, in a somewhat coarse way. As researchers, we would not find this terribly rigorous, because a lot of information is hidden, but for laypeople it can be very useful.

If you go ahead and click, you find additional details as you move through the interface, details that concern the context of applicability, like the trial design, number of participants, sex distribution, age range, and so on. The research feed offers a different flavor: a stream of more recent research, analyzed less thoroughly but more rapidly, to help laypeople keep up to speed on matters of concern to them.

Inspiration of a different, more academic sort comes from the US Department of Education's What Works Clearinghouse, just down the road from here, actually a few minutes away. They publish more academic, more technical research syntheses at multiple levels, like practice guides, intervention reports, and reviews of individual studies, along with open data from those syntheses, targeting practitioners, teachers, to infuse evidence-based teaching into their practice. This might look a little more kosher to you. For example, for this RCT of a reading and writing curriculum for high school students, which is very large, you find a number of variables you may care about: outcome measures, comparisons, period, meaning trial length, sample, intervention mean, comparison or control mean, and the significance of the test results, as well as the improvement index, expressed as percentile gains, which is a more concrete form of an effect size (see the sketch below). The evidence tier here is rather like the grade metric from examine.com, just a quality metric, again hierarchical, which is something I'll be critiquing later on. And here's a little example of that for a particular intervention with tier one, or strong, evidence. This is one way to do knowledge translation.
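Here is a minimal sketch of those effect-size translations, assuming normally distributed outcomes. The improvement index follows the What Works Clearinghouse's published definition, 100·Φ(g) − 50 for a standardized mean difference g, and the two related quantities, Cohen's U3 and the probability of superiority, come up again near the end of this talk. The example effect size is invented.

```python
# Minimal sketch: translating a standardized effect size (Cohen's d /
# Hedges' g) into the more concrete quantities mentioned in the talk.
# Assumes normally distributed outcomes with equal variances.
from math import sqrt
from scipy.stats import norm

def cohens_u3(d: float) -> float:
    # Proportion of the comparison group expected to score below the
    # average intervention-group member.
    return norm.cdf(d)

def improvement_index(d: float) -> float:
    # WWC improvement index: Cohen's U3 re-expressed as a percentile-point
    # gain over the comparison-group median (100 * Phi(d) - 50).
    return 100 * cohens_u3(d) - 50

def probability_of_superiority(d: float) -> float:
    # Chance that a randomly chosen intervention-group member outscores a
    # randomly chosen comparison-group member.
    return norm.cdf(d / sqrt(2))

d = 0.25  # an invented, small-to-moderate effect size
print(f"Cohen's U3:                 {cohens_u3(d):.2f}")                    # ~0.60
print(f"Improvement index:          {improvement_index(d):+.1f}")           # ~+9.9
print(f"Probability of superiority: {probability_of_superiority(d):.2f}")   # ~0.57
```

These translations answer a practitioner's question, "how many students would this move past the median?", more directly than a bare d of 0.25 does.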
So in our work, we conceive of different ways of summarizing information, in maps and databases and prose reviews, and we arrive at a construct we call evidence synthesis systems: an ideal of knowledge translation online that uses living information structures, so the information is dynamically updated, explorable, credible, and application oriented. Here are a few humble first steps toward such a system. What we've been doing is simply cataloging examples, about 30 or so evidence synthesis modes that fit our operational definition, primarily from the US and the UK, including the maps and databases I mentioned. We've been analyzing where they come from; the domains they target, social welfare, economics, education, health, conservation ecology, and so on; the way they analyze the research, that is, the critical appraisal models and peer review models underneath; their rigor and clarity; who they target; and whether they're actually used or not. It turns out they're not.

When we conduct our inquiry, which is very conceptual, we take a very interdisciplinary, mixed-epistemology, deliberate approach. We're weaving together different strands of research, insights from philosophy of science, metascience, human-computer interaction, cognitive science, and even data visualization, to form a rich tapestry. These will answer seven key questions in this talk, and they span three categories: conceptual, empirical, and constructive.

And we'll argue that anyone who wants to build knowledge translation systems, evidence-intensive systems, or any tools like that should first answer these questions, which are grounded in research and theory. Conceptual questions, like what is the nature of evidence and evidence-based practice, are answerable thanks to philosophy of science and metascience. What we advise, after conducting our inquiry, is adopting a more pragmatic view that treats scientific activities and their outcomes as tools for solving practical problems, rather than as a search for objective, neutral, knowable truths independent of context. We also advise a shift from evidence-based practice to evidence-informed practice, because not every one of the thousands of decisions practitioners make can be connected to very costly studies; instead, we need heuristics, models, and theories that let practitioners reason approximately. And we adopt realist synthesis, an idea from implementation science, which tightly links contexts, mechanisms, and outcomes to create a summarizing explanation of what works and what doesn't.

To answer the questions of how we appraise research and how we synthesize it, we favor granular checklists and granular processes, much more granular systems of reviewing than we have now, plus triangulation, not just of empirical effects but of diverse levels of explanation: theories, studies, epistemologies, a very broad approach to viewing and synthesizing research.

On the empirical side, we are running ongoing interview and focus group studies of the various evidence synthesis systems I mentioned earlier and showed in the video. We plan to conduct additional think-alouds and user walkthroughs to get an idea of how these systems are actually being used. I think it's important to take a very empirical approach to studying knowledge translation; it turns out there isn't a whole lot of research in this area, which is kind of puzzling.

On the constructive side, how do we build this tool, and what should the user experience and the user interface (UI) be? Here we can bring in insights from data visualization and cognitive science to shape the overall design. The idea of sensemaking versus optimizing is very important in cognitive science: we don't want to present users with a definitive algorithm for computing truth; we want them to get the gist of the information first, in overview, then explore additional details over time, making sense of the information in relation to what's known from other sources. Another helpful idea is visualizing replications and creating a sort of map of theories in a network diagram, which is very exciting to us; we have a few little prototypes that we may show later. As for visualizing uncertainty, we can draw on quite a lot of work in human-computer interaction showing that hypothetical outcome plots, which are animated plots, can be very useful for clearly conveying the uncertainty of effects, and we have all kinds of other psychological evidence besides.
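As an illustration, here is a minimal sketch of a hypothetical outcome plot in Python with matplotlib, assuming normally distributed outcome estimates; the group means and standard deviations are invented. Instead of a static error bar, each animation frame shows one random draw from the estimated outcome distribution, so uncertainty is experienced as frame-to-frame variation.

```python
# Minimal sketch of a hypothetical outcome plot (HOP). Each frame shows
# one sampled outcome per group; the jitter across frames conveys the
# uncertainty that an error bar would otherwise summarize statically.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
groups = ["Control", "Intervention"]
means = np.array([0.0, 0.4])   # invented group means (standardized units)
sds = np.array([1.0, 1.0])     # invented standard errors
draws = rng.normal(means, sds, size=(40, 2))  # 40 hypothetical outcomes

fig, ax = plt.subplots()
bars = ax.bar(groups, draws[0])
ax.set_ylim(-3, 3)
ax.set_ylabel("Outcome (standardized)")
ax.set_title("Hypothetical outcome plot (illustrative)")

def update(i):
    # Redraw the bars at the i-th sampled outcome for each group.
    for bar, height in zip(bars, draws[i]):
        bar.set_height(height)
    return bars

anim = FuncAnimation(fig, update, frames=len(draws), interval=400)
plt.show()
```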
I can't go into too much detail, but there's plenty of meta-scientific and psychological research on expressing effect sizes in particular ways, using Cohen's U3 and the probability of superiority, the same translations computed in the earlier sketch. These are very useful for conveying effect sizes clearly, beyond just reporting a Cohen's d or something like that.

So the broad takeaway from all of this work, what I'm gesturing at, is that we should be building a particular class of evidence synthesis systems in a very deliberate, interdisciplinary way. Beyond that, we can build a pipeline that connects these insights from diverse disciplines and uses them to construct a knowledge translation infrastructure, one that continuously takes the millions of papers, with their millions of insights, and actually makes meaning out of them, continuously, so we don't have to wait for some global pandemic to arise and push out a large volume of research. Synthesis should always be occurring. It should be in our DNA. It should be part of the research cycle you're engaged in right now.

This is a very long-term vision with a few leads to track, and I hope that you'll join me in this difficult and arduous process. But yes: let's normalize deep synthesis, let's infuse interdisciplinarity into the process, and let's create many more evidence synthesis systems, at scale, not just a few per domain but really one for every topic area. This problem could be of interest to both theoreticians and practitioners, so I hope you'll chat with me sometime. As I mentioned, this project is conducted with my advisor, Dr. Joel Chan, and here he is right there, beaming as always. And here I am; you can contact me with any questions, comments, or concerns. Thank you.

Thank you, James.