To make sure that the registration is easy and straightforward, we piloted the form that is actually used to upload the details of your study. And we know that it takes between five and seven minutes, depending on the detail that you give. And we are happy about that, because we believe it encourages people to just go, visit, and upload their records for the work that they're doing. Now, there are two ways to interact with the registry. One of them is as a guest, which allows you to just see the list of records that have been added to the registry, but no more. And to encourage the buildup of the community, we ask people to register with the hub. And when you are part of that, then you have access to every other detail attached to a study or the record of a study. Now, as I say, we piloted the form that we use to register this information, and we believe it gathers the most relevant and most useful information that could be gathered initially to understand what kind of work is happening in this space. So, for instance, we can learn from the registry the areas that people are working on, or the dissemination strategies that people use to communicate this type of work, or where the support is coming from. Not surprisingly, in the studies that we have so far, it mostly comes from funding organizations and programs of continuous improvement, but there are other sources as well. Now, this is a map that shows the community so far; we add orange dots as people register with the hub. And equally, as a guest you are able to see the topics that are included in the discussion forum, but you cannot participate; as soon as you register, you can be part of that, contribute to the discussion forum, add your thoughts, contact people, and so on. And with this, I would like to finish with the message that what we want to do is to bring together the different researchers and organizations working in this space, to provide somewhere where you can actually talk to others, and to make sure that your work, everything that we are doing, all the exciting things that we've heard in these past few days, is visible and that we all know about it. And with this, I'm going to finish and thank you for listening, but I'm now going to play a short video that presents the registry, hoping that you find it as enjoyable to watch as we found it to make. Thank you very much for listening. Thank you, Alejandra. That was a great video, and thank you for sharing about that registry. Next up, I am going to play a recorded video of the presentation from Robert Ross. He is a postdoctoral researcher in the Department of Psychology at Macquarie University, and it's titled What Proportion of Studies Worked? Don't Go to Meta-Analyses for the Answer. Give me one moment while I share this video. What percentage of studies worked in psychology? There have been a few estimates over the years, some of which you've probably seen. They have ranged from 97% to 94% to 91%. Impressively, this 91% estimate for psychology was higher than for any other academic discipline. Psychologists are clearly very clever. However, when I see these estimates, I can't help but think about the many psychology meta-analyses that I've read over the years that seem to suggest that success rates aren't always so high. For example, this is a religious priming meta-analysis by Shariff and colleagues.
Many studies were included in this meta-analysis, and a highly significant overall effect of religious priming was found. Moreover, the authors used robustness tests to come to the conclusion that the results are not an artifact of publication bias or p-hacking. Take a look at the forest plot from the religious priming meta-analysis. In light of the very high percentage of studies in psychology that worked, do you notice anything surprising? According to the forest plot, only 61% of published studies had confidence intervals that did not pass through zero. I have coded these studies in green and will refer to them as studies that worked, and I have coded published studies that didn't work in yellow. This meta-analysis suggests that the religious priming literature has far fewer studies that worked compared to psychology in general. Let's take a closer look at some studies from the forest plot to try to figure out what's going on. Here is a six-study paper. According to the forest plot, studies one, three, four, and six worked, but study two and study five did not. However, if we look at the abstract for the paper, we see that studies two and five actually worked. So according to the forest plot, 75% of the studies worked, but according to the abstract of the original paper, 100% of the studies worked. Let's consider another example. Only study three and study four from this paper were eligible for the meta-analysis. According to the forest plot, neither study worked. However, if we look at the abstract for this paper, we see that both studies worked. So according to the forest plot, 0% of the studies worked, but according to the abstract of the original paper, 100% of the studies worked. I read all the abstracts for the studies included in the forest plot to code the percentage of published studies that worked. What percentage of studies do you guess worked? According to abstracts, 98% of published studies worked. This is even higher than the highest estimates that we've seen for success in psychology. How can we explain the discrepancies between the forest plot and the study abstracts? I can think of at least two possibilities. Number one, results interpreted as trending towards significance in the abstract might not be statistically significant. Number two, effect sizes used in the meta-analysis might not be the same effect sizes as those that worked according to the abstracts. This might have profound implications for tests of bias. Consider funnel plots. They might show less asymmetry when examining studies that worked 61% of the time compared to studies that worked 98% of the time. Consequently, the extent of the bias in the literature might be hidden. And consider p-curves. With p-curves, non-significant effect sizes are removed from the analysis. This means that when 61% of studies are significant, many studies are removed, but when 98% of effect sizes are significant, almost none are removed. Again, the extent of the bias in the literature might be hidden. And in the case of the religious priming meta-analysis, there are other concerns. For example, 0% of the studies were pre-registered, 0% of the studies included raw data, and after searching the literature for replication studies, I found that 0% of pre-registered replications of studies included in this meta-analysis supported the original study. All this leads me to suggest that meta-analyses might sweep troubling issues under the carpet, at least in psychology, if insufficient attention is paid to what proportion of studies worked according to abstracts.
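To make the two calculations in this talk concrete, here is a minimal sketch in Python of coding a study as having "worked" when its 95% confidence interval excludes zero, and of the selection step that a p-curve implies. The effect sizes, standard errors, and function names below are invented for illustration; they are not data from the religious priming meta-analysis.

```python
# A minimal sketch, with made-up numbers: code each study as "worked"
# if its 95% CI excludes zero, then show how a p-curve-style analysis
# shrinks its input set when the significance rate is low.
import numpy as np

def worked(effect, se, z=1.96):
    """True if the 95% confidence interval excludes zero."""
    lower, upper = effect - z * se, effect + z * se
    return lower > 0 or upper < 0

# Hypothetical effect sizes and standard errors, not from Shariff et al.
effects = np.array([0.35, 0.10, 0.42, -0.05, 0.28, 0.15])
ses     = np.array([0.10, 0.12, 0.15, 0.20, 0.09, 0.14])

flags = [worked(d, s) for d, s in zip(effects, ses)]
print(f"{100 * np.mean(flags):.0f}% of studies worked by the CI criterion")

# p-curve analyses discard non-significant results, so few studies are
# dropped when 98% are significant but many are dropped at 61%.
print(f"a p-curve would retain {sum(flags)} of {len(effects)} effect sizes")
```

The same CI rule applied to abstract claims rather than meta-analytic effect sizes is what produces the 61% versus 98% gap the talk describes.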
According to a meta-analysis, 61% of studies worked, but according to abstracts of studies included in this meta-analysis, 98% of studies worked. This leads me to a research plan to try to examine the discrepancy between meta-analyses and abstracts. I'd like to develop a coding scheme comparing the two and code papers from this meta-analysis. And I'd also like to code papers from other meta-analyses to see if this is a general issue. This is quite a large project, so I'm looking for collaborators who might be particularly interested in this topic. If you're interested, you can email me here. Thanks for listening. Thank you so much, Robert. And I'll put a link to his Gmail for everyone to see right there, robross46 at gmail.com, if you'd like to reach out. And just as a reminder, the Remo channel is available for networking; it's open right now. Next up, David Lang, the Executive Director of the Experiment Foundation, will be presenting on Science Angels. David, are you ready to take it away? Yeah, I am. All right, cool. So can I share my screen? I should be able to, right? All right, cool. Can you guys see this? Can you see slides? Yes. All right, cool. Well, I want to start off by just offering a disclaimer: I am not a scientist, and I wasn't really trained as a scientist. I kind of took a backwards, side-door route into research, so I want to start off with that disclaimer. I'll tell you a short story of how I got into this. My friend and I were building a really low-cost underwater robot, an ROV, a remotely operated vehicle, in his garage, because we were trying to get to the bottom of this underwater cave. And we didn't know what we were doing. So we created this website called openROV.com, and we started sharing our designs and inviting people to help. And we got started. We got a very small grant from an ocean technology foundation to get this off the ground, you know, less than $10,000. And then we launched a Kickstarter and started this company and ended up sending these kits all over the world. It was like a citizen science project that got all these people asking questions. The design evolved; it eventually became this whole industry. So these tools that had once cost, you know, $50,000 or $100,000 were all of a sudden less than $1,000. And it all came out of the tinkering we were doing in our garage. And throughout the course of that experience, which took place over the past decade, I interacted with a lot of scientists, and I actually got involved in quite a few projects as a citizen scientist, as a contributor. And we got invited to all these ocean conferences all around the world to present. And some things really stuck out to me. I think, as a non-scientist being invited into that world, it was very clear to me that there were some major blind spots in academic science. There's a real focus on publishing papers, and the blind spots that I saw were around tools, the importance of tools and how to improve them; science communication, how to engage people, not just tell them the facts; and also some real inefficiencies around how funding was allocated. And so for the past year, I've been thinking about and testing actual ideas that I think could improve science, make science better. And I think improving the funding dynamics is a really interesting area to push on. I know there's a lot of metascience that studies science funding.
And I've spoken at the science of science funding conferences and talked to the researchers there. But there are very few experiments that actually try new things. So I teamed up with my friends Cindy and Denny. They started this company called experiment.com, and you can go to experiment.com; it's like Kickstarter for science. A thousand projects have raised money on the site. It's really a fantastic tool for crowdfunding small research projects. And I started the Experiment Foundation, and we've gotten grants from foundations to fund projects in different ways. And we've tried a bunch of really interesting stuff, like quadratic funding. We've explored: can we use NFTs for science? Can we incentivize people in new ways? But the most interesting idea was this idea of science angels. And the idea really came from the metascience conference two years ago, where Carl Bergstrom asked Paula Stephan, hey, we're here at Stanford, we're here in Silicon Valley; what can we learn from angel investors about science funding, about improving science funding? And she said, that's a really good question. And I had been in Silicon Valley, I've raised money, and I kind of decided to overlay, well, okay, here's what the financial world does, how they think about risk, how they think about amounts and budget levels. And it seems to me that science is really missing this angel investor, this person who can come in and write a quick check really fast and say, hey, this is a weird idea, but it's interesting, and I hope you keep going and follow it. It's kind of betting on people really early. Science doesn't really have that. The NSF kind of thinks that they do that, but they don't, and there's still real risk aversion there. Some professors have budgets that they can use to send grad students off on different ideas, but that's not ubiquitous, and it's not really formalized. So we've proposed this idea called science angels, where we're giving budgets, you know, $50,000 or $100,000 budgets, to scientists and saying you can fund whatever projects you want on Experiment: fund your friends, fund projects you like, fund projects you think are crazy. You're the ones in charge; you have the capacity to bet early on people. We just launched this this week. This is one we're doing for ocean solutions: I have a $50,000 budget that I'm putting towards ocean technology and research projects. And we can move fast. We started this on Tuesday, and we'll have money going out the door this week, right? So this is a fast, small grant program, a fast grants program. And we're going to be announcing between five and ten more science angels by the end of the year. So we'll have a million dollars of small, fast grant money going out to researchers to pursue ideas that matter a lot to them, not just the ideas that their professor or their PI is telling them to do. We want researchers, especially early career researchers, to go off in new and novel and interesting directions. And so that's what we're doing. I've been interviewing scientists and documenting all this work at Science Better's website, sciencebetter.com. And I'm happy to talk more about this; my email is david at experiment.com. And this is an experiment we're just getting started with, so I hope you'll follow along and get involved if you think it's interesting. Thank you, David, for sharing that.
Next up is Theodore Kaku, an undergraduate student at MIT double majoring in computation and cognition, and linguistics and philosophy. Theodore will be presenting on how linguistics papers talk about non-English languages. Theodore, I'm going to give the controls to you. Yes. Hello, everyone. My name is Theodore Kaku. I'm an MIT student, and I work with Kyle Mahowald from UT Austin to investigate how linguistics papers talk about non-English languages. So linguistics is the study of language, reaching from how words and sentences are formed to what happens in our brains when we use language, or how to make AI that can produce and understand language. The goal of linguistics, which can be applied to all these subfields, is to make generalizations about how human language works. What makes linguistics challenging is the vastness of the space of possible languages to study and their variation. To illustrate, here's a map of one specific syntactic variable in linguistics, word order, where each of these points is a language, colored by its value for that variable. And typically, these distributions look like this, with concentrations of languages with the same value within one area. And ideally, to make sure that we make accurate generalizations about language, the languages that we study should be sampled relatively uniformly from this distribution. However, within the language research community, there has been a lot of discussion about an English-centric or, more generally, European-language-centric bias, which recently gained popular attention on Twitter via the hashtag #BenderRule. This rule bears the name of its creator, Emily Bender, who is a linguist and computational linguist at the University of Washington, and she has been advocating for this cause since as early as 2011. The rule states that when you publish a paper in language research, it is important to acknowledge the name of the language being studied, even if that language is English. Because by neglecting to do that, the work that you are doing can be falsely interpreted as language-independent, when in fact it might be specific to the properties of the language being studied, which is often English. And to illustrate this very fact, as early as 10 years ago, researchers took a look at about 4,000 papers of research done in psycholinguistics, and they found that English is easily the most mentioned language in those papers; it's mentioned in about 30% of them, followed by other Western European languages, Mandarin, and Japanese, ranging around 10%. And even then, the languages that appear frequently in this corpus are those already familiar to the general research population, to say nothing of languages that are relatively unknown or less widely spoken. But before showing you the data that we collected, let's try to build some intuition. These are examples of a paper whose focus is on Warlpiri, which is an Australian language with a relatively small number of speakers, about 3,000, next to a paper which focuses on Germanic languages, including English, with many millions of speakers. So let's compare two different conclusion sentences, one from each. From the Warlpiri paper: the structure of Light Warlpiri overall is that of a mixed language, in that most verbs and some verbal morphology are drawn from English and Kriol, and most nominal morphology is from Warlpiri.
And from the Germanic paper: our results challenge the hypothesis of the encapsulation of rules of inflection, supporting instead research in which sensitivity to probability is recognized as intrinsic to human language. So even here, we can see that the sentence about Germanic languages is drawing a generic conclusion about how human language works, while the one about Warlpiri has a much narrower focus and more circumscribed implications, and even uses English as a reference point. To emphasize: we claim that when an article is about English, the language being studied is less likely to be mentioned, and the results are more likely to be framed generically. And when an article is about a less widely spoken language, that language will be mentioned more often, and the results will be framed more specifically. And the quantitative test that we decided to do is the following. We examine the frequency of language mentions in journal articles as a function of the language; so given that an article is about, let's say, English, how often does the word English appear in the article? Our data comes from 658 papers from the journal Language. To identify the focus of these papers, we take the most frequently mentioned language in each of them, and for this language, we measure how frequently it is mentioned within the paper. And we found that English is, again, by far the most mentioned language in the journal. In fact, it's the most frequently mentioned language in over half of the articles, with the next closest one being French, at only about 3%. And strikingly, when English is the most mentioned language in an article, it tends to be mentioned significantly less often than when another language is the most mentioned one, which creates this clear separation between English and all other languages and, we found, illustrates the bias that we were talking about. And some of the current research questions that we are interested in are the following. Are there systematic differences in how linguistics papers discuss the language of study when the language of study is English, as opposed to when it is not? And can we build a model that can detect that? And as for generic sentences, which are sentences that make generalizations, we are wondering if they're more common when we talk about English, and how they differ when we talk about English versus when we talk about other languages. And we hope that the work that we're doing can improve the generalizability of linguistic research and help push for a new era of metascience in the linguistics community. And that would be all. Thank you for listening. And if you have any questions or want to talk about this topic more, this is my contact information.
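As a concrete reading of the measure just described, here is a minimal sketch in Python of operationalizing an article's focus as its most frequently mentioned language and counting how often that language is named. The language list, file name, and whole-word matching rule are assumptions for illustration; the actual corpus of 658 Language articles and the authors' exact counting procedure are not reproduced here.

```python
# A minimal sketch of the mention-frequency measure: find each article's
# most-mentioned language, then record how often it is named.
# The language list and file name are illustrative placeholders.
import re
from collections import Counter

LANGUAGES = ["English", "French", "German", "Mandarin", "Japanese", "Warlpiri"]

def language_counts(text):
    """Case-insensitive whole-word counts for each language name."""
    return Counter(
        {lang: len(re.findall(rf"\b{lang}\b", text, re.IGNORECASE))
         for lang in LANGUAGES}
    )

def focus_language(text):
    """Operationalize the article's focus as its most-mentioned language."""
    lang, n = language_counts(text).most_common(1)[0]
    return lang, n

with open("article.txt") as f:  # one article from the journal, as plain text
    lang, n = focus_language(f.read())
print(f"focus language: {lang}, mentioned {n} times")
```

Run over a whole corpus, comparing the per-article mention counts for English-focused versus other-focused articles would reproduce the kind of separation the talk reports.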
Thank you very much for that. Up next is David Mehler, co-founder of a neurotech consultancy and postdoctoral researcher at the Institute for Translational Psychiatry at the University of Münster. David will be presenting on Many Gaps to Fill: How Can We Boost the Use of Open Science Research Practices in Technology Development? David, if you're ready, take it away. Thanks very much. I'm going to switch quickly. Okay, I hope you can see the slides. Yeah, thank you. Yeah, thanks very much. I'm happy to present to you today some concepts and ideas that are still to be implemented, the work that needs to be done, rather than things that have already been done. Many gaps to fill: how can we boost the use of open science research practices? I did a PhD on neurofeedback training and its clinical applications, and during that work I came into contact with meta-research, and at some point we decided to make that type of knowledge available also to technology, to industry. We started a startup, and just for disclosure, besides the startup I'm also still working academically and recently became a junior PI in Aachen. Maybe just to give a broad perspective on the landscape: there are more than 200 companies around. It's a big market; more than $10 billion is the estimated market size for neurotech at the moment, with annual growth of 10 to 15 percent, and that includes consumer and medical neurotech products. So that covers particular technologies like neurofeedback training, which I've been working with, and brain-computer interfaces, but also brain stimulation devices, for instance. And as neurotech consumers, developers, and also investors, what we want are trustworthy products, robust evidence, predictable outcomes, sustainable business models for growth, which is particularly important for young startups, and resilient strategies for the product development process, which can be quite long and winding. So in short, what we need are good decisions, and they are based on solid evidence. However, if we look at the current situation, concerns have been raised that there's a big gap between the way medical and consumer neurotech technologies are being developed and the way they are advertised. Language in marketing claims is misleading and often can't be backed up by evidence, for instance, and that also has ethical and regulatory implications. And lastly, if we look at the history of unicorns, there is a lack of peer-reviewed evidence for a lot of companies that have made a lot of revenue; the evidence base for the products being sold is often scarce. And so the vision of our startup is essentially to build three pillars and bring together three key stakeholders. It starts with the users, the people who are going to buy the products. At the moment, we are starting market research where we essentially ask neurotech enthusiasts and early adopters how they view evidence and what role it plays in their minds. This is an exemplary question that we ask, where we contrast two companies that fulfill different robust and open research practices, for instance study pre-registration, fulfilled in this example by company B but not company A, and so on. And we essentially ask whether future consumers would tend to buy a product from company A versus company B, and how much that would be worth to them, so how much they would be happy to spend on top of a certain price if certain evidence criteria were fulfilled. The second pillar is that we bring in the perspective of startups. So we conduct interviews with startups, B2B interviews, and essentially try to understand the pains in their research and development process, their needs, and how they approach and think about evidence, because we are dealing with quite a heterogeneous group of startups. Some of them started in an academic setting, others did not. So it's really important to understand their perspective and to offer education on robust and open science research practices in the form of workshops.
One aspect that we highlight there is that there's increasing demand for regulatory oversight in this field, and by starting to implement robust and open research practices now, startups in the neurotech field will be ahead of the curve. Lastly, the third pillar involves initiatives and regulations. One particular initiative that we would like to pitch is that the Open Science Framework could broaden its scope and reach by including a registry that allows for public-private partnerships, so that, for example, neurotech startups could register studies and have a space there. And the second idea that we are going to pitch is reintroducing labels, things like badges that are familiar to the open science community and that were already used earlier in the nutrition industry, bringing them into neurotech and into tech development in general by introducing labels that certify a certain standard of evidence base and of open and robust research practices in the R&D process. So lastly, just to give you a quick overview of the vision that we have for a neurotech evidence ecosystem: currently, hardware validation and initial training validation happen mostly within startups and sometimes also in collaboration with academic partners. By using open and robust research practices, more incentives are created and more trust is built, and that allows more public-private partnerships to form; more independent academic researchers are also incentivized to collaborate and test available technology on its merits, which would eventually lead to more peer-reviewed scientific papers and independent publications as well. That's good for building trust with consumers and funders, and lastly, it also creates the need for certification, as I mentioned earlier. So certifiers would be the third stakeholder that is needed in order to provide a comprehensive ecosystem for evidence building. And with that, I'd like to close. If you're interested in this topic and would like to learn more about it, you're welcome to follow us on Twitter, where we are going to report updates on this emerging market. And if you're interested, you can also get in touch with us; we're happy to chat with you. Thanks very much. Thank you so much, David. Next up is Jason Williams, Assistant Director, Inclusion and Research Readiness at Cold Spring Harbor Laboratory. Jason will be presenting on Interdisciplinarity and Inclusion through Career-Spanning Learning. Jason, ready to take it away? All right. Thank you very much. I am going to go ahead and try to share the slides, and let's just see if my computer cooperates. Okay, I'm trying it the recommended way, and we'll see if that works. Hold on one second; if not, I'll share it the other way. Yeah, it's thinking. If not, I'll share it the old-fashioned way. I'll give it five more seconds so we don't fall too far behind. All right, I'll do it the old-fashioned way; that's taking a little bit too long. Okay. All right, so I want to thank everyone for the opportunity to present, and let's go. Did that just stop sharing? Okay, I see that. Sorry. I guess I get to be the one with the technical difficulties. Let's go back to that. Okay, that's working. All right, so thanks for the opportunity to present. I think I won't take the full eight minutes, so we'll get back on track.
I'm just going to use my time at the mic, so to speak, to pitch something that I work on, not as a full-time activity, but something that I think is nonetheless valuable, which is the idea of career-spanning learning and improving interdisciplinary training. The problem this starts from, for me as an educator, which is primarily what I'm doing right now at the laboratory, although I come to it mostly as a molecular biologist, is that no matter what type of training we try to prepare ourselves or our students with, it's going to get really, really difficult as science becomes more complex, as there is more interdisciplinarity, and as fields become more specialized. And so, having spent a lot of time invested in improving undergraduate curricula to help faculty bring newer technologies and techniques into science, it's a losing battle in the sense that the shelf life of our skills is always going to get shorter as we get more advanced. This is from a study that we did a few years back, and it's a problem within the sciences, in my opinion, because we spend a lot of money on infrastructure. In this case, this is looking at the needs of computational biologists, and it's published. We spend a lot of time asking, how many computers do we need to buy people? Do we need new computers? Do we need more storage space? But when you actually ask people what they need most, it turns out that the top three of these unmet needs are not cloud computing or being able to share their data with colleagues; it really is training on how to use these things effectively. And so I think, overall, we spend a lot of time paying attention to physical infrastructure and not to human infrastructure. It's rather easy to measure the former, how many CPUs I have, how many terabytes of data I've moved from one server to another, but the human outcomes are sometimes not as well measured. And it turns out that we spend a lot of money on this. This is from another paper, back in 2017 I believe, showing that we spend lots and lots of money developing workshops, especially to develop computational skills for graduate students, which is my field as well. And what this paper found, much in agreement with what I think educational psychology will tell you, is that a lot of that training is actually fairly ineffective in preparing PhD students. They do get there somehow, to a large extent, but my question was, isn't there a better way to do that? And I also think that this is a problem that starts in many different ways. This is a brief look at different institution types, from some research we did on the barriers faculty face in integrating new technologies, particularly computational technologies and approaches, into the classroom. And what you see in this circle out here is that two-year colleges and minority-serving institutions (two-year colleges, by the way, are where almost half of STEM graduates start before transferring into a four-year institution) have a lot more difficulty; those institution types and contexts look a lot more different from the other ones. So not providing training to our faculty members and to our students actually amplifies disparities. You get two people with a PhD, theoretically in the same field, but as we know, that doesn't mean they have the same level of preparation or the same level of access to technologies as everyone else. And therefore they're going to be disadvantaged in many ways when going after the high-impact grants, which only reward novelty.
And if you're not able to perform with the latest and greatest technologies in your approach, you fall yet further behind. So what can we do? Well, this is my little stone thrown into the ocean. This is a community of practice that I've developed and that is ongoing, and that is just the beginning, in my opinion, called lifescitrainers.org, which everyone is welcome to take a look at or join. What we're trying to create is a global community of practice for short-format training in the life sciences. I've literally traveled the world teaching workshops and interacting with other people who teach workshops that are aimed at improving the way that life science is done. This started as a Slack channel in 2018, and we currently have more than 400 members in 20 countries, with lots of messages exchanged among communities that are working on the same problem: how do I most effectively bring the latest technologies and approaches to other life science professionals? We're all working on the same thing, but we usually have our heads down, trying to solve it for our one problem or community, and the idea here was to bring us all together. And so we do monthly community calls and shared challenges, and we're working towards other types of internal training and professional development. We have also planned a meeting for the latter half of 2021 to help solidify those challenges and develop more of a global call to action announcing the goals of our community. But overall, these are our goals: to foster a community of practice across institutions, organizations, and continents; to promote the people who do this training by giving them a voice; and also to develop standards. As a person who wants to learn something, when you show up to a workshop where somebody claims they can teach you something, you often have very little idea whether they can, because the workshop is not part of a university curriculum where it's theoretically assessed and there's a certain assurance that the person is qualified to teach. Oftentimes, what we get is people who are experts in the topic but not experts in the pedagogy, and so you can leave a workshop actually worse off than when you arrived. And overall, the ultimate goal is to accelerate science by promoting inclusive, interdisciplinary, career-spanning learning, and really normalize the idea that we're going to have to keep training and reinventing ourselves for our entire careers if we want to be truly interdisciplinary. I wrote this up back in 2018, and it was actually selected by NSF as one of their next set of big ideas. I hope to see more activities and things coming around it, but it's a cool thing, and I'm glad to share it and maybe raise awareness among people who might want to interact. So feel free to look me up on the website and/or contact me. I'm also on Twitter, and my email address and all those things I can put in the chat. But thanks for your attention. Jason, thank you for that. Now I'm thinking back to old workshops of yours and wondering which ones, if any, I've left worse off than before. There is certainly a wide spectrum. So thank you for sharing that community of practice. Last up on the program, we have Ricky Jeffrey, assistant professor in education at the University of Nottingham in Ningbo, China. Ricky will be presenting on Use Less Language; Use More Figures, Tables, Color, Highlighting, and Multimedia.
Ricky, I'm going to give the controls to you, and you should be able to present. Ricky, are you there? I'm there. Great. All right, here we go. So my field is language education, both as a researcher and a practitioner. I was very interested to hear Theodore's presentation earlier about the overemphasis on the English language in linguistics, and also the talk just now, which felt more weighted towards an educational practitioner talk. My talk is very heavy on opinion and theory and very light on statistical empirical evidence. The central recommendation is quite obvious in the title, and it may feel like something that's already done in your field; it's done across STEM. If it is, then I think that's great. But I think it's good to explicitly consider why we do it, when we do it, and how it helps with the general scientific mission. And then there's this last bullet point on the slide: actually, in many parts of the academy, and I've seen it as a language teacher and I see it as a researcher in applied linguistics, I see people struggling with thousands and thousands of... what do I... I'll just carry on. I see people struggling with thousands and thousands of English words when the content could be cut down and communicated more effectively using tables, multimedia, and so on. So I think a lot of the open science reforms that I'm familiar with have focused on the message of science, the core scientific knowledge: trying to improve how scientific knowledge is created so that the conclusions, the findings, are more robust, and also the sharing of it, so that scientific knowledge is more transparent and accessible for people. So on the left, you've got science practice, the evidence and existing claims leading to new claims. But the arrows, if you look on the left of this slide, thin black arrows, things are rather flimsy; the inferences feel rather weak. So with these thicker blue arrows towards the middle of the slide, we're doing things like pre-registration, improved statistical inference, more replications, and so on, so that we can have more confidence in the reproducibility of findings. And of course, we try to improve the transparency as well. So that thick black box that hides a lot of science, we try to make that much more porous, much more open: we're sharing the data sets, open code, open peer review, preprints, and so on. So this is most of the stuff I see about open science reforms, metascience reforms. But I still think there's a problem, or rather there's one problem that isn't considered so explicitly, in my experience. The medium that this scientific message is communicated in is the same in all of these cases: it's primarily paragraphs of English, paragraphs of English language. And for all of us here presenting at this conference, maybe it doesn't feel like so much of a barrier, but for the majority of the world, learning English is really a significant barrier. Each of us here has learned English, and we're confident enough that we can participate in this conference, at least. So I think the communication of science deserves consideration. Yeah, I'll just move out of the way. Whereas if we don't pay attention to the barriers that are inherent in language, in natural language, and usually it is English, then things can be obscured; if we do pay attention, I think there are some steps we can take to reduce the noise and help the signal come through more clearly. So, the why: why is natural language a barrier for science?
Well, there's a screenshot on the right of a research article written in Chinese. I don't know if anybody here reads Chinese. I do read it, but this is very, very tough for me to deal with, very, very slow. So just as this may seem to us, the English-language world will seem to the majority of scientists, and certainly to the next generation of scientists across the world, who will not be growing up in areas where English is easy to learn. If, for example, you're growing up here in China, then the linguistic distance between Mandarin and English is really quite high, so it takes many, many years to learn. Fred Oswald has raised his hand; David, I'll leave it to you to let me know if we should be taking questions. It's difficult to create, and it's also difficult to consume. As a reader, even if your English proficiency is high, you've learned English and everything, it's just slower to read these pages and pages, these paragraphs of English. And then the final bullet point: there's ambiguity in natural language. I'm not going to provide examples of that today, but I think we can all think of cases of ambiguity in language, and this has been discussed for centuries. So, quotes from Bacon and Leibniz and the Vienna Circle 100 years ago. It is knowledge itself that is beautiful, more beautiful than any apparel of words that can be put upon it: the scientific knowledge is there, but it's covered by the medium, which is usually natural language. Leibniz wanted science to be as natural and easy to use as possible. And the Vienna Circle wanted to free scientific communication from the slag of historical languages. And the thing is that we have the information technology that Bacon, Leibniz, and the others didn't have, so things are possibly a bit easier for us. I'd better get to the practical parts. I gave a presentation at a conference earlier this year, there's the link, where I talked more broadly about different linguistic measures, ways to deal with the barriers in natural language. Today I'll talk specifically about this final one, more multimedia. I'll just jump through some quick examples. Basically, if you're using multimedia, so you're not just relying on English, not just relying on sentences of declarative English in paragraphs, then it can be more efficient to create for many people, and it's certainly more efficient to consume. And I've mentioned Mayer's book on multimedia learning, with the strong empirical evidence showing that words and pictures together are much faster, much more effective for learning than words alone. And there's less ambiguity. We'll finish with a couple of examples. On the left, we've got three sentences, and you can read them: pesticide exposure increases this, and so on. You read those three sentences on the left quite slowly, but if you look at the DAG on the right, it's exactly the same information, taken in much more quickly. This is from a source where the information is identical; there's no extra information in the paragraph that we want to communicate, but the figure on the right is much faster. Similarly, on the left, this is from some qualitative research: we've got paragraphs, we've got linking words, we've got discourse structures and relative clauses and so on. We can turn it into a table, so it doesn't need to be quantitative data. You are expressing those relations, the relations that you would express on the left with things like "however", "similarly", "on the other hand", "the former", "the latter", "further"; you just delete all of that, and you have this geometric arrangement on the page. Another example as well: it would take maybe a page of text to communicate this table. And color as well: this simple table on the left, it just takes you a few seconds to look at it and say, oh, okay, this was the majority employment status, this was the majority marital status. Color is simple in Microsoft Excel, and you immediately see that this group is primarily married or partnered, there are lots of people with high school or further education, and the vast majority of them are employed (a small code sketch of this kind of highlighting follows below).
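As a minimal illustration of the table-plus-color idea, here is a sketch in Python with pandas, a programmatic alternative to the Excel workflow Ricky describes. The demographic counts, the shading threshold, and the output file name are all invented for illustration.

```python
# A minimal sketch: build a small summary table and shade the dominant
# cells so the pattern is visible at a glance. All numbers are invented.
import pandas as pd

data = pd.DataFrame(
    {"Married/partnered": [34, 6], "Single": [8, 12]},
    index=["Employed", "Unemployed"],
)

def shade_large(value, threshold=20):
    """Highlight cells at or above the (arbitrary) threshold."""
    return "background-color: #c6efce" if value >= threshold else ""

# Styler.applymap applies the CSS rule cell by cell.
styled = data.style.applymap(shade_large)
styled.to_html("summary_table.html")  # open in a browser to see the shading
```

The point carries over from the talk: the reader scans the shaded cells in seconds instead of parsing a paragraph of prose.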
So the point of using multimedia, of getting away from the reliance on natural language, is to get a representation that better matches the nature of the information. Just as Leibniz says, it's more intuitive, and we can focus a bit more on the scientific message rather than wasting time creating and consuming the medium. This is obviously relevant to those of us who are teaching the researchers of the future. I've worked a lot in academic English training for years and years, and I see people really hammering the vocabulary and the grammar and the pronunciation and all of these things, but I haven't really seen any textbook saying, well, how about drawing tables and figures, and why these are so crucial. Obviously, if we're going to use color, then we need to keep accessibility needs in mind; APA has some good information about that. I think that's about eight minutes. Oh, it's ten minutes, apologies, so I'll finish there. My email address, Twitter, and so on are all there. Thank you. Thank you so much for that. I've just put in the chat the link for Q&A and discussion in regards to the lightning talks. Thank you very much to all our lightning talk presenters. Over the next minute or so, I will be demoting you, with apologies, so no hard feelings. And the next session will be starting in about three minutes. Thank you, everyone. Thank you.