So I'm going to ask, to start us off, Priscilla Van Even, if she could take the stage, share any slides she has, and share with us her talk on open science. Hello, and thank you for the introduction. I'm Priscilla and I'm a researcher at KU Leuven, where I'm currently doing a PhD in social sciences, and I'm here now to present my research on open science. Is my presentation shared right now? Yes, yes, that is working fine. Okay, perfect. Thank you. So I'm here to go a bit deeper into a discourse inquiry into open science that I did with my colleague Christian, and I will shortly present some highlights of it. Open science is an umbrella term that touches upon a plurality of concepts such as citizen science, crowdfunding, and open access. Scientific values such as reproducibility, inclusivity, transparency, and collaboration are at its core. The movement hints at a new science paradigm and the promise of better scientific practices. However, the notion of open science itself is still surrounded by ambiguity, and it is not clear what it precisely entails. It can mean and refer to just about anything, from reproducibility to co-creation with citizens. By means of a bibliometric analysis, we wanted to take a reflexive approach to the open science discourse in academia. The analysis is based on Web of Science-accessible journal articles published up to the end of 2020, with a mix of qualitative and quantitative methods. The analysis provides insights on the evolution and geography of the research, as well as on its structure, content, keywords, adhering terminology, and possible directions, to give us hints on what open science research in academia is all about. Let me share a few highlights of our study in this lightning talk. Our first inquiry: who is defining and publishing open science?
We looked at the authors' geographic distribution of authorship and affiliations, and found that only a handful of European countries and the US account for almost 90% of global open science research. These findings show how centralized open science research still is, not even equally divided across the Western world. Needless to say, the open science narrative is based on a hegemonic Western knowledge regime, even though the open science movement sees itself as part of the decolonization movement. We will have to see whether, in the future, smaller countries will be able to catch up and bring new perspectives into the discourse, or whether these countries will have to adopt the hegemonic narrative in order to join and be able to participate. Next, we look at what open science is and examine its meta-narrative. When you look at open science from its earlier stages to today, you see the concept has become larger, but much less clear. When analyzing the different articles, it became clear that the open science vocabulary has many buzzword characteristics. Namely, there is an absence of a proper or real definition, yet there is a very strong belief in what the notion will bring about. Furthermore, its contours are blurred; the vocabulary denotes everything and nothing. Buzzwords also carry multiple meanings depending on who uses them, where, and with what purpose, and consequently on the intention of the user. This implies that open science can be used or abused as one wishes and adjusted to one's agenda; it can also be used as a catchy title, or even feed into confirmation bias. The concept thus has a very strong semantic force, but is also very fragile at the same time. For example, if you look at Elsevier's definition of open science, you see words such as inclusive, transparent, collaborative, empowering, and so on: words that are very popular, fashionable, and frequently used in the open science discourse.
And although they might seem innocent, they are dangerous and problematic, because they are seductive and intended to invite automatic approval. How can you disagree with something that sounds so good, yet has no concrete meaning? Buzzwords are compelling because they have vague and euphemistic qualities and a feel-good character; because they contain a multitude of possible meanings, they are vague and ambiguous; and because they have a normative resonance with values that are considered universal and good. They can be like the facade of a movie set that has no content behind it and into which we can put anything we want, or they can be like a facade that decorates a completely different kind of construction. In this way, buzzwords actually hide the ideological constructions they stand for. Open science should therefore be approached as a particular cast of mind, as a perception which models reality. This approach is important because we encounter issues of power, discourse, and representation in the open science discourse. Open science should be examined as a historically and culturally specific form of rationality, inseparable from the Western-based knowledge regime and its practices of power. And this Western-based knowledge regime and its power should be considered in the open science debate, since it is the dominant narrative that defines, formulates, and projects worldwide, while there are actually different kinds of knowledges in the world. So, to conclude, the language of open science defines worlds in the making; it shapes our imagined worlds, but policies and real-life interventions are also based on, and justified upon, these shaped worlds, for example to obtain funding. We have to ask ourselves who gets to construct the open science reality, and be aware that if words make worlds, struggles over meaning are not just about semantics: they gain a very real material dimension.
With this lightning talk, we want to set the stage for a reflexive approach and inquiry into the evolution within the field of open science, which still copes with ambiguities. With our research, we want to launch a call to guard the open science concept better. Thank you. Thank you very much, Priscilla, that was a really fascinating talk. And so, moving ever onwards, I will now welcome to the stage Jürgen Schneider. Yeah, thank you very much. I suppose you see my slides, and you can hear me well. Alright, hi everyone. My name is Jürgen Schneider. I'm from the University of Tübingen in Germany, and I am excited to talk about open science badges and their effects on trust in scientists. Why are we even talking about open science badges and trust? Well, recent studies suggest that there's a detrimental effect of the replication crisis on the perceived trustworthiness of scientific disciplines. A primary reaction to this was the call for scientists to increase the transparency and reproducibility of the entire research process, for example by engaging in so-called open science practices. A number of academic journals have adopted open science badges, which allow readers to quickly determine whether a study has actually implemented open science practices, and there are first indications of their effectiveness in fostering the implementation of open science practices. Besides that, however, there is little research about the effects of open science badges at an individual level, such as the issue of trust that I mentioned earlier. We therefore conducted experimental studies in which we investigated whether open science badges affect trust in scientists. In our studies, the participants received the title page of a fictitious empirical journal article, so they just got the title page, something like you see here. And on the journal article title pages, there were either grayed-out badges, and I'm going to just quickly zoom in here.
There were either grayed-out badges that signal non-adherence to open science practices, so open materials were not available, or open data were not available, and so on. The participants were also aware that the explanations in gray on the right-hand side were just added by us and not part of the journal article. In another condition, the participants received a journal article with colored badges, indicating that the authors implemented open science practices, or the participants received a journal article without badges, which constituted the control condition. We conducted the study with the same design in three different samples: with undergraduates, with social scientists, and with a general public sample, with one difference: for the general public, we used so-called translational abstracts, simplified versions of scientific abstracts. There is more information about this in the preprint; you can check out the preprint via the link at the bottom of the slides. I will now only report on the effects of badges on epistemic trust, the integrity subscale to be precise; again, more analyses are available in the preprint. For the perceived integrity of scientists, we preregistered the following hypothesis: participants who read the journal articles with grayed-out badges will attribute less integrity to scientists than participants in the control group, who in turn will attribute less integrity to scientists than participants reading journal articles with colored badges. These are violin plots with means and standard deviations of the results of all three studies, from left to right: undergraduates, social scientists, and the general public, with conditions on the y-axis and the integrity scale on the x-axis. If you look at this, you might already be able to spot that we found evidence for our preregistered hypothesis in the study with undergraduates.
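As an aside for readers following along, the preregistered ordering (grayed-out < control < colored) can be illustrated with a small simulation. All numbers below are made up for illustration, not the study's data; the condition means, sample size, and rating scale are assumptions, and the pooled-SD effect size is a standard Cohen's d rather than the study's exact analysis:

```python
import math
import random
import statistics

random.seed(1)

# Simulated integrity ratings on a 1-7 scale for three conditions.
# Means, n, and sd are illustrative assumptions, NOT the study's data.
def simulate(mean, n=100, sd=1.0):
    return [min(7.0, max(1.0, random.gauss(mean, sd))) for _ in range(n)]

grayed  = simulate(3.5)  # grayed-out badges: non-adherence signaled
control = simulate(4.5)  # no badges shown
colored = simulate(5.5)  # colored badges: open practices signaled

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * statistics.variance(a) +
                        (nb - 1) * statistics.variance(b)) / (na + nb - 2))
    return (statistics.mean(b) - statistics.mean(a)) / pooled

# The preregistered ordering: grayed-out < control < colored
for name, ratings in [("grayed", grayed), ("control", control), ("colored", colored)]:
    print(f"{name:8s} mean integrity: {statistics.mean(ratings):.2f}")
print(f"d (grayed vs colored): {cohens_d(grayed, colored):.2f}")
```

With the simulated separation between conditions, the group means come out in the hypothesized order, which is the pattern the violin plots show for the undergraduate sample.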
Participants attributed the least integrity to scientists when receiving grayed-out badges and the most integrity when receiving colored badges, with small to medium effect sizes. We see the same picture with social scientists, where we were able to replicate the findings with similar effect sizes. In the public sample, however, there was a difference between the grayed-out badges condition and the control condition, but not between the control condition and the colored badges condition. To sum up, our results indicate that badges actually do work for the target audiences of scientific articles. This is good news, because knowing that higher trust is given by the readership to those who were able to achieve badges may strongly incentivize open science practices. Badges did not further increase trust in the public sample compared to a standard journal article. One explanation, also brought forward by Anvari and Lakens, may be that non-scientists believe that transparency is already fully ingrained in the scientific process; the treatment check is further in line with this assumption. And with these discussion points, I will conclude my presentation. Thank you very much. That's really great, thanks, Jürgen. Yeah, it may be that this session has to tip a little into the next session, just because of the way that time is working against us. But I will now pass on to a very intriguingly titled talk: is science updating the definition of human diversity? And our speaker is Sakshi Ghai. I just want to make sure the screen is being shared. Hi everyone, my name is Sakshi. Thank you so much for the opportunity to be a part of this lightning talk. I'm a PhD student in psychology at the University of Cambridge. In today's lightning talk, I want us to reflect on whether science is updating its definition of diversity in an increasingly multicultural, globalized, and digitized world.
To set the stage, we all recognize that we have a diversity problem in behavioral science, which is reflected in essentially two ways: one, we have a lack of sample diversity, and relatedly, we also have a lack of researcher diversity. Many groups continue to be underrepresented, whether that's women, members of the LGBT community, ethnic minorities, socio-economically disadvantaged populations, scholars from low-income countries, and of course the list continues. We also know that the majority of our world's population, approximately 88%, lives in the Global South. As researchers, particularly in behavioral science, interested in human behavior, we must consider whether our science is truly generalizable. In many studies, non-Western populations are often, very interestingly, called an "exotic" sample, and I think the question to ask here is: why is the majority of the world's population considered exotic? To start with our first challenge, the diversity of our study populations, we know that we have an extremely biased sample, which largely includes populations from Western, educated, industrialized, rich, democratic contexts. We have a very clever acronym, WEIRD, coined by Joe Henrich and his colleagues, and we do tend to ignore non-Western populations. Further, they also helped us see that these samples aren't truly representative: we often rely on skewed, college-educated, white, rich samples, even within the Global North. So one issue is, of course, that we're ignoring the majority of the population in the South, but even in the North we continue to exclude marginalized groups. In fact, it's not even 88%: if we go strictly by Dr. Henrich's definition of what counts as Western, which focuses on Western European and British-descent societies accounting for about 8% of the world's population, then behavioral science paints almost 92% of the human species with one flat brush.
So here I'd like to argue that we have to go a step further to address the lack of heterogeneity that may be skewing our research landscape and limiting our interpretive power. But how can science ensure that we remain representative and really benefit all of humanity, and not continue to ignore culture in our need to replicate universal effects? Here I'd like to argue that we need to do a bit of a diversity audit, both to create a diverse scientific workforce and to promote a more representative science. We need to understand and appreciate whether our current antenna, or microscope, or whatever analogy you like, lends itself well to picking up the right definition of diversity, because we are stepping into a world where previously dichotomized ideas are no longer dominant. Even a label like the Global South, or the non-WEIRD category, is incredibly diverse and incredibly huge, and it's hard to classify the extent of human diversity under one label. We must ask researchers and scholars from diverse groups around the world to weigh in and really help us develop a nuanced understanding of diversity, and particularly of the overlooked diversity categories in different parts of the world. Just as a very quick example, and I know we're running against time: I'm from India, and growing up, some of the salient identities, like caste and religion, are not well represented in psychology. Equally, in other parts of the world, Latin America or Africa, there are other overlooked diversity dimensions, like indigenous groups or perhaps sexual orientation, which all have important implications. It's a very complicated matter. Today these boundaries are becoming even more blurred, and thus they are inherently non-binary and continuous. Let's take gender, for example: it's no longer male versus female; today it's trans, it's cis, it's agender.
The definitions, or the dimensions, that define a population need to keep evolving to promote an accurate representation. Why? Because intersectionality really matters, and the interconnected nature of social categories makes human diversity continuous. All these different identities, and I'm not going to read each of them, seem to overlap and intersect, whether that's sexual orientation or even language or ethnicity. And further, we have many forms of diversity today, for example social, cognitive, linguistic, ethnic, which makes diversity more complex and multifaceted. However, we also have to recognize that sample diversity is really not possible without help from diverse scientists. We know that systematic biases and inequities have excluded so many groups from science. In fact, there's so much information asymmetry in even entering the world of science that, once people enter, we really have to make sure that we are retaining diverse voices in the pipeline. Researcher diversity, and I'm sure I'm preaching to the choir here, is not only good for the optics but extremely critical to move the field forward, to answer important questions relevant to diverse populations, and to find innovative solutions for humanity's problems. Both these challenges go hand in hand, and if we really want to make our science global and inclusive, we have to go hyper-local and adopt a culturally sensitive approach, whether we are recruiting study samples or promoting equitable collaborations between scientists from around the world. And just as open science has revolutionized the field, I believe the time has come to also include diversity as part of the same revolution. Thank you. Thank you so much, Sakshi, and I hope that any of the attendees who have questions for Sakshi or any of the other speakers will feel free to either share them in the chat or share them on the Metascience 2021 Slack workspace.
And so our fourth of five speakers today is Nicole C. Nelson, who is going to be talking us through a mapping of the scholarly landscape of reproducibility discussions. So please go ahead, Nicole. Thank you so much. The research that I'm going to present today actually shares a bit in common with Priscilla's research question of thinking about what people mean when they talk about open science; whereas in this study, we were asking, what are people talking about when they write articles about a so-called reproducibility, or replication, crisis? One of the observations made by people who've been participating in this field for a while is that there's a lot of heterogeneity in the way that people use the terms reproducibility and replication, as well as in the problems that get collected up under this label, and there are also concerns that the reproducibility or replication crisis gets represented differently in the popular press or public-facing media than it does within the scientific community. To address these research questions, what we did is a mixed-methods analysis of about 350 articles about reproducibility issues published across two fields, psychology and biomedicine. We sampled articles with a maximum variation sampling strategy, trying to get as much diversity as possible in the dates of publication, the types of authors, and the types of venues that those authors published in. Once we had that database, we took a grounded theory approach to qualitative data analysis, where we coded these articles for 29 different themes. Those themes might be things like questions around the integrity or purity of reagents used in bench science, or appropriate or inappropriate uses of p-values. Then what we did was a factor analysis: we took the kind of unique fingerprint of each article, as created by the amount of space it devoted to each theme, and ran a factor analysis to see how our themes assorted.
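For readers curious about the mechanics, the fingerprint-then-factor pipeline just described can be sketched roughly as follows. This is a toy sketch with synthetic data, only 8 themes instead of the study's 29, and PCA via SVD as a simplified stand-in for the factor analysis actually used; none of the numbers are from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fingerprint" matrix: rows are articles, columns are the proportion of
# space each article devotes to each theme (8 hypothetical themes, not 29).
n_articles, n_themes = 60, 8
base = rng.dirichlet(np.ones(n_themes) * 5, size=n_articles)  # most articles similar

# A small cluster of articles devoting unusual space to theme 0 (say, "reagents")
outliers = rng.dirichlet(np.concatenate(([30.0], np.ones(n_themes - 1))), size=6)
X = np.vstack([base, outliers])

# PCA via SVD on the centered matrix, a simplified stand-in for factor analysis
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # project each article onto the first factor plane

# Articles in the main body sit near the origin of the plane;
# the distinctive cluster stands well apart from it.
dist = np.linalg.norm(scores, axis=1)
print("median distance, main body:", float(np.median(dist[:n_articles])))
print("median distance, outlier cluster:", float(np.median(dist[n_articles:])))
```

In this toy version, most articles project near the center of the first factor plane (lots of discursive similarity), while the small cluster with a distinctive theme profile sits far from it, which mirrors the pattern the talk goes on to describe.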
What we found was a little bit surprising to us: rather than reproducibility-slash-replication discourse being highly heterogeneous, as some of the commentaries predicted, quite a lot of articles clustered together around the center of our first factor plane here, indicating a lot of discursive similarity. Most articles were talking about the same themes to similar extents. We did not find, for example, two different clusters representing articles talking about biomedicine and articles talking about psychology. Most articles were talking about core kinds of concerns such as transparency and other open science attributes. This was true across the articles authored by scientists and journalists, and across the articles published in popular venues and scientific venues, although there was a little bit more differentiation between the public-facing articles and the science-facing articles. So this tells us that reproducibility discourse is on the whole more coherent than commentators have assumed. We did identify three unique clusters, in addition to the main body of articles. These three clusters were one focused on reagents, another focused on p-values, and another focused on the heterogeneity of the natural world, and I'll touch very quickly on each of those in turn. The articles focusing on reagents were arguing that most of the crisis in biomedicine could be attributed to poorly labeled or poorly standardized reagents, and that the general system of science was not broken; it was just reagents or cell lines that were contaminated that could be a problem. A similar type of argument could be found in the cluster of articles around p-values, where people were arguing that most problems could be attributed to the misuse of p-values: if we had better knowledge of statistics, or used other kinds of statistics such as Bayesian models, then again we would not see so many problems.
This last cluster is a little bit different, in that the authors here argued that there should be an expectation of heterogeneity in the natural world, and that we should not expect things to reproduce, because the natural world varies. When we do experiments again, we are going to be repeating them under slightly different conditions, and so we should expect to see different things. These three clusters are all useful, I think, for meta-scientists to know about, because they represent three different clusters of people who do not see a need for a reform of the core structures of science or the incentive structures of science. But meta-scientists should be encouraged, because by and large what we found was a fairly homogeneous discourse, where there's a lot of agreement within these articles about what the key kinds of problems and solutions are within the scientific landscape. There's a short link down below to that article, recently published in PLOS ONE, and you can check out some more of the figures that we have there. Thank you very much, Nicole, that was a really fantastic talk. All of the lightning talk sessions have been really great over the last week to ten days, but this one is prompting a particular number of questions and thoughts in my mind. We'll now go to our final lightning talk speaker. I'm aware that some of our attendees are probably joining now for our upcoming session that was due to begin on the hour, on funder perspectives on how metascience informs broader worldviews. Our time has slipped by about five minutes or so, so we will just finish off the lightning talk session before we move into our next session, hosted by Aaron McKinnon. Our final lightning talk presenter will be Gabriella, and she will be talking about open science and qualitative research. So take it away, Gabriella. Okay, just checking: can you see the presentation? Can you hear me? Okay, so hello everyone. Good morning.
Good afternoon, depending on where you are. Within the next five minutes, I would like to show you a perhaps slightly different approach to open science. I want to tackle the question of whether science can be responsibly open if one is a qualitative researcher studying sensitive topics. Sorry, something is going on with my computer. And my answer would be: yes and no. Let me explain. This list here is of course by no means exhaustive, but it gives you an idea of what qualitative researchers deal with, and my own research belongs here at the bottom: I was conducting in-depth interviews with older adults, asking them about their sexual life. When I first came across the idea of open science, I was amazed by the ideals and, you know, the possibilities, but then I became practical: okay, how am I supposed to open my own research in a similar way? I was inspired by some truly valuable papers published on the topic; it's really a shame that there are not that many of them. And I came up with this. When it comes to opening raw data, I would argue that in qualitative studies it may indeed be problematic, even dangerous. Here I draw on my own experience: qualitative data is by definition contextual. Just imagine participating in an event, taking pictures, then telling your friends about it. No matter how detailed your description is going to be, being there and experiencing it will be different from hearing about the event or seeing the videos. I would say that's the difference between working with one's own qualitative data, for example after conducting interviews and having the context, versus using open data from a repository without all this context. I hope that gives you the feeling.
And also, what about informed consent? My participant may agree to talk to me, of course, but may not necessarily be happy about the idea of a bunch of other researchers, you know, reading the transcripts from our conversation. Next, confidentiality: if I open a transcript, there is always a possibility to identify a person based on their story, and this is something we definitely don't want to happen. Of course, these dilemmas do not apply to all qualitative data (you remember the list), but still we must be cautious. So, if opening raw data is so problematic, then I thought about preregistration, which is another aspect of open science. And here my answer would be definitely yes, like three times yes. We qualitative researchers rarely have a set of hypotheses to test at the beginning, but we do always have a theory we base ourselves on; we know what sort of participants we want to recruit or what data collection strategy we plan to use. So, preregistration in our case can be a great way of opening our study design for the scientific community to scrutinize, for example. Secondly, knowing that flexibility is crucial in qualitative research, pre-registration can be extremely valuable as a sort of living document of our progress, because even if the study design changes, it is not necessarily a flaw, as long as we are transparent about it and justify the changes. So, pre-registration may be a way of openly tracking our development, and this tracking could be accessible, for example, to a reviewer and later to an interested reader. And finally, the inherent subjectivity of qualitative research makes pre-registering, in my opinion, even more useful and needed. It may enhance credibility, and it may motivate us to be more explicit about our frameworks, like theoretical lenses or a priori values.
It might be an incentive to track the research progress in a structured way, which I would say is something that many qualitative researchers do need. Okay, and that would be my very brief lightning talk about some of the challenges and opportunities of applying the ideas of open science in this type of research. I hope it will inspire you to look at opening science from a slightly different perspective, or maybe it will give you some ideas if you are a qualitative researcher yourself. And here at the very end, I would like to thank the authors of these papers, whose work really changed the way I do science, hopefully for the better. Thank you very much.