Right, hopefully you can see that. Yeah, looks good. Great. So I'm James Smith, and today I'm going to talk about open science and the risks arising from the misuse of biological research.

I want to start with a little bit of history. In the 20th century, the Soviet Union developed and operated a massive secret biological weapons program, which employed over 25,000 scientists across 18 research institutes. The research involved weaponizing and testing a range of agents, including smallpox, plague, anthrax, and many others. And it included genetic engineering with the specific aims of developing vaccine- and antibiotic-resistant pathogens, of increasing virulence and stability, and of creating new clinical symptoms. This extensive violation of the Biological Weapons Convention, which was enacted in 1972, wasn't discovered until the 1990s. Clearly these weapons had the potential to cause a huge amount of harm, but fortunately, they were never used. And fortunately, very few states have had the combination of capacity and will to carry out a project like this.

But now imagine a world where, rather than 25,000 coordinated scientists, any graduate in biochemistry who managed to get their hands on fairly basic lab facilities could make a weapon like that. A scientist with some laboratory experience could decide to take unilateral action and, over a few days, engineer a pathogen that causes a global pandemic: something like COVID, but designed to be worse. Luckily, we haven't seen a major event like this. But given the continuing advances in biotechnology, I think it's worth taking this possibility seriously.

Biotech is being democratized, meaning that the technical expertise required to work with and engineer pathogens is going down, and many techniques that used to require a lot of skill are now available off the shelf. I think the dramatic decreases in the cost of genome sequencing show this. DNA synthesis is another example: it's now possible and cheap to send a DNA sequence to a company to be synthesized. Some of those companies screen sequences for known dangerous pathogens before synthesis, on a voluntary basis, but some don't. Since many pathogen genomes are openly available online, along with many of the findings from gain-of-function research, this is pretty concerning. An important point here is that information itself, such as pathogen genome sequences, can represent a serious risk. Since the global nature of this problem makes regulation extremely difficult, I think community action can play an important role in influencing that risk. And that, I think, is where open science becomes relevant.

In the vast majority of cases, I think that open science is a good thing. It should improve our ability to deal with biological threats, through more efficient development and evaluation of drugs, vaccines, and technologies that detect emerging pandemics. In all these cases, and in many others, having reliable and fast scientific research is clearly essential. But I think there are some areas where open science could actually increase risk. The sharing of digital tools that replace physical laboratory processes is one example. A viral engineering tool might have dual-use potential: it might be used to do harmful as well as beneficial research. And tools like that, along with the data used to develop them, are publicly available on GitHub.
The spread of what you might call open protocols, more detailed, recipe-like instructions for procedures, means that less know-how, less tacit knowledge, is needed to work with biological agents. And the increasing use of preprints means that any gatekeeper role journals may have played in the past in preventing the spread of dangerous information is being reduced. These concerns aren't limited to biological research: we might ask whether it's a good idea for code that would allow anyone to create a deepfake video or audio that could be used to blackmail someone to be available on the Open Science Framework. It seems pretty difficult to me to anticipate exactly which areas of research could be concerning.

Now, I'm not suggesting that in response to this we should return to a model of data and code available on request. Instead, I think one possible approach is to be explicit about the terminal goals of open science and find other ways to achieve them when public sharing isn't possible. For example, imagine that we want to know that a manuscript is computationally reproducible. Could a review report involve verifying this, with the article carrying a statement that that verification was actually done? I think incentives are important here as well. As adoption of open science becomes part of candidate assessment during hiring, is the chance of accidentally encouraging the sharing of dangerous material being considered? And do initiatives like open materials badges allow for restricted-access sharing of code that could be misused?

I think a large part of the open science movement involves a change in culture around the conduct and sharing of research. And just as there is for personal data, over privacy concerns, I think there's a need to encourage and incentivize a culture of responsible openness. This might include the development of tools that allow certain goals of open science to be realized when we can't have full transparency. Thinking about the downstream effects of new policies or incentives seems particularly important, because my impression is that the changes open science pushes for have very wide-reaching effects across the scientific ecosystem.

I'm generally a very big open science fan. I teach open and reproducible science topics, and I use those practices all the time in my own work. But my hope with this talk was to highlight a possible downside risk that I haven't often seen discussed. I think that failing to consider it could have negative consequences for the open science movement, and for society much more broadly. I'm working on a paper on this topic at the moment with Jonas Sandbrink at the Future of Humanity Institute and would love to chat with anyone here about it. So please do reach out if you have any thoughts, positive or negative, over Slack, or directly if you want to speak straight away. Thank you very much for listening.

Thanks a lot, James. That was great. Really excellent, very important questions to be asking about open science, alongside all the efforts going into promoting open science; it's very important to think about unintended consequences, of course. Our next speaker is Peter Andre, who is going to be speaking to us about knowledge problems in economics. So, Peter, do you want to take control of the screen share and open up any slides you have? Perfect. I think I can only start sharing once James has stopped sharing. Sorry. No worries.
Okay, you should now be able to see my slides in full screen, and moving forward and backward should also work. Perfect. So, hi everybody. Thanks a lot for tuning in. It's a great pleasure to be here today. I'm Peter, a PhD student at the University of Bonn, where I do behavioral economics. But the project today, which is called What's Worth Knowing and is joint work with Armin Falk, is more of a sociology-of-economics or sociology-of-science project, and hence, I hope, of interest to the broader community present today.

In this project we start from the observation, which I think is not very controversial today, that what researchers work on holds central societal importance. The topics we explore, the objectives we pursue, these shape the impact that research and science have on society, and hence they matter. I think they are among the most important decisions we make as researchers, reflecting both our academic freedom and our academic responsibility. Unfortunately, what's interesting, what's worth knowing, what's worth studying, what's worth exploring: these are very difficult questions, and they are hard to answer. There's no objective answer, no scientific procedure to reach an answer. Instead, in practice, we need to fall back on our intuitions, our gut feelings, our personal value judgments. And of course, as researchers, as scientists, sometimes this feels uncomfortable, because we try to stay in the realm of facts, but here we can't. We need to make value judgments.

We wanted to know how our colleagues in economics evaluate their own discipline in these respects. How do they think about the research objectives that the discipline is currently pursuing? How do they think about the research topics that economists explore? These are the research questions we wanted to ask in this project. By objectives, I'm thinking about things like policy relevance or the multidisciplinarity of research; research topics here are broader fields, such as public economics, labor economics, development economics, different themes in which we as a community can invest our research resources.

In order to investigate economists' views, we organized and conducted a large global survey in which almost 10,000 economists participated. It's a large survey, but I think its most valuable feature is that we actually know whether or not it represents the full profession of economists. What we did is derive a census of active economic researchers. Starting from the top 400 journals in the profession, which is a large set of journals, we identified authors that had published in the last five years, published mostly in economics, restricting to English-language publications. We were able to contact all of these authors for whom we could find email addresses, in total more than 50,000 scholars, and invite them into the survey. Many participated. And the cool thing is that we know whether those who participated differ from those who do research in general. After we derive some correcting weights, we actually see that in terms of rank, field, gender, age, h-index, and even things like eigenvector centrality in the co-authorship network, our sample reflects the full diversity of active economic researchers. This is just one figure showing you distributions in the sample and the study population for different metrics; the key point is that they overlap almost completely. So our sample speaks for the profession. And here are the key results.
So first of all, what's worth knowing? There are very heterogeneous views in the profession, which is not surprising, given that there's no clear answer to that question; it always depends on personal value judgments, and these are diverse. There's a huge plurality, and this is reflected in the heterogeneous responses of economists. I think this fact is important to acknowledge and important to document. To give you a visual impression, I have two figures here. On the right side, you see the topic shares, the share of research economists want to see on different topics, here enumerated A to Z, and you see that there are very different assessments. On the left side, you see different research objectives and how much they are weighted, how much they are emphasized, and there again you see huge heterogeneity in the responses. I'm sorry, I don't have the time to really delve into the details of the figure; I want to give you a big-picture summary, and hence I will continue with the next main finding.

We document a large dissatisfaction with the status quo. In fact, many, many economists disagree with how things are currently done, and it turns out that the majority also agrees on the direction of change for the profession. Economists want to see more policy-relevant research in the discipline, more multidisciplinary research, more risky research rather than incremental, small-step, traditional research, and a greater diversity of topics. What is interesting in this context is that female scholars tend to be even more dissatisfied than their male colleagues, meaning that their research preferences are less reflected in the status quo, which could help explain why there are still so few women in the economics profession, if their research preferences are not yet fully represented in the status quo.

Let me conclude. Asking what's worth knowing, we document a huge heterogeneity in responses, but also a large disagreement with the status quo. It seems that economics as a discipline does not fully appreciate and work on what its individual researchers collectively prefer. This, I think, is an important finding that deserves discussion, that deserves an exploration of why this is the case. That is not something we can offer in the paper, but I think the paper gives us a good reason to demand an open, inclusive, and continuous debate about what's worth knowing. It's difficult to give an answer to that question, but that doesn't mean we shouldn't ask it, that we shouldn't debate it. Instead, we think this is one of the most important questions for research, and that it deserves continuous attention and an open debate in economics. The paper is available online, so feel free to check it out. And yeah, my seven minutes are over, and I thank you for your attention.

Thanks, that was really excellent, and very, very intriguing results. I'll be really interested to learn a bit more about the gender disparity you noticed in satisfaction with the research priorities of the field. And yes, we are pressed for time, so I'm going to move quickly on to our next speaker, Rubén López-Nicolás, who's going to be speaking about analytical reproducibility and data reusability of meta-analyses.

Okay, thank you. You can hear me? Yeah, that sounds good to me. Okay, perfect. Well, my name is Rubén López. I am a PhD student at the University of Murcia. And well, I'm very grateful to be here today.
I am going to talk about the importance of meta-analytic data sharing. First of all, I would like to point out the relevance of meta-analytic research nowadays. Meta-analyses are usually at the top of the hierarchy of scientific evidence, which makes them one of the most useful tools for practitioners, even more so in a context of a growing number of scientific publications.

On data sharing and analytical reproducibility: when we talk about analytical reproducibility, we mean obtaining the same results by using the same statistical analysis on the same data. It can be considered a minimum expected quality standard for all published research, and to be able to check compliance with this minimum quality standard, the original data must be accessible. Regarding meta-analytic data sharing, in my opinion we can find several other benefits besides analytical reproducibility checks. For instance, open availability of meta-analytic data allows the data to be reused for new scientific purposes, or the results to be updated as new primary evidence emerges. These points are particularly relevant for the efficient development of scientific knowledge, taking into account the aggregative nature of meta-analysis. Furthermore, open availability of meta-analytic data allows for robustness checks of results and conclusions, varying different choices such as inclusion criteria or analytical approach. Also, it's important to note that there is usually no good reason for a meta-analysis not to share all the data and code: with the exception of individual participant data meta-analyses, the unit of analysis involves summary data from primary studies, hence sharing the meta-analytic data usually entails no ethical concerns. With this in mind, and given the limited reasons for not doing so, it seems reasonable to always share meta-analytic data openly.

For this purpose, the FAIR guiding principles are a key guideline to follow. These state that scientific data must be findable, accessible, interoperable, and reusable. Summarizing these principles: for efficient use of available data, the data should be registered or indexed with persistent unique identifiers; it should be openly, freely, and universally retrievable; it should use a formal, accessible, and broadly applicable language for knowledge representation; and it should be richly described, with a clear and accessible usage license.

So what is the current situation of meta-analytic data sharing? In a recent meta-review, we assessed several transparency- and reproducibility-related practices in meta-analytic research published in clinical psychology over the last two decades, reviewing different indicators, including the availability of the data from the studies reviewed. We found that the vast majority of meta-analyses report at least some data. It's important to note that the unit of analysis of a meta-analysis is usually the primary study, so when we talk about data availability here, we refer to summary study-level data, for instance the effect sizes for each primary study. In systematic reviews and meta-analyses it is common to report the characteristics of the included studies, as well as the individual effect sizes in a table or forest plot. So we also coded what data were available and in what format. As we can see in the chart, it was uncommon to find the primary statistics used to compute the effect sizes.
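To make concrete why those primary statistics matter for analytical reproducibility, here is a minimal sketch in Python of recomputing a meta-analysis from study-level summary statistics, using Cohen's d and DerSimonian-Laird random-effects pooling. This is not the review's actual analysis, and the numbers are invented for illustration; the point is that none of these intermediate quantities can be recomputed or checked if only a pooled estimate is reported in a PDF.

```python
import numpy as np

# Hypothetical per-study primary statistics: (mean1, sd1, n1, mean2, sd2, n2)
studies = [
    (10.2, 3.1, 40, 8.9, 3.4, 38),
    (12.0, 4.0, 55, 11.1, 3.8, 60),
    (9.5, 2.9, 25, 8.0, 3.0, 27),
]

effects, variances = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    # Pooled SD and Cohen's d for a two-group comparison
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Approximate sampling variance of d
    v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    effects.append(d)
    variances.append(v)

y, v = np.array(effects), np.array(variances)

# DerSimonian-Laird random-effects pooling
w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)                      # fixed-effect mean
Q = np.sum(w * (y - y_fixed)**2)                         # heterogeneity statistic
tau2 = max(0.0, (Q - (len(y) - 1)) /
           (np.sum(w) - np.sum(w**2) / np.sum(w)))       # between-study variance
w_re = 1 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled d = {pooled:.3f}, "
      f"95% CI [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```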
Moreover, in order to assess minimal compliance with the FAIR principles, the interoperability of the data was also checked. We defined interoperability here as the use of non-proprietary data formats that allow the values to be easily read and manipulated by open-source statistical software. As you can also see in the chart, in very few cases did we find interoperable data. The vast majority of data were reported in PDF format, forcing time-consuming and error-prone manual re-entry of the data for reuse. And well, finishing, the major points of this talk are the key role of meta-analytic research in the current scientific arena, the limited reasons for not sharing meta-analytic data, and the fact that there is still room for improvement in meta-analytic data-sharing practices. Thank you all for your attention. If anyone has any questions or comments, please don't hesitate to contact me.

Thank you very much, Rubén. And yes, I hope our attendees, if you do have any questions for Rubén or any of our other speakers, will put them in the Q&A; there's already a question for Peter in the Q&A. Our next speaker is Alejandra Manco, who will be speaking about, and this picks up on themes that were discussed in yesterday's lightning talk session actually, an overview of qualitative research in open science. Alejandra, please feel free to... Alejandra, your microphone looks muted at the moment. I'm not sure if you're trying to speak. Yes, sorry, I'm trying to share my screen. I think Rubén has relinquished the screen share, so that should be possible. Okay. If you're having trouble with that, we could maybe skip to our next speaker and... Yeah, yeah, sure. You and I could catch up on chat and see if we can sort out the issue with the slides. Okay then, our next speaker is Alexander Herwix, who's going to be speaking about the problem of the problem. Yes, go ahead, Alex.

All right, thank you so much. One sec. All right. So I'll present to you a project that I'm working on; for this presentation I've titled it Engaging with the Problem of the Problem, and it actually complements Peter's earlier talk on what's actually worth working on in academia. My motivation is basically this: we would probably all agree that science and research are all about solving problems, but in the end we often do not think explicitly about what actually makes a problem a good problem to work on, right? We heard that there are different perspectives, and different people will always have different opinions on what it actually means to work on good problems. So I want to call attention to this, what's sometimes called the problem of the problem, and encourage as well as enable researchers to engage with this problem more practically. In this talk, I'll share how I unpack the problem of the problem, and then what we can do with this understanding, at least from my perspective. I've taken a different approach from the one Peter outlined: I've tried to get an abstract understanding of what I take the problem of the problem to be, and I've tried to develop an ontology, an abstract representation of the problem of the problem. It's quite complex, so let's step through it step by step. Let's look at the center. Basically, I describe a problem as something that describes an unsatisfactory situation, right?
And we can define problems in terms of their context, the situation they relate to, and their boundary conditions. But a problem is also always dependent on a person, an actor, that it affects, right? And what I noticed here is that actors can have different understandings, different value systems, different logics guiding them, so the same problem will not be perceived the same way by different actors. Also, problems always help us frame and identify solutions. And what is a solution? In the end, I say a solution is basically just something that transforms this difficult or problematic situation into something better, right? So that's, for me, a solution.

But now, how can we think about what actually makes a problem a good problem to work on? There I draw on research from the field of effective altruism, basically a field of applied ethics, and think about it in terms of an expected value problem. Effective altruism suggests that the best problems to work on are those which are large in scale, really important to what matters to us, and tractable, easy to solve, in the sense of having low barriers that limit us in making progress and actually transforming situations into better situations. And then I also suggest that we should think about importance as a relative construct, so it can change over time, which means we should always consider the trajectory along which the world might be changing and problems might be changing, and we might need to update our understanding of how good it is to work on a specific problem, right? So that's my general understanding of how I would unpack the problem of the problem.

To summarize: I think about the goodness of problems in terms of the expected value of working on them. I appreciate that different perspectives may perceive similar problems as more or less important, so that we have different views on how high the expected value of working on a problem actually is. And I also suggest that it's generally informative to make quantitative estimates of expected value from specific perspectives. Turning back really quickly: the things I mentioned here, scale and tractability as well as trajectory, are things we can actually estimate quantitatively, how large the scale is and how much progress we can actually make on a problem. And if we work quantitatively, we can start to compare different problems and assess them in terms of how high the expected value of working on them is for us.

So what can we do with this kind of understanding? I suggest we can build scaffolding and tools to engage with the problem of the problem more proactively. One way I've done that is by developing a problem assessment canvas, which basically takes this ontology and puts it into a format that can easily be filled out on a piece of paper and guides people through the process of figuring out which problems may be the most important for them to work on. This problem assessment canvas also has a quantitative digital component where you can estimate, in broad terms, how important it might be for you to work on a specific problem.
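As a toy illustration of the kind of quantitative estimate described here, consider a minimal sketch that scores hypothetical problems with a simple multiplicative model of scale, tractability, and trajectory. The functional form, the problem names, and all the numbers are illustrative assumptions, not taken from the talk or the canvas itself.

```python
# Toy comparison of research problems by the expected value of one unit of work.
problems = {
    # name: (scale, tractability, trajectory)
    #   scale: value at stake if the problem were fully solved (arbitrary units)
    #   tractability: fraction of the problem one unit of work is expected to solve
    #   trajectory: multiplier for how the problem's importance is trending
    "problem A": (1_000, 0.010, 1.2),
    "problem B": (5_000, 0.001, 0.8),
    "problem C": (200, 0.050, 1.0),
}

def expected_value(scale, tractability, trajectory):
    """Expected value of one unit of work under a multiplicative model."""
    return scale * tractability * trajectory

# Rank problems from this (single) perspective; different actors would
# plug in different numbers and could get a different ranking.
for name, params in sorted(problems.items(),
                           key=lambda kv: expected_value(*kv[1]),
                           reverse=True):
    print(f"{name}: expected value per unit of work = {expected_value(*params):.1f}")
```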
And then we can also develop digital platforms and tools to support collective problem assessment: to get people to compare their understandings of how important specific problems are, and to enable science or research fields to get a better shared sense of which problems are the most impactful and interesting to work on. But that's work in progress for me at the moment.

So, concluding remarks from my talk: the problem of the problem is a difficult thing. There are still many open questions, and one that's driving me nuts a little bit is how to integrate problem assessments across different perspectives, right? We can have value conflicts. I think it's important to make assessments from different perspectives, but then there's always the question of how we get people to look beyond their own perspectives and actually integrate problem assessments across perspectives. If you're interested in what I'm working on, you can find more details in the paper I've listed here, you can find the problem assessment canvas that I mentioned on the OSF at the link here, and you can always reach out to me for further discussion using my email address. So thank you very much for your attention.

Thanks a lot, Alex. I can see the chat was lighting up while you were speaking, so there's at least one or two comments in there for you. Thanks a lot for your presentation. Alejandra and I are now going to try, in a team effort, to share her slides. I'm going to share the slides while Alejandra speaks. There might be a little bit of awkwardness, but let's see how we can make this work. Thanks. Sorry for the issue. Okay, I think the slides are on screen now; if you just let me know when you need me to skip on, I can do that. Yeah, yeah, sure.

Sure. Thank you so much for inviting me to give this short talk, an overview of qualitative research in open science. My name is Alejandra Manco; I come from León University. Next one, please. The purpose of this literature review is to provide an overview of the characteristics of the qualitative empirical articles on open science that have been published so far, in order to answer what published qualitative empirical research on open science can tell us about the development of this topic. To do so, we used nine facets: the techniques, the research design, the open science component, the subject areas, the sampling strategy, the study sample, the unit of analysis, the study limitations, and research data availability. Next one, please.

The searches were made in Google Scholar and Scopus, and a total of 28 articles were selected. The keywords were "open science" and its translations into Spanish, Portuguese, and French, and the bibliographic search covered the period between October 2020 and January of this year. Next one, please.

As for the most used techniques, they are interviews and document analysis. Interviews are used to investigate researchers' opinions, perspectives, or knowledge gaps about some specific aspect of open science, while document analysis is used to research journal policies or institutional and governmental policies on open science. The research design is usually either descriptive or exploratory, or a combination of both. As for the open science component, most research is about open science policy and open science policies overall.
And the subject areas these papers come from are the sciences and technology, with a prevalence of biology, biomedicine, genomics, and medicine. Next one, please.

As for the sampling strategy, it is notable that there is actually a lack of formal sampling methodology: it is only seldom mentioned, and when it is, the most common sampling strategies are snowball, criterion, convenience, and self-selecting sampling, and their combinations. As for the study sample and the unit of analysis, these change according to the technique. In the case of interview studies, there is usually a small sample, while a corpus of documents for content analysis is usually larger, and it can be further divided into two groups: a small corpus of around 100 documents, and a corpus of thousands of documents. In the case of interviews, the unit of analysis is researchers; in the case of document and content analysis, it is the documents.

As for the study limitations, there are a number of issues. In the case of exploratory research, they are usually about the representativeness of the sample, the comprehensiveness of the research, or the lack of accessibility. There are also generalization issues around the chosen languages, countries, or study areas, and overall a lack of representativeness in the sampling criteria and instruments. As for research data availability, the majority of the article corpus do not publish their research data; when they do, it's published on international platforms such as Zenodo and Figshare. Next one, please.

This shows that the methodologies are rather traditional qualitative methods, namely interviews and document analysis; none of the reviewed papers have used digital qualitative methods, for example. The second point is that there is not a clear distinction between qualitative approach and qualitative methodology; they're both used as synonyms in the articles. The research design is usually descriptive and exploratory; other types of research design, such as explanatory or causal, have not yet been used in this qualitative research. And overall, the qualitative research is very localized, which makes generalization a rather difficult issue. The next one, please. Well, thank you so much for your attention, and this is my contact information. Thanks.

Thanks a lot, and I love that work. Okay, I just realized at the end that I don't think that was full screen, but I think people could see the slides. And yes, thanks very much for your presentation. Okay, so that brings us to the penultimate presentation of this lightning talk session, which is from Meng Liu, who will be speaking about researchers' perceptions of open science practices in the field of applied linguistics.

Just hold on, let me share my screen. Okay, so my screen now? Yep, in full screen. Yeah, hello everyone. I'm a PhD candidate at the University of Cambridge, and today I'm very glad to share with you a project on which I collaborated with a professor at the University of Leeds, which is about open science in applied linguistics. First, a little bit of background. Applied linguistics as a field embraced open science practices fairly early on. For example, we have IRIS, an open repository for research instruments and materials, which was established back in 2011.
We also have OASIS, Open Accessible Summaries In Language Studies, which is a repository of short summaries of the latest research in non-technical language, so that practitioners such as teachers can easily access the latest findings of the field. However, despite these initiatives, open science has not been widely discussed in my field, and we are still a little bit in the dark when it comes to our researchers' perceptions of, and engagement with, open science. We therefore ran a preliminary survey to try to get a sense of what's going on on the ground. I want to emphasize that this is a preliminary, small-scale survey, and we do not claim our sample is representative: the self-selection bias of our sample means it was likely composed of people who were sufficiently interested in open science to take the time to respond to our survey.

We measured open science (OS) practices, OS attitudes, OS barriers, and some background information, one of the most important items being years after PhD, or career stage, which is what you see on the right. Note that we discretized the variable into five categories; we understand the cut-off values are arbitrary, but we used them for data exploration purposes. As you can see from the graph, there is a more or less even distribution across the different career stages.

Our first research question is about the extent to which our researchers engage in OS practices. We operationalized OS practice in terms of sharing preprints and data, with three categories: the top one is having shared both data and preprints, the middle category is having shared either one, and the bottom category is having shared neither. You can see from the graph that about 80% of respondents had some experience with OS practices, but still, nearly a quarter of participants had no such experience at all.

Moving on to the second research question, what predicts OS practices? We considered three main factors. The first is career stage, or years after PhD. The second is OS attitudes, which are composed of the five five-point Likert scales listed on the slides; for example, attitude to reproducibility is the extent to which respondents agree that research should be reproducible. I won't have time to really go into the details of OS attitudes, but I've provided supplementary materials in case anyone is interested. The third is OS barriers, measured in terms of five variables: insecurity, practicality, necessity, ethics, and other barriers. I'll delve into greater detail on this later.

So now you're looking at the results for the second research question. We built an ordinal logistic regression model, and what you're looking at is the regression coefficients in terms of odds ratios, plotted on a log10 scale. The error bars show the 95% confidence intervals of the point estimates, and I color-coded the constructs so that the patterns are more visible. As you can see, years after PhD is a significant predictor with a very narrow confidence interval: there was a positive relationship between years after PhD, or career stage, and the likelihood of engaging in more OS practices. Looking at OS attitudes, only attitude to sharing data was a significant predictor, which means there was some degree of alignment between a positive attitude toward sharing data and the behavior of sharing data.
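For readers who want to see what an analysis of this shape looks like in practice, here is a minimal sketch of an ordinal logistic regression with slopes reported as odds ratios and 95% confidence intervals. The data are simulated and the variable names are illustrative assumptions; this is not the authors' dataset or their exact specification.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 300
X = pd.DataFrame({
    "years_after_phd": rng.integers(0, 31, n),
    "attitude_sharing_data": rng.integers(1, 6, n),  # 5-point Likert item
    "barrier_insecurity": rng.integers(0, 2, n),     # selected any insecurity item?
})

# Simulate a three-level ordered outcome (0 = shared neither, 1 = shared
# either, 2 = shared both) from a latent logistic variable.
latent = (0.08 * X["years_after_phd"] + 0.40 * X["attitude_sharing_data"]
          - 0.90 * X["barrier_insecurity"] + rng.logistic(size=n))
y = pd.Series(pd.Categorical(np.digitize(latent, [1.5, 3.0]),
                             categories=[0, 1, 2], ordered=True))

# Fit a proportional-odds (ordinal logistic) model.
res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)

# Exponentiate the slope coefficients (thresholds come after them in
# res.params) to get odds ratios and 95% CIs.
k = X.shape[1]
ci = res.conf_int()
print(pd.DataFrame({
    "odds_ratio": np.exp(res.params.iloc[:k]),
    "ci_low": np.exp(ci.iloc[:k, 0]),
    "ci_high": np.exp(ci.iloc[:k, 1]),
}))
```

An odds ratio above 1 (for example, for years after PhD) indicates higher odds of being in a higher practice category; plotting these on a log10 scale, as the talk describes, makes ratios above and below 1 visually symmetric.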
Moving on to the OS barriers, the first impression is the effect size of the coefficients, the magnitude of the relationships, which is much larger than that of the OS attitude variables. Specifically, insecurity, practicality, and having other concerns corresponded with a much lower likelihood of engaging more in OS practices. Considering the magnitude, let's delve into the OS barriers a little more.

On the left are all the pre-specified items related to insecurity, for example the fear that sharing preprints might jeopardize anonymous review. On the right is a graph breaking down insecurity across career stages; specifically, the yes/no answers indicate whether respondents selected at least one of the items listed on the left. As you can see, there are some interesting differences among the different career stages. For instance, among PhD students the vast majority expressed such insecurity, whereas for the most senior group almost the opposite distribution was observed.

The next barrier is practicality. This one refers to practical barriers, such as finding the sharing of data and preprints too time-consuming, or not having the necessary knowledge or skills to do it; in short, no time or don't know how. Compared with insecurity, here we see more or less similar patterns across career stages.

Next we have two more barriers. Ethics is particularly relevant to sharing data, and you can see that for most respondents ethics was not an issue. As for the necessity of sharing preprints, the pattern is pretty straightforward as well: almost all the respondents saw the need to share preprints, though we did not see the same level of alignment between attitude and behavior as we saw in the case of sharing data.

And lastly, we have the open-ended responses; that is, participants were asked to offer any concerns in addition to the pre-specified options. Let's look at the graph first. We did a very simple coding of the responses into a binary indicator of whether additional concerns were reported or not, and you can see the first three groups appear to have slightly more concerns compared with the more senior groups. Looking at the specific concerns of the respondents, we may get a better idea of the nature of the challenges we're faced with. For example, one researcher mentioned: "I really don't understand the ramifications of the process. I know they share preprints in physics, but I'm not sure about applied linguistics." This reflects the lack of wide discussion of open science, of why we would practice open science, in applied linguistics, which may be somewhat easy to fix, in that we could start by having more discussions working toward a more discipline-specific understanding of open science. Another participant noted that ethnographic data is very sensitive and personal, and that safeguarding participants' anonymity is common practice. This response is representative in the sense that it reflects a more profound problem that we have: the compatibility of core OS values with different research paradigms, such as ethnography. Even though conversations are expanding, and we see, for example in this conference, more disciplines getting involved, the currently dominant discourse on OS is still very much underpinned by a narrow set of research paradigms, particularly quantitatively oriented ones.
To conclude, apparently there's a lot of work to do regarding open science in applied linguistics. For starters, we should start devising concrete steps to help address those barriers, and seek a discipline-specific definition or understanding of open science. And lastly, more efforts are definitely required regarding the tension between the dominant OS discourse and different research paradigms. And that wraps up my talk. Thank you for listening, and I'm happy to take any questions.

Thanks, Meng. I just had a brief fight with my mute button, but I think I'm audible now. That was a really fantastic talk, and I'm very cheered to see that we somehow managed to keep to time. Our last speaker is Alfonso Pérez-Escudero, who is going to be speaking about the promotion of scientific collaboration using CrowdFight. So Alfonso, as our final speaker, please take the floor.

Thank you. Hello. Alright, so I'm Alfonso Pérez-Escudero, a researcher at the Center for Integrative Biology of the CNRS in Toulouse. But today I'm here as a co-founder of CrowdFight, and I'm not going to present research, but a particular implementation of a platform where we are trying to promote scientific collaboration, and in particular certain types of scientific collaboration. I would like to start with the history of this platform, because it did not start with this aim in mind. It started basically to help fight the COVID-19 pandemic.

This started at the beginning of the lockdown, and the idea we had was the following. We had the idea that there might be researchers who were already working on COVID-19 and who needed something. For example, here we would have a researcher who needs to do a literature search. At the same time, you had thousands of scientists who, because of the lockdown, were at home willing to do something, but unable to do so. And some of them, even if they were not experts specifically in COVID-19, might have the right skills to help with this particular literature search, for example. So the idea was to set up a platform that worked in the following way. We have this very important figure of the coordinator, another scientist whose job is, on the one hand, to understand what the researcher needs, what the literature search is and what the topic is, and on the other hand, to find another scientist, a volunteer, who will perform the task. Then we put the volunteer in touch with the requester and let them collaborate. It's important to note that this is not like a forum where you get many different answers; this is very centralized, and we give a single, very high-quality match to the requester.

This type of platform works surprisingly well. We have more than 40,000 volunteers who signed up willing to donate their time, we have had hundreds of requests, and we have a high success rate on those requests, where success means not just making the match: we follow up and make sure that in the end the volunteer did help the requester in the way they intended. We've had many different types of requests: people who needed reagents and got them shipped to their lab, literature searches as I mentioned, people who were stuck with a protocol and needed help troubleshooting it, people who wanted data sets or clinical data and samples, translation of clinical documents, and many other types. An important thing to note is that even though these interactions started as just one scientist quickly helping another,
in many cases they ended up as long-term collaborations, and in many cases the volunteer and the requester even ended up publishing a paper together. So it sometimes actually benefited both parties.

This was very useful in terms of helping with COVID-19, but then the question is whether it can work in general. Originally we thought it could not, that this was something that worked only because of the lockdown, because of the specific situation imposed by the pandemic. But as we saw events unfold, we realized that we were wrong, that at least part of this could work in general, in normal science. And this is for four basic reasons. First, some of the requests are what we call high-value requests. This means that just a little bit of a volunteer's time saves a lot of time for the requester. For example, if someone spends one hour helping you with a protocol where you are stuck, maybe they will save you a month of trial and error. So this adds a lot of value to the scientific community, and therefore it makes sense to have it even in normal circumstances. Second, because of this high value, the volunteers are actually very happy to help, even in exchange for nothing: when you see that a little bit of your time can make a big difference in somebody's project, that is usually enough personal reward to help. Third, it's a great way to make contacts. We all know how important networking is in science, and this is actually a very nice way of doing it. You will meet someone, you will have a very focused discussion on some particular topic, which can even be fun. In some cases, as I mentioned, this develops into a collaboration where you publish a paper together, but even if that's not the case, you have usually made a contact that can be useful in the future. And lastly, it turns out that we are already used to donating our time in our usual work. We do this all the time when we do peer review, right? If you think about it, we basically donate our time to control each other, to review each other's papers, to review each other's grants. So it takes just a little change of mindset to decide that we can also donate a little of our time to help each other from time to time.

For these reasons, we decided to expand CrowdFight a few months ago. From CrowdFight COVID-19, we went to just CrowdFight, and right now we accept requests from any scientist. Anyone, regardless of what they are doing, can make a request to us. We are a non-profit and our service is 100% free. The only requirements are that the scientist needs to have a concrete task, that it needs to be high value, so the person helping them will help them a lot, and that it can be done remotely. With only these requirements, anyone can make a request.

So if you want to help with this: well, I would like to mention that of course we are doing this because we feel that shifting our culture from its current state to a state of mind where we get used to helping each other can help the community a lot, both by making science more efficient and by making it much more agreeable and less competitive. So if you want to help with this, there are many ways in which you can. You can sign up as a volunteer; it's very easy, just go to CrowdFight.org and sign up. If you are a scientist and you have a need where we can help, please make a request; let me emphasize it's 100% free. You can tell people about us.
You can invite us to give a talk: if you are organizing a conference, or even an internal seminar series, we would be very happy to give a virtual talk like this one, or longer, or shorter. You can also make a donation at our web page, and you can give us ideas and feedback; this is very much a work in progress, and we're very happy to get all the feedback we can at any of these links on the right. And finally, I would like to thank all the team, who are working super hard to make this happen, and all of you for your attention. Thanks a lot.