OK, welcome everyone to this open science and SEI webinar. I think we now have a critical mass and we can get started for the day. So let me start by telling you that this webinar today will be recorded. And Ian, whenever you like, you can click the record button. We will actually post the presentation sections of the webinar to the SEI YouTube channel after this, just so you know. So we have in front of us an exciting program to discuss open science and SEI. We have a guest speaker from the Center for Open Science, and we'll have a response to that keynote presentation from SEI's Research Director, and there'll be an opportunity throughout to ask questions and of course participate in the breakout groups. Today will be supported by the open science learning project team, and that includes Neil Hadaway, Sarah Tilavian, Brenda Occella and Daniel Debar. So let me say up front that if you have questions as you go through, please pop them into the chat box. There is a Q&A section after the presentations where we will gather those together and direct them to the right person. So what is open science? Let me start with a short story about the gentleman you see on the screen: Nils Bohlin, the Volvo engineer who invented the three-point seatbelt in 1959. This innovation replaced the two-point seatbelts of the kind you might know from airplanes, and it became the industry standard that we're all familiar with today. This innovation undoubtedly saved millions of lives. So what's this got to do with open science? Well, this innovation was in fact released as an open patent, which means that it was free for the entire industry to use. The Volvo CEO at the time had agreed to take this route towards an open rather than a closed proprietary patent after he experienced a personal tragedy, losing a relative in a car crash.
The conclusion that the CEO and Volvo came to at that time was that all of the research and development that went into the three-point seatbelt was too valuable to the public to keep to themselves. So they performed a very significant public service in making that innovation available. There are a range of definitions of open science, but essentially the idea is to make science as accessible to the public as possible, because it is after all a public good, and having it available to everyone can help to solve problems quicker and potentially make the solutions better. So now to come back to what it means for SEI. Well, in this open science learning project, our intention is to try and answer the question: how should SEI engage with open science? And in this slide, I've given some examples of the guiding questions that we pose for ourselves. What are we doing now in terms of open science? What could we do? And what do we want to do? As part of that, the circles below show some indicative activities that we will be undertaking as we go through this learning process. And we call it a learning project because we're trying to understand the concept and its relevance to this organization as we go. And we want to involve as many people across the organization, SEI-wide, as possible. So one of those activities, in the first circle, is an internal mapping: trying to understand current activities that we're already doing in a way that might be consistent with the principles of open science. Things like having our policy briefs and discussion briefs online and available for free, and attempting where we can to publish in open access journals. That's on the publication side. But there are many other aspects of our work which are consistent with open science. And one of those really goes to the core of what SEI is about, and that is to try and take science to decision makers, this bridging role.
And so we're going beyond pure openness, rather trying to present the science and our understandings to an audience that can make decisions and lead to change. And what could we do? That's governed by some of the inspiration we might draw from other organizations that are similar to us. It's also, in some ways you could say, constrained by the funders that support our project work. There's an increasing tendency towards more open practices. One example is the new EU Horizon Europe funding, which succeeds the Horizon 2020 funding, and that has open science alongside some cross-cutting considerations such as ethics. So it's gaining more and more prominence and we need to be able to respond to that as an organization. So then the final circle and area of work in this project is to decide for ourselves what we want to do. What is open science for us as an organization? So we need to set the ambition level. We need to adapt the open science principles to what is relevant to the SEI context. And then we need to also make an assessment of the current SEI systems. Are we set up to be able to deliver our work in an open way, and what implications does that have for our work day to day? So that will ultimately end with us preparing an SEI approach to open science. And this will come in the form of recommendations to the Global Research Committee, which Åsa Persson also leads. And we hope to come as far as possible with some ideas of what that will look like, but also make some recommendations for how SEI systems might need to be adapted to make sure that this is a realistic prospect for us. So that takes me to the end of the introduction, and up on your screen you have the agenda for today. I've also pasted that into the chat box so you can see it anytime you like. So let me now introduce our keynote speaker, Melissa Kline Struhl, who comes to us from the Center for Open Science. And she has a diverse background.
She comes from cognitive science and has worked in many areas of empirical research, and since that time has broadened her work at the Center for Open Science. And I am very excited to hear her perspective today on open science and how it might be relevant for us as a research institution, with our bridging role. So I'd like to ask Melissa now to take the floor. All right. Can you hear me? Yes, we can. Welcome. All right. Okay. Thank you so much. So good afternoon to everyone. Thank you so much for having me. I'm really excited to be here, albeit virtually. So just a caveat as I'm getting started here. The Center for Open Science was founded by social psychologists, and similarly my background is in basic experimental psychology. I study, for instance, child development, and my context has usually been in lab studies that may be several degrees removed from policymaking. So a lot of what I'll talk about today is about incentives and decision making and structures within a purely academic context. But then we can also look outward, both in terms of how an individual institution can make choices about what they do, and also how you might be interacting with external partners, or messages that you're trying to send out to a policymaker, or anything else like this. So I hope that there'll be some good connections for you all, and I'm excited to talk with you all about that a little more. Could I have the next slide please? Thanks. So as we all know, even if we're going to limit ourselves to what you might think of as a traditional set of academic stakeholders, we know that the values and the choices that we have as individual scientists are never made in a vacuum. They're going to be part of an ecosystem of contexts that surround us.
So this could be everything from what my university requires me to do, to the types of venues that are available for me to publish in, down to what my peers, either formally or informally, think is good science and what they're telling me they want to see from my science. And then of course I'm also trying to stay employed. I'm trying to keep my job. I'm trying to succeed as a scientist. And we know that succeeding as a scientist is very closely linked to publishing papers. It's actually pretty remarkable, out of all the types of things that are rewarded, all the types of incentives that people have, how strong the single incentive to publish papers in high-profile journals is for scientists. Out of everything that we do as scientists, out of all the activities, all the things that we contribute, this one thing, getting published in journals, has a very, very strong impact on how we're evaluated. And that means that whatever desires we have to get our science right, to do a good job, we also are going to get rewarded for being published. And that means that we're going to get rewarded for doing the kind of science that's easy to get published. And historically, that's going to mean things that are new, things that are flashy, things that have a big impact. And ideally, results that don't have anything that's inconvenient or difficult to explain in the context of whatever story it is we're trying to tell. And this set of incentives, we think, lies behind a lot of the dysfunctions that we see developing in research cultures. And we can trace downstream from the types of pressures that individual scientists or individual institutions are under, through to individual behaviors. Could I have the next slide, please? Thank you. So individual scientists' decisions, those made in the lab or in the context of a single study, flow all the way through to their broader results on the scientific literature and scientific progress.
So for instance, if conducting a replication isn't something that other people find very exciting, so we don't publish it, or if we're not making our methods available to other people, those are the types of things that are going to inhibit us from being able to self-correct. We often talk about science as being self-correcting: we find our errors, we detect them, and we fix them. But the less we know about what's there, and the less time we have to slow down and check our work, the less well that self-correction is able to function. Similarly, if we're reporting selectively, if we are not taking the time to make our science as error-free as possible, over time that can degrade the credibility of the literature. And we know that it's really, really important that we are publishing things that are credible. We want our science to be very strongly supported. We want to be able, not to say that we have no doubt about anything that we present, but to be clear and well-calibrated when we're saying: this is something that we strongly believe to be true; this is something we think might be true, we still have a lot of doubt; so that people who are reading it can really evaluate what we know and what we don't know. These types of problems really do impede our central values as scientists. Our research gets slower and less accurate. And in particular, we waste lots and lots of time, money, resources, our own energy and stress, going down blind alleys, tracking down something that we couldn't clearly understand because of a lack of transparency. The types of impacts that these sorts of problems can have on an entire research field are things that we see across a wide variety of domains of science. And we see it in both the processes and the results of that science. So starting out with the process. Can I have the next slide, please?
The Center for Open Science is currently conducting what we're calling the Reproducibility Project: Cancer Biology. So this is a systematic sampling of 51 high-impact preclinical cancer biology papers, which we then attempt to reproduce: to run the same study again, collect a new set of data, and then analyze it following the methods of whatever that original paper was. Can I have the first animation popping up here? Yeah, there we go. Great. So the thing that we find here that was really shocking, and honestly fairly depressing, is that it actually just turns out to be very challenging to get enough information about what happened in the previous study to be able to conduct a replication without getting information from a source other than the paper, other than what's publicly available. So just by reading the paper you often couldn't get the data that it was based on. You couldn't verify the results of that particular study. And then one more click. Can I have the next slide? Thank you so much. Essentially what we found is that out of these 51 papers it was never possible to take the paper and the publicly available materials and to just run the study based on that. It was always necessary to get in touch with the original authors, have a back and forth, ask questions, and discover things that would be important for the experiment to actually run correctly but that hadn't been documented anywhere clearly for people to find. Can I have the next slide please? We also see these issues coming up in the results of studies. So this is something that people might have heard referred to as the reproducibility or replication crisis in psychology. So here what we're seeing is what happens if you do 100 replications that were sampled from the psychology literature.
So rather than picking out studies where maybe you're a little suspicious, or you think something might be going a little bit funny with a particular study, these are systematically sampled papers whose replications are then all pre-registered. So that's an open science practice where you write down the exact hypothesis, the exact analysis, the exact data collection methods that you're going to be using. You write it down and register it beforehand so that you can then clearly show that you're following your original plan. The replicators made all of their materials, and all of the data when possible, available. So these are attempting to be very high quality replications, where someone else can hold you accountable and check to see if that replication did a good job. And what they're finding in these is only about a 40% success rate. So out of those 100 studies, all of which were reporting a positive, significant result, only about 40% are actually coming out still significant, and the effect sizes in general are becoming much smaller. So this is indicating that the results that we're seeing in the literature are inflated. All right, so at the Center for Open Science, we believe the solutions to this lie in lessons that we learned as children: show your work, and share. We think that these are very basic lessons, and of course, sometimes it's the most basic things that are hardest to figure out how to get right in practice. Could I have the next slide, please? Thank you. So in order to do this, some of the things that the Center for Open Science rests on are the values and opportunities that we have in our scientific and research communities. So something that's really, really working in our favor is that, at least in theory, scholars really value transparency and sharing. When we publish our papers, we're doing so so that other scientists can read our work, see what we're doing, follow our logic.
Sometimes we say that we are not about saying "trust me", we're about saying "show me". We also see that when it comes to increasing the amount of transparency that we have in our work, there is a lot of idealism, especially among early career researchers. We often see graduate students, postdocs, early career professors who are really going out on a limb to say, I think that there's a better way that we can do this, and who show in their own work, even if they don't have access to, you know, big levers of institutional power, that they can do really strong science and demonstrate how this transparency can be used. The other thing that's really nice about research communities is that we trust evidence. So if you can show evidence that changing a particular way of doing your work is effective, that it leads to an outcome that you like, we're fairly ready to say, all right, I'm going to listen to that evidence, I'm going to change what I'm doing. Can I have the next slide, please? Thanks. When we think about the barriers, the things that make it hard to take on open science practices, this is something that I hope SEI will be actively talking about, because the barriers and the things that seem hard are going to be some of your most important teachers for figuring out how you can start to take on open practices, which of those are going to have the most bang for your buck, or even which could be the first places you can start when you're looking at changing your practices. It's important to remember that these barriers can happen at every level. They can happen at the level of your institution. They can happen at the level of the funders or the policies or the government decisions that impact those institutions. They can also be social. So if everybody around you is doing science in a particular way, you might be concerned about what other people are going to think of you.
Like, will other scientists think that you're judging them if you start doing something differently? And then, of course, scientists are very busy. They're doing a lot of things already. And if they're being asked to take on, you know, yet another thing, it needs to be because there's going to be a very clear benefit, something worth spending their time on. And then, of course, there's the individual. I sometimes wonder if this is why the Center for Open Science so successfully came from social psychology, because social psychologists are very, very ready to believe that our reasoning is not perfect. We're very ready to know that we can be subject to biases. We see what we want to see. We can be swayed by attachment to our own theories, our own career success, and all of those things are going to make it difficult for us to make good choices unless we create, you know, seatbelts, honestly, for ourselves. There we go, I found the connection with seatbelts again. Another challenge for science is that it's often very siloed. In a lot of cases, scientists may be working in individual labs at universities, and even if there's a more collected institute, wherever you are, whatever your structures are, it's always going to be harder to set things up across groups or across divisions than it is within them. So you'll often have coordination problems. And then specifically in academia, and I don't know whether this is something that you all face as well, there is often going to be wariness of commercial solutions, wariness, especially in tech, of things that might just be looking to profit off science without particularly improving it. So the Center for Open Science's earliest and largest project and approach to improving open science is the Open Science Framework. If I could have the next slide, please. Thank you. So osf.io is our website here. This is a free online platform for scientists.
So it's essentially a data repository. The Center for Open Science is an open source, independent nonprofit. We're not affiliated with a university or a publisher or anything like that. And what we provide is essentially a way to link together all the different materials that you might be using in your research. So if you have cloud storage that you're already using, either through an institution or through a service like Dropbox or Google Drive, you can link that up to your OSF project to bring together all of the things that you might want to share, and then make choices about exactly which of those materials you're going to keep private and which of those you're going to choose to share with the world. Something that we can talk about later is the fact that in many cases there are differential concerns, right? Sharing isn't all or none. It's often the case that some things are going to be easier to share than others. And so one thing that OSF is doing is trying to make it really easy and clear for you to make those choices. Could I have the next slide please? Thanks. So this is the basic OSF project. This is what you would see if you were visiting a standard project. What you see here is essentially file storage on the right, so whatever files are involved in your research. It could be anything from scripts to text to media files. And then just a simple wiki that lets you explain some context about what these materials are and how they maybe fit into a larger project. Could I have the next slide please? Great. So then what OSF does, now that you've got that place to store your materials, is provide a number of different interfaces to those materials that support your work around the research lifecycle. So if we start on the right over there, the OSF data repositories give you a place to manage your data, and they let collaborators in multiple locations upload and work in a common place.
If some of you are using GitHub and others of you are using Dropbox, you can link them to a common place so that everyone can see all of the materials. But then, going in a clockwise direction, when you're getting ready to report you can take those materials, maybe some that you've been keeping private up until now but would now like to share with the world, and it's possible to go ahead and open up those repositories and make them publicly available. We then also provide things like OSF Meetings and OSF Preprints. This is storage that's particularly designed for people to upload posters or preprints of their work into a common location so that it can be accessible and searchable by other scientists. So OSF Preprints is a platform that allows scientific communities to run an archive like arXiv or bioRxiv. When they're hosted on OSF, essentially what that means is that OSF is providing the storage and common versions of metadata, and then the community can make decisions about what types of preprints they're looking to have uploaded, and what types of moderation, if any, happen in that uploading process. So you'll see that differ across different OSF preprint servers. And then, when you're coming back around to the next project that you might want to do, on the basis of maybe some more freely available information that's now out in the open, OSF allows you to pre-register your research. What pre-registration means, in case anyone's not familiar, is just that when you're getting ready to run your experiment and you've got some confirmatory hypotheses, some confirmatory analytic tests that you're planning on doing, you can write them down beforehand. You can imagine this is literally writing it on a piece of paper, putting it in an envelope, and signing over the top of the seal with a date on it.
So what this lets you do is essentially say: I'm showing you the time-stamped plan that I have for how I'm going to conduct this research, what methods I'm going to use, what tests I'm going to do, so that later on, when you're looking at your data, you're not as tempted to get swayed by something unexpected that you found. That's not to say that you shouldn't report those unexpected things; you absolutely should. But this lets you clearly remember, for yourself and for other people, which decisions you made prior to seeing the data and which decisions you made based on the exploration of that data. All right, can I have the next slide please? All right, so what do we do when we think about how an institution or a community or an organization is going to change what they're doing? How do you change the culture? We often think about this in terms of the cycle of how people adopt a new technology. So you've got your real early adopters who are really eager to try out anything new, kick the tires, be your beta testers, really work with something that's maybe not quite ready to go, but it's new and it's cool. They're ready to help you figure it out. Then you've got people who are happy to try something but are looking for a little more support. And then maybe later on you've got people who really would prefer to stick with what they're doing unless there's a strong reason to change. So you can think about this almost as a pyramid of different types of interventions, different ways you can think about changing. Right at the beginning, if you don't have a mechanism for how people can share something, you might need to start with that basic infrastructure. There just needs to be a way, physically or software-wise, for people to actually share this information.
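[Editor's note: the "sealed envelope" idea of pre-registration can be sketched in a few lines of code. This is not how OSF is implemented, which handles time-stamping and registration for you; it is just a minimal standalone illustration of the principle, with a made-up example plan: record when a plan was written and a cryptographic fingerprint of it, so anyone can later check that the plan was not quietly revised after the data came in.]

```python
import hashlib
from datetime import datetime, timezone

def register_plan(plan_text: str) -> dict:
    """Freeze an analysis plan: record when it was registered and a
    SHA-256 fingerprint that changes if even one character changes."""
    return {
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(plan_text.encode("utf-8")).hexdigest(),
    }

def verify_plan(plan_text: str, registration: dict) -> bool:
    """Check that the plan someone shows you matches the registered one."""
    digest = hashlib.sha256(plan_text.encode("utf-8")).hexdigest()
    return digest == registration["sha256"]

# A hypothetical confirmatory plan, written before data collection.
plan = "H1: group A > group B; test: two-sided t-test, alpha = 0.05; n = 60 per group"
registration = register_plan(plan)

assert verify_plan(plan, registration)                      # untouched plan verifies
assert not verify_plan(plan + " (revised)", registration)   # any later edit is detectable
```

The point of the sketch is only that any change to the plan after registration is detectable, which is exactly what makes the "decisions made prior to the data" distinction credible to readers.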
Once you've got that, if you do have something like that available, you're then looking at these additional types of cues, additional types of pressures or support, that are going to help people decide whether or not they're going to make that change. So this is everything from, look, people are more likely to do things that are easy for them to do. If you've got a choice between something easy and something hard, people are going to go for the easy option. So if you can make your option easier, it's going to get more attractive. You can make it something that's rewarded socially. You can make it an expectation that we all have for each other, that we're holding each other accountable to. That can be a really strong force for changing how we start to make those decisions. And then for an institution this goes all the way up to: it can either be formally rewarded for taking on particular behaviors, or even required, if you're in a position where you can actually set a policy that people are going to be following. Can I have the next slide, please? Thanks. Just to let you know, in that case what I'll do is tell you a little bit more about one of the larger interventions that COS is focusing a lot of our attention on right now, which is registered reports. Could I have two slides forward please? Oh, one back. Sorry about that. I think I lost the slide. All right. So, registered reports. I've talked about pre-registration so far, which is just writing the plan down for your own purposes. Registered reports are about bringing that to the journal. So rather than conducting peer review only after the results are all available and the paper is all written, it's about taking that plan, that protocol, your specific hypotheses, and actually running your peer review at that point.
One of the things that makes that really exciting is that you can get really strong feedback about what experiment you should run and what data you should analyze, to make the study as good as it can possibly be. And then the thing that makes it more attractive for scientists who might take this on is that now you have a commitment from that journal that they're going to publish your study no matter how the results come out. Can I have a slide forward please? Thanks so much. So this is a really interesting example of an intervention because it brings in a lot of your stakeholders. Lots of people care about what happens in that peer review process for how a paper is published. So what we can see, next slide, is that when journals start to provide registered reports as an option for people to publish, we see some of the types of changes that we're looking for in research culture. So compared to traditional articles, we see more null results getting published. That might seem a little strange, like, we failed more, and we're looking for that? And the answer is yes. This means that we are more quickly and more successfully finding out when our hypotheses are wrong, so we can focus on those that are correct. And one more click. Great. And even though we're seeing more of these null results, these things that are maybe a little harder to think about, maybe not as flashy, they're still being cited at similar or even higher levels than other articles in the same journal. So scientists aren't losing this really important marker that their work is being listened to. All right. So what does this mean for all of you? I've moved this into a discussion about what steps SEI might want to be taking. The primary message that I want to leave you with is that you've got a wide variety of types of change that you can consider, and then you can target those changes to your specific goals and contexts.
So if you don't already have a basic set, if you have a missing piece of infrastructure, maybe you're not currently pre-registering and you'd like to, then maybe what you need to do is identify a platform that everybody is going to use to do it. Next slide. You can also think about taking what you're doing and making it easier to do. Easier user experiences mean people are going to enjoy what they're doing and have an easier time. So for pre-registration, maybe you want to make a template that's specific to your institution, where you're pre-registering the specifics, the types of hypotheses, the particular important parts of protocols that are relevant for your types of science. Next one. You can make it normative, right? So this is very much about how you talk to each other, what kinds of things you focus on in your communities. You can highlight it: we use badges to say, hey, look, I did share my data, you might be interested in knowing that. You can fold it into training courses. And it's also really, really important that this type of debate remains open. So this is just an example of two papers arguing about whether or not pre-registration is valuable at all, or valuable only in certain cases. This kind of debate is really, really important because it's what's going to sharpen your focus and help you identify those changes that are actually going to work in the right ways at the right times. And then finally, from these informal norms you can move on to the types of things that institutions can do. So either with your own funders or as you're making internal decisions, funders can ask you to take on open practices.
You can have awards or other types of recognition that specifically focus on rewarding those behaviors, and if publishing is something that you focus on, then you can look for opportunities to publish things like registered reports that are going to let you show this transparent process more fully. And then finally, one more level, something that we're only starting to focus on just a little bit, is, next slide, actually making it required. So you can make the decision to actually make something mandatory in a particular context. Here, one thing that COS does is provide a couple of policies that a journal could adopt to say: this is what you would do if you wanted to make open data mandatory in certain respects, and here's some language that you could use. And then, just the last slide, next one: I've talked a lot about pre-registration and registered reports, but just to remind you all, when you're thinking about the types of things that you might want to bring in, or things that you're already doing that you want to highlight and pull forward, there are a lot of different behaviors and individual choices that you're making right now. How do you want to shift those, and what are going to be the right tools to make those successful shifts for you in your particular context? So yeah, that's about all I have. The next slide just has some links that people can hang on to as they're going, and you could take a screenshot, or the slides will be available after, I guess. But yeah, that's all that I have. Thanks for keeping me roughly on track with the time here. Yeah, all right. Do we want to move on to discussion? Thank you so much Melissa, that's a fascinating presentation. And I think here on this slide you should be able to click through, so take the opportunity now, people in the group here, to click through to any of those links. But thank you so much Melissa again.
Now the next part of our agenda is to hear from our Deputy Director and Research Director, Åsa, who is the respondent today. That means an opportunity to reflect on the keynote we've had from Melissa and to think about ways to frame this in advance of the question and answer. Please, everyone in the group, type any questions you have into the chat; Neil is there to take note of them and can direct them to Melissa or Åsa when we get to the Q&A section. So now I'd like to ask Åsa to please take the floor. Thank you so much, Tim, and good afternoon, everyone. Thank you very much, Melissa, that was really interesting and I think we all learned a lot. I also want to thank the whole team working with Tim to make this webinar happen and bring this project and discussion forward. I don't want to take too much time, because what we really wanted was for this webinar to kick off a discussion and activate you all as researchers in thinking about this. But maybe I'll start by reinforcing what Tim said when introducing why we are doing this. SEI is not a big university; we don't have lots of labs, and we're not doing the sort of science where you have big new breakthroughs or Nobel Prizes, so open science can feel a bit distant. So let me explain why we wanted to have this kind of learning project this year. The first reason is to really invest in our learning around the science part. As you know, we're trying to bridge science and policy, and we have invested quite a lot recently at SEI in understanding better the policy and the policy process that we want to influence and impact.
We now have a very nice publication, a policy assessment prepared by Johan Sjölynsjärna, with various tips on how to engage effectively with policy makers, and of course we also have the strategic policy engagement streams, where we are trying to work more strategically and systematically on improving our potential to have impact. So we thought: let's invest a bit on the science side too, and try to keep up to date with what's going on in science and what the big trends are. We chose this framing of open science, but maybe there are also other big trends we should consider, and of course the ultimate purpose is to understand what these science trends mean for sustainable development, and also what they mean in terms of north-south relationships, etc. That's the first motivation. The second, as Tim also mentioned, is compliance with funder requirements. For example, we know that we need to be better structured in how we explain data management plans, and here the various tools Melissa showed come in very handy at a more technical level: how to make sure to share your data and make it available. And of course here also comes the question that Rasmus put in the chat: when is it not appropriate to share data, and when do you not want to have it open? That's a really important question for us.
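One concrete fallback when raw data cannot be opened, which comes up again in the discussion, is to publish a data dictionary instead: the definition, type, and range of every variable, without any values. Below is a minimal sketch of what such a shareable dictionary might look like; all variable names and ranges here are invented for illustration, not taken from any SEI or COS dataset.

```python
# Sketch: a data dictionary that can be shared even when the raw data cannot.
# All variable names and ranges are hypothetical, for illustration only.

data_dictionary = {
    "participant_id": {
        "definition": "De-identified participant code",
        "type": "string",
        "range": "P001-P120",
    },
    "age_months": {
        "definition": "Participant age at session, in months",
        "type": "integer",
        "range": "18-36",
    },
    "response": {
        "definition": "Coded response on each trial",
        "type": "categorical",
        "range": ["look_left", "look_right", "no_look"],
    },
}

def describe(dictionary):
    """Render the dictionary as plain text, suitable for posting publicly."""
    lines = []
    for name, meta in dictionary.items():
        lines.append(f"{name}: {meta['definition']} "
                     f"(type={meta['type']}, range={meta['range']})")
    return "\n".join(lines)

print(describe(data_dictionary))
```

Posting a rendering like this makes the dataset findable and describable without exposing a single sensitive value, which is exactly the boundary-drawing exercise discussed below.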
The third motivation, again mentioned by Tim, and it would be interesting to hear Melissa's reflections on this, is that we are a research institute whose mandate is not only to produce excellent science but really to make that bridge. How can we position ourselves? Do we try to go beyond open science: not just make the science available out there to stakeholders, but actively make it accessible, make it relevant, and show there is uptake? How can we demonstrate leadership in doing that? So those were the motivations. I also thought Melissa's next-to-final slide was very useful, taking a step back and looking at these open science behaviors, I believe you called them, that we are really looking at when we talk about open science. This was the first step that Tim, the team and I started discussing: what are the key principles? I'll just read them out now, maybe as an input to the later group discussions. Our understanding, and again we have experts with us, is that open science involves maybe six, or up to nine, principles, some of which were mentioned in Melissa's behaviors. Basically: open methodology, sharing what methods you use and how you apply them at an early stage; open source, do we share our code and how, and I know there's a discussion among our tool experts at SEI on when this is appropriate or not; open data, and I think you covered this really well, Melissa, how do we prepare data management plans and store the data; open access, which I think everyone is quite familiar with, how we make our journal articles openly accessible; open peer review, which is maybe more for journals and how they organize the peer review process; and open educational resources, the idea that we are moving away from the classroom towards MOOCs and virtual courses, and here SEI Asia has spearheaded some of
that sort of online training work. That was six principles; there are three more. Open synthesis, which I will leave to Neil Hadaway, who is a real expert on it and can explain it. Open interests, which I assume refers to being transparent about who is funding your research, etc. And finally open discovery: how do we actually discover research and conduct literature reviews, again something where Neil Hadaway has a lot of expertise. So those are the things we are looking at, trying, as Tim explained, to work out what is relevant to SEI and what is not, because we are not so big that we can invest endless resources into adopting all the newest best procedures; we need to be smart about what really matters for us to do rigorous, robust research and also have impact. Two more comments from me. One: I thought it was interesting to hear your perspective, Melissa, on the dysfunctional research culture, which I think many of us have seen and are aware of. It is something we obviously try to avoid at SEI, and we try to balance how we evaluate ourselves internally, not just at the individual level but also as an institute: we are not only looking at publication and citation metrics but also at our impact. And that leads me to my final big open question, maybe for Melissa to respond to, or for the groups, Tim, whatever we have time for. I'm curious: is there any relationship between the whole open science agenda and the impact-led research agenda? We hear a lot now about mission-oriented research and the idea that science is supposed to be very active in solving society's problems, and in this vein I think SEI has always operated in that mode, very problem-led research. Are these just parallel trends in science, or are they actually related? Will open science help us to have more impact, ultimately, and vice versa? So that was my big discussion
question to you all. But again, thank you very much, Melissa, for sharing your expertise, and we look forward to hearing from all SEI colleagues. Thank you very much, Åsa, and yes, I think it would be great to hear from you, Melissa, on the question Åsa just posed, and right after that we can go to Neil, who can take us through a couple of questions emerging from the chat. So over to you first, Melissa. Yeah, sure. Thank you so much, Åsa; it's great to hear a little more about what you all are focusing on right now. I think the question of what you are going to do with these open behaviors, given the size and focus of your institute, is exactly the question to be asking, because it sounds like you are in close contact with scientific work: you depend on that work in some contexts, and you're also doing your own analyses internally. So something very valuable you can do, if you have those partnerships with scientists, especially if you use things that are open, is this: if you notice that another scientist or group is making things available, use them as much as possible. You can make it clear back to the scientists that the things they are doing are actually having an impact on your lives, and also, when you're sharing things out, you can say: I'm able to do this because the science was made open and transparent; I'm able to make these policy recommendations because I can see clearly what we're sure about and what we're not. This is related to the question about data confidentiality, which is very close to my heart; as a psychologist, I spent most of my PhD taking videos of other people's children. There are many contexts in which you can't share absolutely everything, and it wouldn't be ethical to do so. But what you can do is talk about why you're making the choices that you're
making, and when you identify something that is ethically inappropriate to share, or should be held back, really look at why you're holding it back: both so that you can communicate your values ("I'm not sharing this with you because I don't have permission from this child") and because that can help you identify the boundaries of what might actually be okay to share. So: I'm not going to share the videos, but maybe I can share the tabular data. I can't show you the video of this child, but I can tell you the responses they made on every line; that's de-identified, I don't know who this child is, so I can show you that data. Or maybe even that is too sensitive, but I can say: I can't show you the values in this data, but here's my complete data dictionary, the definition and range of every variable in the data set. You can make those kinds of materials findable, so that it might then be appropriate for someone to reach out to a researcher and say: I know this isn't totally public, but can we come up with a way to share it in a more appropriate context? You can't do that if you can't find at least a hint of what might be available. So I really do think that talking very openly about the choices you make, and about how you use other people's transparency, can both be really powerful and effective. The other thing I wanted to flag, related to the question of impact or mission orientation (I don't know if this is as much of a dynamic in the EU), is that in the US there can sometimes be a funny tension where there's a mandate for some set of policies to be evidence-based, but no definition of what evidence-based means. You can even get into a place where people use that mandate to block change or to block an initiative, to say: we can't do anything, we don't have evidence for this; I don't see a strong study saying that this is
true, so we shouldn't do anything right now. So something that I think is really important for a policy-facing organization is both to make a really strong case when the evidence is there, to say: this is definitely evidence-based, these are the policies we can follow, and you become more credible the more you focus on the things you're very strong on; and also to be able to say: look, we only have initial evidence for this, but it could be exciting; or: it's true we have very little evidence on this, but we don't have time to wait, and although we don't have evidence that this is the best program, we have strong reasons to believe in it. Maybe we have observational data, maybe we have data from a very different context, maybe we have imperfect data, but something where we can say: this action is better than no action, or better than the status quo, based on some limited initial data, and there's a mandate to act, so here's why you should act. I think that can be very nerve-wracking if you're trying to reach a goal that's values-driven; if you're feeling, what if I find out that something I wanted to promote doesn't have a strong evidentiary base? That can be very scary. But what I hope you have the opportunity to model is why that's not something to shy away from but something to move toward, because it's what's going to let you get to those goals. If you're able to say, I very strongly feel this part is credible and this part is not so credible, then you become a trusted source, a trusted platform, and you can make your strongest recommendations in the most focused way. Does that make sense? Maybe I'll stop there and let us move on to some other questions. Thanks very much, Melissa, it's a really fascinating talk,
really interesting to hear you talk about something you're so passionate about. We've got a couple of related questions. Karen asks: how have the top-tier and other journals responded to this philosophy and movement? Because researchers aspire to publish in the top journals in their fields, and their career advancement often depends on the journals in which they publish. Yes, so what I'll say is that it's certainly an evolving field. I won't speak to any particular journals, because I don't want to say anything wrong about a particular group, or miss the ones most relevant to you. But what I will say is that we certainly catch their attention. For instance, some of the work the Center for Open Science does is to study and make public what journals are currently doing. The TOP Guidelines, for example, essentially define levels: "our journal policies require you to either share your data or say why you can't"; "our journal policies require you to upload your data either to a public repository or a protected third-party repository". There are these different levels, and you can then take a journal's stated publishing policies and ask which of these things they are currently asking for. Some of the top journals will show fewer of these practices than some of the smaller, front-runner, cutting-edge journals that are trying out different models. The other things we see: larger journals will notice if they're getting beaten on any metric by a so-called smaller journal, and many of the larger publishing houses are also trying to promote their own open models. So something we see is that they understand there's pressure, they understand there's a desire for more open options, and they're going to try to figure out how it fits within
their business model, and how they can compete alongside rather than getting left behind if scientists move away from the older, more closed models. Great, thanks very much. My notifications are popping over the button, sorry. We just had a quick question from David, which I'll go to: to what degree can we pursue open science paradigms when many of our analytical tools are proprietary? Yeah, we run into this as well: we work a lot with journal metadata, which we don't have permission to share, or journal full-text articles, which we don't have permission to share. A lot of open science is going to return you to things that are very old and familiar, which is just good documentation. So if you're not able to publish the code base, MATLAB or whatever it is, and you're not able to go around handing out free copies of MATLAB, you can still say: this is the version I used, these are the packages I used, and here's my analytic script that I can share with you. So one thing is to document as much as you can, and share everything around that proprietary part that you can't share, whether that's the code or the data. The other thing you can do is provide very clear instructions for how someone would access it. Let's say it's a relatively unknown program sold by only one software company: you would say, I used this program, this version, and you can access it at this location; or, you can access this data source by applying to this organization. It's always going to be about limitations; there are almost no projects where absolutely everything can be made open. Something that I think can be really nice for individual scientists, so you don't have to wait for a new mandate from your organization, and not just scientists, sorry, for people in general, something that you don't have to wait
for is to do your work in a sharing-aware way. Even if right now you share nothing, you can work in a way where you ask: what if I was going to share everything? How would I do this if I was going to make it all public, or available to someone who doesn't already know about it? That includes both making it available and making it accessible: how would I present this if I wanted it to be accessible to someone coming to it for the first time? This has a big, very selfish advantage for you personally, which is that that person is you in six months. You've gone away from your project, you've come back, you don't know what anything is; that documentation is going to help you as well. And then, if you've been aware of the possibility of sharing at some point in the future, when you find the right opportunity to make something public that you weren't sharing before, you're going to be in a much better state of preparedness to do it. That's great, thank you so much. I was going to say: be kind to your future self is the best reason to, absolutely selfishly, be open.
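The documentation practice described above, recording versions, packages, and access instructions even when the tool itself is proprietary, can be sketched in a few lines of code. Everything in this sketch is an illustrative assumption (including the invented "ExampleStats Pro" tool), not a prescribed SEI or COS workflow:

```python
# Sketch: capturing a minimal "environment record" to accompany an analysis,
# so the documentation around a proprietary tool can be shared even if the
# tool itself cannot.
import json
import platform
import sys

def environment_record(tools):
    """Build a shareable record of the analysis environment.

    `tools` maps a tool name to its version and an access note, e.g. how a
    reader could obtain the proprietary software for themselves.
    """
    return {
        "python": sys.version.split()[0],
        "os": platform.platform(),
        "tools": tools,
    }

# Hypothetical example: a proprietary package we cannot redistribute,
# but whose version and vendor we can still document.
record = environment_record({
    "ExampleStats Pro": {
        "version": "4.2",
        "access": "Licensed from the vendor; see their website for trials.",
    },
})

print(json.dumps(record, indent=2))
```

Committing a record like this alongside an analytic script is one cheap way to work in the "sharing-aware" mode described above: it costs minutes now, and it is exactly what your future self, or an outside reader, needs to reconstruct what you did.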