So we're here to talk about how to make your research open and FAIR: for those who are just starting out, but also for those who have some experience in making their research open and want to learn more about what others are doing. We'll be going through the tools we use and the best practices we've found after iterating, making mistakes, and learning from others, and also why we do it. So we'll be talking about the motivations as well, whether that's for the benefit of science or to improve our impact on society. We'll also go over what the current state of practice is for conducting, analyzing, and publishing research when you want to make it open and FAIR.

My name is Caitlyn Hall. I'm a grad student at Arizona State University.

My name is Lieke Melsen. I'm an assistant professor in computational hydrology at Wageningen University.

Hey, my name is Niels Drost. I'm a research software engineer at the eScience Center in the Netherlands.

I'm Tim, also an assistant professor at Wageningen University, in hydrologic sensing.

And I'm Rolf Hut. I'm an assistant professor at Delft University of Technology in the group of water resources engineering. I'm the lead PI on the eWaterCycle project, which aims to make hydrology FAIR, and where Niels is also involved.

So to start off, we thought we should actually talk about what open and FAIR science is and what those terms even mean. We have some experts on this panel to talk about it, particularly Niels, who will go through what FAIR science is, and Tim, who will talk a little more about what open science is.

Sure. Right, so FAIR. I think everybody really understands the need for science to build on itself, right? This is the whole concept of standing on each other's shoulders, and for that, people have been sharing research results for a long, long time. And now, actually, in the modern age, we're finding that
we really need to up our game on this. So a couple of years ago, a number of people came together and came up with this concept of FAIR, in the context of research data, which is becoming more and more important. They said: to make sure your research is usable and reusable, it needs to be FAIR, which in this case means it needs to be findable, accessible, interoperable, and reusable. Basically, what they said is that if you produce something in your research, other people should be able to use it. These were guiding principles for creating data that other scientists can then still use.

Niels, maybe you can elaborate a little, because I always find that if I talk about FAIR science, the first letter I can remember: I understand what it means to be findable. But the other letters, I always directly forget what they mean or how to interpret them.

Yeah, I think this goes for everybody. For me, it's sometimes more of a philosophy, even though it's actually an acronym. But yes, findable is very much about this: if you don't know data exists, how will you ever be able to make use of it? Right? So this is the first thing: you really need to make sure that this data can be found.

Accessible is then kind of the next step. So you've found out this data exists. That's nice, but is there a link I can click on, or is there some other way that I can get to this data? Can I download it?

And then paywalling comes to mind, right? It's nice that there's a link, but if you click on it and... well...

Well, paywalling is actually separate. That's FAIR versus open, right? FAIR doesn't say that you need to be able to download it with a click. It says you need to describe where you get this data, right? How to get to it.
And maybe that means writing a letter to the ethics commission of some medical facility that actually stores the data, because it's privacy-sensitive. So it's not about making it open; it's about making it accessible: describing, how do I access this data?

And it's the same for me: if I think of accessible, it feels like, oh yeah, everyone should have access to it, or you should grant access to the whole world, but that's not exactly what it means. It's just making sure you describe how to access those data.

Yeah. It's also because FAIR was described by quite a lot of people from the medical community, which is one of the founding communities, let's say, of the FAIR principles. And there you really see that you cannot just open up somebody's DNA data, right? Or scans that you made, or other patient data that you gathered. But you still want others to be able to build on it. So that's why this "accessible", and the whole FAIR principles, were really also meant for privacy-sensitive data and data that cannot be shared, for whatever reason.

And then, Niels, could you... I know that Lieke and I talked about this maybe a year ago. We were like, yeah, it should be findable, accessible, interoperable... and we weren't entirely sure what interoperable meant. Could you describe a little bit what that means, particularly in the geosciences? Since this was started more for data, what does this mean for geoscience?

Yeah, so I think the nice thing about the geosciences is that there's quite a lot of knowledge already about how to make something interoperable. Interoperable, again, is the idea that you want to make sure that some other machine is able to read your data, right?
If your data is just a scan of a picture that you made of an Excel sheet, then it's kind of hard to still do anything with that information. So you need to be able to describe what this data is: not just "here's the data", but also, okay, what's the format?

And then even more for reusable, right? You really need enough references and enough information behind it to actually rely on this data and do something with it. So that's the R.

And in the geosciences you really see this: there are awesome things like NetCDF, right? This is a file format made specifically to be able to better share data, which already has metadata in the format. You can see the author; you can even see the history of a NetCDF file: why was this made, and how?

So far, I think I understand the difference between FAIR and open. We can say open really means it is accessible to everyone, while FAIR basically means it's accessible if you follow the guidelines that are provided to you, so you know where to find it. Because, I mean, I'm just a scientist doing my work, trying to be as open and FAIR as possible, at least that's what I'm saying most of the time. But what I actually do when I write a paper is: I use some data, and I drop those data on HydroShare or on 4TU.ResearchData to share them. What am I doing? Am I doing a little bit of open? How can I make it FAIR? I really don't know where to start, basically.

Well, the nice thing about these resources, like 4TU, but also other systems, is that they come with an index, right? So you already start covering the F of FAIR just by putting your data in such a repository: other people can find it. Although maybe it's still quite hard to find it in the 4TU dataset if you don't know it's there, right?
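As an aside on the "self-describing" quality of NetCDF that Niels mentions: real hydrological work would use the netCDF4 or xarray packages, but the underlying idea, that a file carries its own metadata (author, units, history) next to the values, can be sketched with only the standard library. Everything in this sketch, the file name, the attribute names, and the values, is made up for illustration.

```python
import json

# Mimic NetCDF's self-describing layout: metadata travels with the data,
# so a machine can interpret the file without outside knowledge.

def write_self_describing(path, values):
    record = {
        "attributes": {
            "author": "A. Hydrologist",  # hypothetical author attribute
            "units": "m3 s-1",           # discharge units
            "history": "created 2020-06-01 from raw gauge readings",
        },
        "data": values,
    }
    with open(path, "w") as f:
        json.dump(record, f)

def read_metadata(path):
    # Read only the descriptive attributes, ignoring the values.
    with open(path) as f:
        return json.load(f)["attributes"]

write_self_describing("discharge.json", [1.2, 0.9, 1.4])
print(read_metadata("discharge.json")["units"])  # -> m3 s-1
```

In an actual NetCDF file these attributes live in the binary format itself, together with dimensions and coordinate variables, which is what makes the format interoperable across tools.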
Let me just... maybe we shouldn't forget that not all our viewers are Dutch geoscientists. So, 4TU: it's one of the many repositories that are around for sharing data. I guess 4TU refers to the federation of the four universities of technology in the Netherlands, including the one in Delft.

Yeah, and it's just like HydroShare, but not specifically for hydrological data.

Yeah. Well, another nice resource to use is Zenodo, which is also an awesome open repository.

Does that mean that at some point we just need a whole list of places to start looking if we want to find data? I need to look at Zenodo, 4TU.ResearchData, HydroShare... At some point I just need to know what's where, right? Isn't this like dying of its own success? That's a very bad Dutch translation.

Well, for a part you're right. If I just tell you "it is on the internet", that doesn't help you much either, right? So you still need hints to know roughly where to go and what to look for. But it already helps quite a lot that it's available at all. You also get these awesome things called DOIs attached to most of these resources, which is something you can then point to in publications when you tell people about this data. This gives you a nice, solid, persistent URL identifier to give to people, so that they're sure they can find it again.

But then again, I'm wondering to what extent we as scientists... Well, first of all, how do we invest the time to prepare these data? But second, what I find myself doing very often is sharing the processed data. To what extent should we share the source data, and to what extent should we actually share the model setup?
I mean, I'm a computational hydrologist who does a lot of modeling, so there are many questions involved with sharing there.

Yeah, but then again, that is kind of a turtles-all-the-way-down question, in the sense that I do a lot of experimental design of sensors and related fieldwork, which results in the data that you would consider input data. So do you want my raw measurements? And should those raw measurements be shared as well? And preferably you would also describe the sensor that you used. So there are limits to what you would consider raw data. Basically, someone's raw data is someone else's output, and where do we set those limits? That's something I'm questioning when I share my data.

Well, basically, I think what you should be able to show your colleagues is the pathway from the data that you took, which ideally also has a DOI, through what you did with it, to what resulted from it. If that's a transparent pathway, where the data coming in is FAIR, and you describe the process really well, which I think leads us nicely into FAIR software, and you describe the resulting output data and make sure it has a DOI, et cetera, if that is all transparent, then your colleagues can build on it with the trust that whatever you're claiming in your paper is actually what happened to the data.

Which brings up another nice point. I think Niels started with standing on each other's shoulders. But so far in science, we've always built on each other's conclusions, and now we're claiming that what we need is to build on each other's data. Do we actually need that?
Or do we just need the conclusions from someone, and then gather our own data again?

No, for me, really, the motivation to share my data... Where "data", again, is a question: what's data and what's conclusions? Because is model output a conclusion? But I think you have to go back to where reproducibility comes from, because to me, FAIR contributes to reproducibility, and I think reproducibility is a core achievement in science that we should maintain. I guess it's most clear when we're doing measurements in the field. You can measure hydraulic conductivity in the unsaturated zone; it will vary between sand and clay, and you can determine that the soil type actually has influence on hydraulic conductivity. But then if you repeat it and the soil moisture level differs, you start noticing that actually the soil moisture content determines conductivity. So by repeating, by reproducing, or trying to reproduce, conclusions, you can find out whether you actually have any epistemic uncertainties, whether you're missing any relevant processes. And I think therefore you should always start with reproducing: see if you can rebuild these conclusions, and then continue building. I mean, that's why we need FAIR science in the first place: for this reproducibility.

Oh, I would like to add that making your data open and FAIR also allows for so many new avenues you can explore with the data. So I don't think that, you know, if you collect data for one reason... Sure, if someone needs to build on your conclusions, I think what you say completely makes sense. But another reason why I make my data open and FAIR is that others can take the data and do something with it that I don't have the time, or the energy, or the inspiration to do. And so far, that has actually led to some nice new fundamental insights in processes that I would never have gotten to myself.

So, just to make
sure that I'm getting it: so Lieke, your motivation for wanting to do open and FAIR science is to make sure that the science we're doing is valid, so that there's a greater understanding. Whereas Tim, you're looking to further ideas and bolster more scientific development. What other motivations do you all have for doing open and FAIR science? Because I know there are tons of different ways it can not only build up the scientific community but also go beyond it, and it's something that I know we've all talked about a lot offline. So what other motivations do you have?

Well, one thing to remember is that a lot of funders also see this increased reuse and these other ways that society can make use of this data. So it also starts to simply be a requirement. That's maybe never the best of motivations, but you really see that the funders think: this is important, we should promote this, to the point that it's just mandatory.

So, Niels and I talked to Maria Cruz and a colleague from NWO, the research council, and we asked them why they, as a funding agency, require awardees of grants to make their data open and FAIR.

Well, NWO is the major funding council in the Netherlands. We have a budget of around 900 million euros a year, which is mainly spent on funding competitive research projects, but also on research infrastructure, and NWO is also responsible for nine research institutes in the Netherlands. At NWO we have an open science program that we work on, and we have divided that into three pillars: open access publishing,
open and FAIR data management, and citizen science. And then we also have a very important ambition that really tries to underpin the open science movement, which is aimed at changing the reward and incentive structures in science: rewarding researchers for their engagement with open science policies.

The mission of NWO is to fund world-class research that has an impact on science itself, but also on society, and I think in order to realize that impact, it's just imperative to also promote open science. Researchers should not underestimate how big their audience can potentially be. It's not only the general public; it's also policymakers around the world who want to learn and read about the newest scientific discoveries.

Also, with the current epidemic, I'm really enjoying that if I read a news story and it points to a paper, I can actually go and read the paper. I read a news story about the evidence for and against school openings, and I could actually go and read those papers, and there's no paywall. Anything on COVID you can just read, because most publishers have removed paywalls on those papers, I should say.

So, all grants that are funded by NWO have a requirement that the papers coming out of those projects need to be published or made available through open access, and research data also needs to be shared, not just openly, but in a FAIR manner, at the end of the project. And since January this year, underlying data should be made available together with the publication, at the time of publication.

But that's really interesting to me, because somehow, indeed, publishing open access makes sense, sharing your data makes sense, but then again, Rolf's argument: what do I gain from putting everything out there?
Exactly, and that's the other part of the package, because NWO, and not only NWO but other funding agencies and universities too, are also shifting their assessment criteria. So as a scientist, it's no longer only about having the most papers with the highest impact factors; it's also about shifting the way that we evaluate science and evaluate your performance as an academic. And I think by having that shift, you can really put focus and emphasis on the different contributions that you make as a scientist. So whether it's replicating a study, or collecting data and making it open and FAIR, that should maybe carry a similar value, in the end, as having that one paper in Nature.

I like the idea, but it has a hidden assumption. Basically what they're saying is: we, the funders, want to have impact with the research that we fund. We want this research to have impact on society, or broader, or whatever. Their assumption is that by making it open, we'll have more impact than by not making it open and having scientists use the additional time to do more research. That is something I wonder whether it's been tested. Does making your science open actually lead to more impact?

Yeah. So they actually mentioned some numbers, and hopefully we can include this somewhere in the YouTube description at some point. They referred to some research showing that not only open access papers, but also papers that are based on open and/or FAIR data, correct me if I'm wrong, actually get more citations in the end. And sure, more citations is still an old-fashioned way of assessing the impact of science, but it's what we have.

Don't get me started.

So yeah, there is science.
There is actually science being done on this, and it showed that there is a huge benefit, in a metric sense, to making your science, your data, open and FAIR.

I mean, I get it. It's really nice that there's movement from the funding agencies to stimulate this kind of science, and I get that there should be ways of funding reproducibility studies. But then I have the question: until now, we've been building on 30-year-old data that we've never been able to reproduce. I mean, try to reproduce a paper from the seventies that's been cited 5000 times. We won't be able to reproduce any of these, at least when it comes to computational studies. So what do we do with all the knowledge we're building upon these days that we're actually not able to reproduce?

I think that's a good question. Maybe others have a better answer to this, but personally, I think it would be great if there were more opportunities for scientists to just repeat these fundamental experiments, or reproduce these fundamental conclusions that were drawn in papers 20, 30, 40 years ago. But of course, if there's no incentive and there's no money, who's going to do it?

Well, I'm pushing my own agenda here, because eWaterCycle, the platform that we work on, is actually built to facilitate reproducibility by design and to allow people to build on previous research really easily. But I think we should make a judgment call on what historic research we actually want to reproduce, because it's going to be... Well, if we just blanket say that all historic research that doesn't share its data is useless, then we should start by re-evaluating gravity, because Newton never shared his data. But I think, yeah, it still works, so I think we can build on it.

Because he described the theory and his experiments in a way, right? That demonstrates how well he did science there, in a way.
Yeah, but I do want to redo experiments, for sure. And I think that we don't need to redo every experiment. I think that by looking at which conclusions do and don't hold, both within their original context and in the broader context, we can pinpoint some studies that are problematic, that we would really want to revisit, to test whether they still hold up or whether we're building on faulty knowledge.

That would be great. So, from personal experience: I remember, and I'm not going to say which exact field or which exact papers...

You don't know names?

It's also because I forgot, okay. Okay, so I remember that for some hydrological modeling study, I was looking for a parameter value, and I found a paper that used a specific value. And it's not like, hey, I'm just going to use that; there was some reason to do it, and I used it, and everything was fine. But I started questioning my decision to use this parameter value, and I went back to the paper I got it from, to see where they got it from, because they didn't measure or derive it themselves. They referred to some older paper, which referred to some older paper, all the way back to some random paper in some random journal that arbitrarily chose this value to build their model on. And I was just so surprised that no one ever really checked, or validated, or did this again, to confirm that this is a realistic value to put in your hydrological models. It was such a crucial value in the end.

That's basically how we got to the 0.05 threshold, right? Someone once said, oh yeah, that'll be a nice one, and then everybody said, yes, that's nice, so we should use that. And sometimes it's a nice quick fix, but it's unclear...
I mean, there are so many constants that we use in science just because they fit the model and sort of tie everything together, but it's unclear where they come from.

Well, it's interesting, Tim, that you had that experience, because I had exactly the same one, eventually leading back to unpublished work. So it's not just anecdotal evidence: independently from each other, we both had such an experience of tracing back where things come from. And especially if you consider the large number of decisions that we make while setting up a model. I mean, we're talking about modeling studies now specifically, right? That's a specific branch of hydrology, of course. There's so much that really urges sharing: how you set up your model, how you decided on all those values. And actually, that's why we need a platform like eWaterCycle, because you can share your data and you can mention the version of your model, but there will still be so many details that you don't share that still hamper the reproducibility of your model study.

Yeah, and I think this also goes beyond hydrological modeling. I was talking about this literally yesterday: about how one method has just been accepted, and how we use a lot of hand-waving during numerical analysis of physical modeling and physical experiments, because we're like, well, this really famous person did it 40 years ago, so it's got to be right. And I think that really starts to highlight why not only our manuscripts should be open and FAIR, but also the process of getting to these manuscripts. Not just the initial data that I think Lieke was talking about, where you go out and take a measurement in the field, but also the process of getting to what that manuscript output is: actually showing how you got these values, and where they came from.
So then, if we're using these, I don't know, field-accepted values, at least we're acknowledging that they might be arbitrary, and at least we don't have to go through four or five different reference papers to get to that point. I think that starts to really highlight what assumptions we're making, and ultimately helps science. But then, as we've been talking about, taking this open and trying to make it more FAIR: this one arbitrary constant might not mean anything to someone who's trying to make a policy decision, or who's working for a government, because they might not have as much access to these other papers. So I think that also really highlights why it could be important beyond just our own research.

This sparks two thoughts, but let's follow up on the last one. Do we expect, then... So in the geosciences, we're doing research that a lot of policy is based on, that has impact on the way we shape society and the decisions we make. Do we expect the people who make these decisions to read our papers and go: yeah, Kling-Gupta efficiency of 0.6, this is a hydrological model I should trust? Or to actually look at my Python code?

No, absolutely not. But they're working with scientists who are informing policy. People are reading the papers that we're putting out, and those eventually inform policy.

But the people who are that link between our academic studies and the ultimate policy output should at least know where our information is coming from, and that should be a lot more clear.

And so the thing is: how do we build a system where people, without having to read everyone's work,
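The Kling-Gupta efficiency mentioned above has a compact definition (Gupta et al., 2009): KGE = 1 − √((r−1)² + (α−1)² + (β−1)²), where r is the correlation between simulated and observed series, α the ratio of their standard deviations, and β the ratio of their means. This is a plain-Python illustration of that formula, not code from the panel's projects, and the example series are invented.

```python
import math

def kge(sim, obs):
    # Kling-Gupta efficiency of a simulated series against observations.
    n = len(sim)
    mean_s = sum(sim) / n
    mean_o = sum(obs) / n
    std_s = math.sqrt(sum((x - mean_s) ** 2 for x in sim) / n)
    std_o = math.sqrt(sum((x - mean_o) ** 2 for x in obs) / n)
    cov = sum((s - mean_s) * (o - mean_o) for s, o in zip(sim, obs)) / n
    r = cov / (std_s * std_o)   # linear correlation
    alpha = std_s / std_o       # variability ratio
    beta = mean_s / mean_o      # bias ratio
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# A perfect simulation scores 1; the further below 1, the worse the fit.
print(kge([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> 1.0
```

A KGE of 0.6, as in the example above, summarizes correlation, variability, and bias in one number, which is exactly why a policymaker would need context (or trust) to interpret it.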
can know who to trust, or which papers... If we take it back to geo and hydro: at some point I will say to some of my PhD students, go and look at these papers, be very critical, and then, if you're convinced that whatever they're doing is okay, start building on that. I will retire, hopefully soon, whatever, so my PhD students will have their own PhD students. Will they say to their PhD students: go look at these papers again, be very critical again? At some point we would just say: okay, these papers have been checked and have been found to hold up.

You're referring to that, yeah. So how do we communicate that? How do we communicate that system of trust as a community? Because now it's very intangible. I would like... likes and dislikes?

Yeah, I mean, but in the review process, right, you're doing this check. That's why EarthArXiv, I mean, it's nice, but it's not necessarily something you can trust. Well, if something has been peer-reviewed and published, you sort of hope that it fulfills current standards: three people have looked at it, have read the conclusions, have checked whether they were cited themselves, and have then vouched for it.

I know some papers from my early career where, if people go through them very carefully, they'll probably find bugs in the code that I hope won't have an impact on the final conclusions. But that's the point: everyone has bugs in their code; every model has them. It doesn't matter if you're working with a model that was developed 30 years ago, there will still be bugs in the code, that's for sure. So the current peer-review system, with three people who don't actually have the time for it, nor the incentive to do it right, will not help in building that system of trust. So I think we need something else, and maybe it's just likes and dislikes.
I don't know. But we've got to have something to help scientists make a judgment call: is this something I need to reproduce before trusting it, or is this something that has been reproduced a gazillion times, where we're okay?

Maybe an article should just have a counter of how often it has been reproduced.

Maybe at least you should make sure that reproducing the computational side is easy, which is kind of the easy part, right? Doing measurements again is hard, but for the computational side, we should at least be able to just press a button and get it to run again.

Does that mean we need to adjust the definition of FAIR for software versus data?

Yeah, I don't think it's quite the same. As I said at the beginning, we have this kind of FAIR philosophy, almost, where it doesn't matter exactly what the letters mean; it's the way of thinking. I think this way of thinking translates really well to software, but the actual steps don't. What findable software is, is maybe something you can still reason about, but what is interoperable software? You'll get a very different set of concrete steps and problems.

I have some computer science friends who would say that software is never interoperable, because even if you run the same software on the same computer a second time, you'd have fluctuations because of Boltzmann noise in your random generator. So you get a different output.

Yeah. So, actually, at the eScience Center we also thought about how FAIR would translate to software, and we made a site, fair-software.nl, which, although it's .nl, is not actually in Dutch or exclusively for a Dutch audience. Find the link below. There we really try to give first steps towards making software FAIR: make sure it's somewhere, in this case probably GitHub or some other software repository, and put a license on it.
An interesting one, right, because just like data has licenses, software also has licenses. And there are some other first steps like that. But you're right: some things... If something depends on the random number generator on your computer being nice and toasty when you compute it, then it's going to be a problem.

Well, I also think one of the big differences is this: if I can read the ones and zeros from old data, or maybe even a printed table from old experiments, I can reuse the data. But if you give me old Lotus 1-2-3... what was it called? Spreadsheets. I don't think I have a machine anymore that can read those. And I feel that the turnover time for that in the software world is pretty fast.

Yeah, it's a problem. You actually have to start thinking about packaging the software environment, not only the software, right? So: here's the code, that helps. Try to make your code adhere to standard quality guidelines. Don't use some library nobody has ever seen to do all the math; do that with some kind of standard thing that maybe survives a bit longer. But yeah, ideally, maybe you need to just package your whole machine with your software.

It's actually the same for data.
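One common way to approach the "package the software environment, not only the software" advice above is to pin exact dependency versions in a machine-readable environment file. This is a minimal sketch using a conda `environment.yml`; every project name and version number below is hypothetical, chosen only to show the shape of such a file.

```yaml
# environment.yml -- hypothetical pinned environment for a modeling study.
# Pinning exact versions (and the channel) lets someone recreate the same
# environment later with:  conda env create -f environment.yml
name: my-hydro-study          # hypothetical project name
channels:
  - conda-forge
dependencies:
  - python=3.8.5
  - numpy=1.19.2
  - netcdf4=1.5.4             # reads/writes the NetCDF data format
  - pip:
      - my-model==1.2.0       # hypothetical model package, exact version
```

Even this doesn't freeze the operating system or compilers underneath, which is why the discussion drifts toward packaging "your whole machine" (containers and virtual machines take that further).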
Actually, people are struggling to read old data too, be it on tapes or online, but also because of the data formats that were used. Yeah, it's a constant struggle.

I think it's up to the young, starting, early-career scientists, almost-mid-career scientists, to make a good selection from the work of the previous generation, to see what we should really scrutinize and try to reproduce. But then, in the way we do this replication, we should make sure that at least the replication itself is reproducible: making sure that all the code is available, making sure that all the data is available, for others to build on. And maybe by doing that with some key papers, and I'm really pushing my own agenda here, maybe by doing that with some key papers, we can start building trust that some of that collection of work is actually worthwhile building on, and that some of it should be discarded.

I guess that's the progress of science: to falsify stuff, eventually. So even if we retest old work, we might decide that a couple of years of research have been wasted on a wrong assumption, hopefully not, of course. One of the problems is that we don't really have controlled experiments; the geosciences are too complex for that.

Well, and I also think that if you look at the physics world, the popular opinion about dogma shifts is usually "this was completely wrong and now we're thinking this", whereas what usually happens is: for this stretch of domain, this happened to be true, and then we started looking outside of it, and it turned out to be different. And I think that in the geosciences that's more true than ever, because I'm pretty sure that Darcy's law will continue to hold up for nicely uniform sand packs. We already know it doesn't hold up for inhomogeneous, weird stuff, and that you cannot blindly apply Darcy to 10-kilometer pixels in a global grid.

Yes, this is where science versus engineering comes in, right? For sure. Yeah, we kept on using, or we keep on using, these kinds of
concepts, because they work practically. Yeah, well, in that respect we're in a field where we're not only asked how something works; we're also asked to make predictions, or to come up with data that policy can actually use. And sometimes that means you're constrained: you have to come up with an answer. I can see the tension there.

Yeah, but then, just to come back to Tim's question, because he actually asked the right question: who's going to do all this? I mean, we know now that we'd like to check all the knowledge that we build upon, and from now on we should make the knowledge that we create available to everyone, FAIR and open and reproducible. But then I hear Niels explaining that maybe we should also store the software environment. I mean, I'm not a software developer. Who's responsible, or who's taking care of, sharing the data? And how are we going to do that?

I think that also comes down to early-career scientists, because early-career scientists are the ones pushing hard for this process now. The mid-career and more experienced scientists say, "yes, we should be doing these things," but then I think you're right, it comes back to what Tim was saying: the onus is on the people coming in rather than the people leaving. So who should be doing it?

I guess we are the ones who can still be mobile a little more easily; we're not so rusted into our behavior and ways of working. Especially the ones who are just joining academia, of course. But I think it's a shift of mindset. You know, if you start early, and if you understand that this is an integral part of doing science, you can do it right: you can do it open, you can do it FAIR.

Then how can I not agree with all of that?
But it's for the old, outgoing generation to create the environment where you can actually do this and be rewarded for it. It's the old, outgoing generation that has the ear of the funding agencies and the policy setters. It's nice that we have the DORA declaration, of course, and I think it's really good that universities are signing it, and that funders like NWO are actually starting to mandate these kinds of things. Because having a few champions is nice to start a culture shift, but you'll always need some stronger incentives in place to get the majority of people to change the way they work.

Just anecdotally: I got some feedback on a proposal, and it basically read like, "ah, you know, I'm not so sure... but all their papers are published open access, so this must be a great scientist." It got me to the next round. So, you know, there's a sense of that out there.

So then, I guess: how can the community support this? Specifically, in addition to having these open and FAIR champions among experienced scientists, what can the community start doing to promote this? Say they are experienced scientists: what can that group specifically be doing to support this?

Well, in addition to the scientists themselves, it's also the universities, right? In many universities around the world (I know more examples in Europe specifically), the libraries play a key role here, because the libraries know how to get your papers published open access, and they know the mechanisms to get additional funding if necessary.

And because you mention universities, I find it interesting: as lecturers, we also have a responsibility toward the next generation. I mean, how old are we?
But there's the next generation waiting there, the one now taking their master's courses, and it's our job to already inform them about these principles.

And as reviewers of other people's papers, we should just bluntly say: this data isn't open, this should be rejected; this software isn't reproducible, this should be rejected. And you can be nice about the way you communicate that, of course, especially if you have some experience. You could say: look, this data isn't open; please use the steps here to openly publish your data as well. And make that part of your standard review routine, which I do before I go into the actual content.

There was an editor who did that. Basically, every time a paper was submitted that that editor had to handle, they asked: okay, share your data. And I think about 30 of the initial submissions directly withdrew their paper because of that question.

Yeah, well, in that case, I like that the editor "puts out his neck". Is that Dutch? I think it's Dutch. Sticks his neck out, to force that change, if that's the culture you want to support.

And I also think there's a matter of leading by example. Early-career scientists can really start to help out by actively asking everybody around them in their groups: hey, why aren't we making this open? Why isn't this public? Shouldn't we publish this data? Shouldn't we make everything open access? Look, I found this policy at the university; it actually says we should, so why aren't we doing this? So I think it's also a matter of talking about it and promoting it, in whatever group you are.

And I think the first few steps in making things FAIR and open shouldn't be that hard, right?
It's, you know: put your data on a preprint service, make sure your code is available somewhere, if only on your website, or put it on a GitHub somewhere.

Yeah, and to what you were all saying about promoting those practices in proposals as well: not just "okay, we have all this data, what are we going to do with it now," but making it intentional. I think that also makes this process a lot easier, for people who have some experience with this or who are just starting out: begin with the intention that your data, your software, and your data-analysis processes are going to be open and FAIR. That makes it a lot easier than having to do a bunch of cleanup at the end and hoping for the best.

Yeah, and just to add to that: also be honest that it takes time. It costs time and money to do this, and it's perfectly okay, in the planning of your proposal or your research project, to intentionally and explicitly reserve time and funding to make your data, your software, and your code open and FAIR.

A lot of funders only fund hours for researchers. So I put in a proposal, I get some money, and the only thing I can do, either because my university doesn't allow otherwise or because the specific call doesn't, is hire a PhD student. It would be really nice if I could just say: look, within this call I want to hire a PhD student who's going to study this, and this, and this, and I want 10% support from a research software engineer or a data champion who's going to help this PhD student make a data management plan for whatever she or he is doing. I want a research software engineer who makes sure the code this PhD student is writing is up to the standard for sharing, so that others can use it. That is also something that funders can allow and push for. I agree.
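One small, concrete step toward the "findable" part of FAIR for code shared on GitHub, as discussed above, is adding a `CITATION.cff` file to the repository: it tells readers (and GitHub's "Cite this repository" feature) how to cite the software. A minimal sketch, in which every name, title, date, and the DOI are placeholders rather than real records:

```yaml
# CITATION.cff — placeholder metadata; replace with your project's real details.
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Example Hydrology Toolkit"   # placeholder project name
version: 1.0.0
date-released: 2024-01-15            # placeholder date
doi: 10.5281/zenodo.0000000          # placeholder DOI, e.g. from an archive deposit
authors:
  - family-names: Doe
    given-names: Jane
    affiliation: "Example University"
license: Apache-2.0
```

Pairing a file like this with an archived, DOI-minting deposit of each release covers findability and accessibility with very little effort from the researcher.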
All right. So, if we don't have anything else we want to add: if you don't know what to do, talk to your library. You can always reach out to one of us, too, if you need suggestions on, you know, how to pay an open-access fee, or how to make your data or your software open. We're there, and so are your libraries. Or find a journal that doesn't have an open-access fee but is just open by design, that doesn't charge you and doesn't build on a 40% profit margin.

Yeah, and we'll be sharing different ways that people listening in can put their data up in an open location, or put up preprints to get feedback, and different software packages that are already open and FAIR, so you don't have to start from scratch. We'll be sharing those as well, because it's always really hard to find answers to a problem with Google if you don't know what to search for. So we're going to at least provide a launchpad for that.

But I think Tim brought up a good point: we're always available for these types of questions, but there are also already a lot of open and FAIR champions in the geosciences. There's already a really strong community and support for this. It's definitely not something anyone needs to start alone, nor from scratch. I'm all set.

Thank you. All right. Thank you so much.