Okay, hello everybody. I see we have 28 attendees already, so maybe let's wait another minute for everybody to join us. Well, I think in the meantime I can start with the introduction. Good afternoon, everybody, and welcome to our seminar, "A FAIRy Tale: Managing Your Data to Be Findable, Accessible, Interoperable and Reusable." It's part two of our series on open access and open science, and like the last event, this one is jointly organized by STI and the ICTP Library. I'm very happy to introduce Professor Eleanor Julia from the University of Turin. She is head of the Open Science Unit at the university, and she is also part of the International Open Science Network, taking part in several working groups, EU-funded projects and scientific boards. She has been training and advocating for open access and open science since 2010, so there is a lot of experience she will be able to draw on. Thank you for that. We can start with the talk right now. Thank you for coming, Eleanor; the floor is yours.

Thank you. Thank you for the introduction, but it also means that I'm really old, because, you know, when you say "I've been advocating since 2010," it means that too. Thank you for inviting me, and thank you to the people joining; I see people are still joining, but anyway. From now on I will be switching off my camera.

Yeah, sorry to interrupt, there is something I forgot, since, you know, I'm new to the whole thing. For everybody: during the presentation, please post your questions in the Q&A section, and after the talk we're going to come back and answer them.

Thank you for reminding us. So let me check if it works. Can you see the presentation mode?

Yes, we can see it.

Okay. As I told you last time, because this is our second meeting, last time we saw something about open access, the why, what and how; today we are going to deal with the FAIR principles. As I told you last time, the slides are already available on Zenodo, and I will put the link into the chat later on. And actually, as I told you last time, it's a sort of mission impossible to talk about the FAIR principles in less than one hour, just to leave some time for the questions, but anyway, we will try. These two meetings are just a sort of appetizer, if we can put it this way, because there is a lot more to discuss in open science; we would need much, much more time to discuss all the elements and components of open science.

Coming to FAIR: why do we need FAIR? Because data are difficult to find, and once you have found them, they might be difficult to access; and once accessed, they might be difficult to interpret, to understand. And if you don't find them, then you spend a lot of time and money recreating them. Have you ever lost your own data, or could you no longer access or understand your own data? All of the above are the reasons why we need FAIR data.

In dealing with data we have three steps. Hopefully, all our data should be as open as possible; you know that "as open as possible, as closed as necessary" is one of the principles behind all the European policies of the last, I would say, 15 years. So, hopefully, all our data should be open.
But if our data are not FAIR, opening might be risky, because of potential misuse or misinterpretation of your data. And if your data are not properly managed from the beginning, from day one, it's almost impossible to make them FAIR, or I would say it's so time-consuming that it doesn't work. You can also read the figure the other way around: first you need to manage your data, because managing data is in the primary interest of any researcher, just to have a workflow which is effective and streamlined; then you have to make them FAIR; and if possible, you have to open them. Being in the EOSC era, and we will see something about EOSC during the presentation, managed data and FAIR data are increasingly overlapping, and we are talking about data which must be FAIR by design. And if you are planning to apply to get funded in Horizon Europe, bear in mind that responsible management of data according to the FAIR principles is one of the mandatory practices in Horizon Europe, so bear it in mind if you are planning to apply.

What does FAIR mean? The article was published in Scientific Data in March 2016. To be findable, you need identifiers and metadata. To be accessible, you need to know where to find the data and under what access conditions, and you need open formats. And I will keep saying it until the end: accessible does not mean open, does not equate to open; it simply means that I need to know where to find the data. Interoperable means that you have to use standards and ontologies, and reusable means that you need licenses and documentation in order for your data to be reusable. And of course, not only for humans but also for machines, so metadata, ontologies, standards and all the above should be machine-readable.

Before going on, and just not to get this wrong, please read, besides the original paper, also this paper about the interpretation and implementation of the FAIR principles; the authors are the same, but it's important, because we said FAIR are principles, so you need implementation, and if you interpret the FAIR principles in a wrong way, the implementation will also be somehow wrong. And if you want to know what FAIR means in daily life, please watch this short video, it's just four minutes, and as usual you have all the links in the slides, just to see how FAIR works. It uses a sort of train metaphor: a FAIR train calling at the station, and the train recognizes what is useful according to the metadata. So metadata are really crucial in a FAIR world.

The focus is on reuse, so on the R of the acronym FAIR. And why is that? Because in 80-85% of cases, data are never reused after they were created. That's why the European Commission is funding the European Open Science Cloud. This quote is from President Ursula von der Leyen last year in Davos. And that's why we have EOSC: this was the day EOSC was launched, exactly three years ago in Vienna, on this day. And what is this European Open Science Cloud? It is seamless access to open-by-default, FAIR data. It's a sort of virtual environment in which data producers, service producers, innovators and ordinary citizens meet, and basically they innovate; they can translate research into benefit for society at large. So think about the "cloud" in, let's say, a computer-science sense: EOSC is not a big box, it is not a cloud, you don't upload anything into EOSC; you simply make your data FAIR.
So EOSC services can find them. This idea of the train calling at FAIR stations is precisely what is key: it is just about giving seamless access to more than 20 million European researchers. As you can see here, the European Open Science Cloud is a supporting environment for open science, and not an open cloud for science. And if you look at the EOSC Strategic Research and Innovation Agenda, you will see that the first objective of EOSC is to make open science the new normal. So it's a cloud, it's an environment to support open science, and not the reverse.

Then the FAIR principles themselves: as you can see here, they are very, very technical, like "data are assigned a globally unique and eternally persistent identifier," "data are described with rich metadata." And in dealing with accessible, I would stress it and keep stressing it: accessible does not mean open; data can be closed. If I'm a researcher interested in this kind of data, I need to know where to find them and under what access conditions. There will be an increasing overlap between FAIR and open, as we saw, at least in the European framework, where the principle is "as open as possible, as closed as necessary," but there will always be perfectly FAIR closed data, and not just for, let's say, privacy or personal or sensitive data. One example I got while giving a seminar to biologists: we were talking about citizen science and people recording with their smartphones the migration of birds in the sky. And I asked, why can't you make this kind of data open? And they laughed and told me: and what about hunters? So for any reason you can keep your data closed, provided that they are perfectly FAIR. If I'm a researcher interested in bird migration, I need to know where to find the data, even though the data are not open.

So we said that FAIR refers to a set of principles, so it's not a standard. It's not equal to RDF data or the semantic web; it's not equal to open, we have already said and repeated it; and it's not just about humans, I would say the contrary, it's mostly about machines. If you want to know what FAIR is in a nutshell, please look at this infographic, which is not just, you know, colorful and bright: if you click on an icon, for instance on "persistent identifier," you get the corresponding page on the Australian Research Data Commons service, explaining what a persistent identifier is, the tools, the training, and everything about that single principle. There are also FAIR principles for software: as you can see, some were just rephrased, some were discarded, and some are newly proposed, but anyway, we can also talk about FAIRness for software.

This is, I would say, the model of the FAIR digital object of the future, coming from the report issued the same day in Vienna on which EOSC was launched. You see several layers: at the core we have the digital object, then a layer of identifiers, a layer of standards, and the outermost layer of metadata, which is the contextual documentation. If you look at the FAIR principles, you will see that some, the ones you find here in red (this is a slide by Erik Schultes from GO FAIR; he is one of the experts with a capital E in Europe on FAIR), are, let's say, technical, and some, the ones in blue, are domain-specific, but they are strictly interlinked.
And this tells you more about these differences: when you deal with the FAIR data principles, which part is your responsibility as a researcher, and which elements a repository is taking care of. In dealing with FAIR, since FAIR, as we said, is a set of principles, each community should implement the FAIR principles while respecting the specificities of that scientific community. So it's bottom-up, and you should set up this so-called FAIR Implementation Profile, which can then be used by the community. The idea behind the FAIR Implementation Profile is convergence. In the past, nobody told us to use, for instance, the TCP/IP protocol for the internet, but we all use it because it works: it's easy, it works, it's scalable, it's whatever. It's the same idea of converging on the most useful solutions for a specific community. Basically, in a FAIR Implementation Profile, as you can see, the community states which identifier it is using, which metadata schema, which ontology and so on, as we will see later.

But to create a FAIR Implementation Profile, or simply to deal with FAIR data, we all need a new professional profile, which is the data steward. Data stewards are one of the critical success factors in EOSC, and any research-performing organization should set up its own data stewards network, because they support researchers in managing their data. It's a very high-profile new profession, because the data steward should have, as a core competence, competence on domain data, and on top of that he or she also gets transversal competencies on FAIR. So I would say that the perfect profile for a data steward is a PhD, with a strong competence on domain data.

To assess whether your data are FAIR or not, or better, the degree of FAIRness of your data, we are going to see several different tools. The first one is the FAIR self-assessment tool, which is really useful and helpful as, I would say, a first step, just to ask you the right questions about your data: does the dataset have any identifier assigned? Of course, it's just for humans, so you can answer yes, but then if a machine goes and does not find any identifier, your answer is not so useful. Anyway, it's a helpful first step, just to think about how FAIR your data could be. The second one is FAIR-Aware. Again, it's just for humans, so it's you as a researcher answering the questions. If you click on the eye icon, or if you reply no, a short information card pops up explaining, say, what a persistent identifier is, and if you want to know more, you can access more information; it's very basic and very target-oriented.

Then you have machine-actionable tools to assess the FAIRness of your datasets. Here you simply put in the DOI of your dataset, and the FAIR maturity evaluator checks its FAIRness; if a check fails, it gives you back the reason why it failed and what you have to address in order to make your data FAIR. Another one is F-UJI; again, it's in beta, but you simply put in your DOI and the system runs the checks. FAIR Enough not only checks against the FAIR principles but also gives you a sort of bonus: you can see here the booster icon, meaning that you are FAIR and the system found something more than simple compliance with the FAIR principles.
If you want to make your data FAIR, you have some tools; I will show you at least two. One is the FAIR Cookbook. The FAIR Cookbook is an ongoing project from a European project based at the University of Cambridge, so it's in the making; I took a screenshot, but today it could look different, because it's an ongoing project. And I find it really useful because you have single recipes for single aspects of FAIR: how to make FAIR, for instance, the metadata schema, or something like this. So again, very practical and very target-oriented. The second one is the Data Stewardship Wizard, which is also useful to draft a data management plan, which, as we will see, is the tool to make your data FAIR. It's a wizard, so it branches upon your answers: if I answer yes, it opens one path; if I answer no, it opens another path, and it gives you external links to potentially useful tools. It also opens the book Data Stewardship for Open Science by Barend Mons, who is the expert with a capital E, not just one of the experts, on FAIR data; the relevant book chapter is right there, with the dos and don'ts. So again, very simple, very practical, very target-oriented: basically this wizard guides you in making your data FAIR. And in the end, you don't have to write anything: the system automatically extracts the relevant information to fill in your data management plan. So you don't have to draft the data management plan yourself; the system takes the relevant information from the wizard itself, as we will see at the end when dealing with data management plans.

So, in order to be findable, and please remember that this course on FAIR data usually lasts about three hours, so I had to cut, cut, cut, and now I'm running, but anyway, just to give you a hint of what FAIR is: in order to be findable, you need metadata. Metadata can be descriptive metadata, provenance metadata, technical metadata, rights and access metadata, preservation metadata, citation metadata, whatever. You can consult this page on the Australian Research Data Commons to learn how to deal with metadata. If you don't know which metadata standards are in use in your community, please refer to the RDA metadata directory. I think you are all familiar with the Research Data Alliance: it's a bottom-up organization, made of interest groups and working groups, and it issues recommendations and standards and so on; it's really bottom-up and community-driven, so if your research is data-driven, RDA is a good place to be. They maintain this directory, so you can look up, for instance in physics, or in environmental sciences, climatology or whatever, which standards are in use by the community, because don't forget this idea of convergence: if the community is using a standard and this standard fits our research, please adopt the standard; don't reinvent the wheel or recreate it from scratch. One of the, I would say, most useful tools for dealing with metadata is CEDAR. Have a look at CEDAR, because it's a system which makes it really easy to collect and use metadata: you see it opens drop-down menus with controlled vocabularies, so it really makes it easy to deal with metadata.
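To make the metadata point concrete, here is an illustrative Python sketch of a minimal descriptive record, loosely modeled on the DataCite metadata kernel; the field names follow DataCite conventions but are simplified, and the DOI, creator and title are hypothetical. Check which schema your community actually uses (see the RDA directory above).

```python
# An illustrative example of the descriptive metadata that makes a dataset
# findable, loosely modeled on the DataCite kernel (simplified). All values
# are hypothetical; use the schema your repository and community prescribe.
import json

record = {
    "identifier": {"identifier": "10.5281/zenodo.1234567",  # hypothetical DOI
                   "identifierType": "DOI"},
    "creators": [{"name": "Rossi, Maria",
                  "nameIdentifier": "0000-0002-1825-0097"}],  # example ORCID iD
    "title": "Bird migration observations, spring 2021",
    "publisher": "Zenodo",
    "publicationYear": 2021,
    "subjects": ["ornithology", "citizen science"],
    "rights": "CC0-1.0",  # see the licensing discussion later in the talk
}

# Serialising to JSON keeps the record machine-readable, which is what the
# FAIR principles require of metadata.
print(json.dumps(record, indent=2))
```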
You also need persistent identifiers, and you may be used to a DOI, a Digital Object Identifier. What's the rationale behind assigning a digital object identifier? That research can then work by, let's say, bricks, building blocks. For instance, when you deposit a protocol and assign a DOI to your protocol, you don't have to rewrite the protocol every time you write a paper; you simply cite it by its DOI, by an identifier. The same is true for you as a researcher, so please use the ORCID iD, because the ORCID iD is a powerful tool to link you as a researcher with all the other identifiers and services.

In order to be accessible, and bearing in mind that accessible does not mean open, you need somewhere, a box, in which to put your data: you need a repository. Zenodo is the repository I would recommend, because it's run by CERN in Geneva, it's completely free for users, free up to 50 GB per record. You can also create a community, for instance with the acronym of your project, and you can choose different levels of access: closed, embargoed, restricted, open, whatever. Dataverse is also very good, because you can federate different Dataverse instances, for instance in different institutes. You also have commercial services like Dryad or figshare. If you are looking for a repository, if you don't know whether a data repository is in use in your community, please refer to re3data, which is the Registry of Research Data Repositories, and you will find more than 2,000 data repositories. Another place to put your data is a data journal; there is a growing number of data journals. They basically publish only datasets, with a short explanation, like one page, and the only mandatory section is the reuse potential of your dataset. But look at this paper: it's a way to put data into the scholarly communication system, because it's a publication in a journal, so it also counts for evaluation purposes and so on and so forth.

In order to be accessible, you also need open formats, so you have preferred formats. This is for long-term preservation, because your data should hopefully be reusable for the next 10 or 20 years. The typical example: don't use a Microsoft Excel file, but a CSV file, or a TXT or PDF; the format should be understandable and usable by anyone. This is one of the recipes of the FAIR Cookbook, "from proprietary to open standard data format": as you can see, you have the difficulty level, the reading time, the recipe time, and a hands-on part if there is executable code inside, so it's written really like a recipe (a minimal sketch of this step follows below).

In order to be interoperable, you need standards and ontologies, and if you're not familiar with these terms, but I'm sure you are, you can refer to this guide. The main tool to add ontologies to your dataset is RightField, a tool to add ontologies to your spreadsheet; this is another recipe from the FAIR Cookbook we saw before, and this is the main tool you should use to be interoperable. Then there is the FAIRsharing registry: as you can see, it contains standards, databases, policies, collections, whatever, and under standards you can find ontologies, metadata schemas, protocols and so on. So again, if you look there and find what your community is using in order to make your data FAIR, please converge on those standards; and if you don't find it, you can suggest it and have it included in FAIRsharing.
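As referenced above, here is a minimal Python sketch of the "from proprietary to open standard data format" step; it assumes pandas (with the openpyxl engine) is installed, and the file names are illustrative.

```python
# A minimal sketch of converting a proprietary Excel file to plain-text CSV
# for long-term preservation. Assumes pandas and openpyxl are installed;
# file names are illustrative.
import pandas as pd

def excel_to_csv(xlsx_path: str, csv_path: str) -> None:
    """Read a proprietary .xlsx file and write it out as an open CSV file."""
    df = pd.read_excel(xlsx_path)  # uses the openpyxl engine for .xlsx
    df.to_csv(csv_path, index=False, encoding="utf-8")

excel_to_csv("observations.xlsx", "observations.csv")
```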
To be reusable, you need documentation. Documentation is basically what you put in the README file. It's in your primary interest as a researcher to properly document your data and your datasets, because by associating the right documentation with your dataset you avoid misuse of your data, and you also preserve the integrity of your data. In the documentation you also explain the whole process: the tools you used, maybe the software, the code you used to process your data, whatever; basically what you put in the README file (a small sketch of generating such a file follows at the end of this passage).

If not, are you familiar with open lab notebooks? I think they could be the future of scholarly communication, RStudio or Jupyter, because in an open lab notebook you can put everything: you have the descriptive text, but you also have the data, you have executable code, you have basically anything relating to your experiment, to your research. So my question would be: do you still need journals to publish your research, once you can make your open notebook public? But anyway, this is protocols.io, one of the tools we use in open science to convey this idea of building blocks of research. So once you deposit your protocol, as I was saying before, it gets a DOI, and then you can simply cite it. And if you want to address this idea of reproducible science, of FAIR science, from a wider perspective, there is The Turing Way, which is again a bottom-up initiative, a book co-authored by the community, and it deals with FAIR data, data management plans, reproducible practices, ethical aspects, whatever. So it's really, I would say, a starting point in making your research FAIR, because we used to talk about FAIR data, but actually every component of the research cycle should be FAIR.
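Picking up the README point above, here is a minimal Python sketch of generating such a documentation file; the section headings are illustrative, not a formal standard, and all values are hypothetical.

```python
# A minimal sketch of generating README-style dataset documentation.
# Section headings are illustrative, not a formal standard.
README_TEMPLATE = """\
# Dataset: {title}

## Provenance
Collected by: {creator}
Collection period: {period}

## Processing
Tools / code used: {tools}

## Reuse
License: {license}
How to cite: {citation}
"""

with open("README.md", "w", encoding="utf-8") as fh:
    fh.write(README_TEMPLATE.format(
        title="Bird migration observations, spring 2021",       # illustrative
        creator="Maria Rossi (ORCID 0000-0002-1825-0097)",      # example ORCID
        period="2021-03 to 2021-05",
        tools="Python 3.10, pandas 1.4",
        license="CC0-1.0",
        citation="DOI 10.5281/zenodo.1234567",                  # hypothetical DOI
    ))
```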
To be reusable, you also need licenses, and this specific part of the R in the acronym FAIR would require a lesson, or maybe an entire course, because the legal aspects of managing data are really complex. This is something I know: when I show the next figure in a physical room, it's like a bomb, because you see researchers jumping in their seats: raw data are not protected by copyright. Copyright protects only creativity. So on raw data, as on information or on a mathematical formula, there is no copyright, because there is no creativity. If you have a database, which is defined in the European directive as "a collection of independent works, data or other materials arranged in a systematic or methodical way," you have the protection of the sui generis right, which lasts 15 years and basically protects the substantial effort in obtaining the data. Then, if you have a creative database, you also have the protection of copyright, but what copyright protects in this case is the structure: the creative part is the selection and arrangement of the data, and never the raw data, the content itself. And I know it's difficult, because researchers tend to think of their data as "my data." But you can have other forms of legal protection, like contracts or agreements, on your data, just not copyright.

If you want to know more about this idea of data protection, there is a wonderful paper by Thomas Margoni, who is a lawyer and an expert in copyright law, with Nassila Bastida, and Thomas also published three guides provided by the OpenAIRE project: what is research data and the protection of research data; how do I license my research data; and can I reuse someone else's research data. These are three very helpful guides for dealing with the legal aspects and legal protection of your data. Then you have the Creative Commons fact sheet on data, explaining why the CC0 license, the so-called dedication to the public domain, is the only legally suitable license for your data, given that, as we said, there is no copyright. So even a CC BY license might not be legally right, which does not mean that you don't have to cite the source: applying the CC0 license to your data does not release anyone from being academically polite. Everything is explained very clearly in this Creative Commons fact sheet.

And just to finish this short presentation on FAIR: to make your data FAIR, and basically to manage your data, you need a data management plan, and a growing number of funding organizations are requiring one. What is a data management plan? It is a structured way to think about your research from the perspective of your data: how do you collect them, how do you preserve them, how do you describe them (so the metadata schema), how do you share them, and if you can't share them, if you have to keep your data closed, why, the reasons why you have to keep them closed. It's a powerful tool, because if you set clear rules from the beginning, your whole research process will run smoothly, all the more so if you are in, let's say, collaborative research, because then you have clear rules for all the partners. The data management plan is a living document, so it needs to be updated any time the conditions of the research change.

You have several tools, but the two I would recommend are DMPonline and the Data Stewardship Wizard we already saw in the previous slides. Here you have two videos, tutorials, explaining how they work; they are really very useful tools because they guide you in drafting your DMP. You also have tips and tricks. Look for instance at number seven: you can't copy a DMP, because every research project is unique. And look also at point nine: it's okay to say so. So if you don't know, for instance, the expected size of the data you are going to generate in your project, you simply state it: at this stage of the project we can't estimate the volume of the data we are going to generate. Then in the first update of the DMP you can revise this statement. And why is it so important, for instance, to estimate the volume of your data? Because there might be costs to preserve your data; as I told you, Zenodo is free up to 50 GB per record. And bear in mind, when you deal with your data, that the principle is "as open as possible, as closed as necessary."
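Since the Data Stewardship Wizard exports structured DMPs, here is a sketch of what a machine-actionable DMP fragment can look like, loosely following the RDA DMP Common Standard; the fields shown are a simplified subset recalled from that standard, and all values are illustrative.

```python
# A sketch of a machine-actionable DMP fragment, loosely following the RDA
# DMP Common Standard (simplified subset; consult the standard for the full
# schema). This is the kind of structured record DMP tools can export.
import json

dmp = {
    "dmp": {
        "title": "DMP for bird migration study",   # illustrative project
        "language": "eng",
        "dataset": [{
            "title": "Spring 2021 observations",
            "personal_data": "no",
            "sensitive_data": "no",
            "distribution": [{
                "title": "CSV export",
                "format": ["text/csv"],             # open format, per the talk
                "byte_size": 0,                     # unknown yet: say so, revise later
                "license": [{"license_ref":
                    "https://creativecommons.org/publicdomain/zero/1.0/"}],
            }],
        }],
    }
}
print(json.dumps(dmp, indent=2))
```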
And why not open not only data? You can make your entire workflow open, as you can see in this rainbow created by Bianca Kramer and Jeroen Bosman from Utrecht University. You have the tools to open up every step of your research, and if you are interested, of course, we can have another meeting about opening up the entire workflow. And that's it; that was my mission impossible for today, just to leave more time for the Q&A section. Thank you.

I see something in the chat, but maybe, Eva, do we have something in the Q&A also?

I'm checking. Well, thank you for your excellent presentation. At the moment, I can't see any questions in the Q&A section. So, participants, if you have questions, please post them in the Q&A section, or also in the chat if you want to.

There is a question about Zenodo, I see here. No, the limit of 50 GB: if you look at the policies and the frequently asked questions in Zenodo, they say, if you have bigger datasets, please contact us. So I think you can, you know, negotiate; it's not the maximum size of the deposit, it's only that if you have bigger datasets you might have to pay, but they say please contact us, so please contact your colleagues at CERN and they will tell you whether there are costs or not.

Nicoletta, last week there were some unanswered questions, but unfortunately your colleagues didn't manage to send me the questions until early this afternoon. So now I have the questions, and I will answer them in a file, and then the file will be shared with you.

Yes, thank you for that. And I see a question in the Q&A section concerning non-FAIR data. If I understand this correctly: do you have an example of non-FAIR data?

I would say almost any data researchers are producing now. Maybe your community is more advanced, because, you know, physicists are always ahead: you were the first to share preprints, you were the first to have the conversion, the transformation of your journals into open access journals. So maybe your data are already FAIR, and you can go to the FAIR evaluator or FAIR Enough, put in the DOI of a dataset and check. But the situation in other disciplines is not like this, so I would say that most data are not FAIR. One of the slides I cut was about a report commissioned by the European Commission for the European Open Science Cloud, which estimated that the cost of not having FAIR data, so the situation we are in now, amounts to 16 billion per year. Because you don't find data; industry doesn't find data, innovators don't find data, or if they find data, when they open the file the data are not understandable, because maybe they are in a format that needs a specific software to be read. It was also estimated that researchers spend 79% of their time cleaning data, meaning that when you get data from different sources, you have to clean them, align them, make them usable in the same way, because they come from different silos. So 79% of the time is spent, let's say, making data usable, and that's where the 16 billion per year comes from. So there is a cost of not having FAIR data, and I can tell you that the amount of data which nowadays is not FAIR is more than half, from what I see in workshops and conferences and so on.
But you can check; I gave you the tools.

Then there is a question from Professor Barczargi: I have a question about the difference between FAIR data and open data. Could data published in an article be FAIR?

Not only could they, they should. Nowadays we are in the era of the European Open Science Cloud, and in Europe, if your data are not FAIR, they simply do not exist. So they have to be FAIR. As we said during the presentation, the concept is FAIR by design: your data must be FAIR, and if possible, they should be open. Speaking of journals, journals are increasingly asking you to publish the data alongside your paper, for reproducibility reasons, for transparency purposes, whatever. But as I was saying in the first slides, when I pointed out the three steps: if you make your data open but they are not FAIR, it's risky. Because maybe you don't have a license, so I don't know what I can do with your data. There might not be the right documentation, so I could misinterpret your data or misuse your data. Or simply: say you reuse a dataset, maybe a reference dataset, and you use version 3.5 of the dataset; then I try to replicate your experiment, but I use version 4.5 of the dataset. Your experiment will turn out not to be reproducible, but it was not a mistake; it was simply because I used another version of the same dataset, and that again is about documentation. So making your data open does not mean putting a spreadsheet online. That's why I was saying you need to manage your data as a first step, and this is in the interest of researchers. Managing your data also means using a file naming system and a folder structure system, and, if you are in a collaborative research project, all the partners should agree on file naming, folder structure and so on and so forth (see the small sketch after this answer). Then you have to make your data FAIR, because you need a metadata schema, you need standards to make your data understandable by others; you need, for instance, to document all the tools you used to process your data, if you used a specific software, if you wrote code. And then, if possible, you have to make your data open. But if the data are only open, like a file I put online, it could be useless for anyone, and it could also be risky: not only not useful, but also risky. Sorry, Donatella, I hope I answered your question.
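As mentioned in the answer above, here is a tiny Python sketch of enforcing an agreed file-naming convention; the pattern project_YYYY-MM-DD_description_vN.ext is an assumption, so agree on your own convention with all project partners.

```python
# A tiny sketch of checking an agreed file-naming convention, one concrete
# piece of "managing your data". The convention itself is an ASSUMPTION:
# project_YYYY-MM-DD_description_vN.ext -- define your own with your partners.
import re

NAME_PATTERN = re.compile(
    r"^(?P<project>[a-z0-9]+)_"        # short project tag
    r"(?P<date>\d{4}-\d{2}-\d{2})_"    # ISO date, sorts chronologically
    r"(?P<desc>[a-z0-9-]+)_"           # short description
    r"v(?P<version>\d+)\."             # explicit version number
    r"(?P<ext>csv|txt|json)$"          # open formats only
)

for name in ["birds_2021-04-02_counts_v1.csv", "Final data (2).xlsx"]:
    ok = bool(NAME_PATTERN.match(name))
    print(f"{name}: {'ok' if ok else 'does not follow the convention'}")
```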
Then, Nicoletta: I have a general question. We have been talking about ways and changes; it seems that the pandemic somehow accelerated the importance of open science, and the implementation is working toward this goal. What's your opinion on how long it will take for open science to become a collective reality?

That's the one-million-dollar question. In my opinion, the pandemic not only accelerated the shift toward open science, but it also showed that there is no other way. We got vaccines in a few months only because, just some weeks, or even the same week, after the first virus sequence was obtained in Wuhan, it was available in an open database. So it's only by sharing that knowledge and science progress. Last time we talked about the UNESCO recommendation, which is really very welcome because it's a very strong recommendation: it calls on every member state to dedicate 1% of gross domestic product to supporting open infrastructures and data sharing. And I really don't know what else we need, beyond the pandemic, to show that sharing is the only way to progress, and that you can't hide your results behind a paywall. We didn't have time to get to the module on the current scholarly communication system, but in the current system the average time to publication is from nine to 18 months. So in the most optimistic scenario, we would have seen the first papers on COVID-19 at the end of 2020. But it's nonsense; it does not make any sense. That's why, during the pandemic, preprints were the most used communication channel in the biomedical field: because there is immediate sharing of results, there is immediate sharing of data, and that's the only way to progress. So that's a good question, but unfortunately I don't have the answer. We have been talking about open access since 2003, so it's really almost 20 years, and the big publishers are too powerful, I think. But we keep going. The behaviors of researchers are changing, and, as we were saying last time, research assessment criteria should also change in order to accelerate this process toward openness. So it's a mix of bottom-up and top-down: the practices of researchers change, like using preprints and sharing data, and the rules will also change. But it can also be the reverse: if we change the research assessment criteria, the researchers will follow. It's complex, but I think with bottom-up and top-down together we can get there; I would say tomorrow, or today, but it's not up to me.

Well, thank you, and I have one other question, in fact, since there are no questions in the Q&A section. Very often at the library we experience that especially young scientists approach us and are a bit afraid of opening up their science. They say: if I make my data FAIR and open them up, maybe somebody else will see something in that data that I didn't see. What would you tell these scientists to take away that fear?

Oh, I would say a lot; we would need another session on open science. But anyway: first of all, when you deposit your data, your dataset, or even your preprint, in a repository, you get a timestamp, so you get a sort of scientific priority. Then, if someone else reuses your data, which is what FAIR is all about, because it's about being reusable, they cite you, so you get credit for creating this dataset. And if you didn't see something in your dataset, it's normal: you see, when you look at your daughter or your son, maybe you don't see something that another person can see in them. But that's the point of FAIR data: to be reusable, sometimes in unpredictable ways. Or think of the Hubble telescope; this is another example I always bring up in lessons: an exoplanet was discovered 10 years after the experiment was closed, just because the data were there. So that's the point of making your data FAIR. But be sure that once you deposit, you get a timestamp, so scientific priority; you get cited; you get visibility, and so on and so forth. And then the principle is, again, as open as possible.
So, for instance, you can deposit your data but put an embargo on them, meaning that for three years I will exploit my own data, which are not "mine," but anyway, I will exploit my data, and it's perfectly FAIR. So you can, let's say, reserve the right of using your own data for one year, two years, and then open the dataset after that date. So I think you can find a way; you don't have to be afraid, because I see only benefits in being recognized as a person who shared with the community a dataset, a protocol, a methodology, software or whatever, to make science progress. That would be my answer.

Okay, thank you for that. Are there any other questions?

Of course, it would be easier if our funders or our evaluators would reward a researcher for putting data open.

That would be, yeah, absolutely; that would push researchers to make all their data and materials open. Absolutely. Yes.

Okay, I don't see any other questions right now. Last chance. Okay. Good, I think.

Oh, I forgot to put the link, or do you have the link to Zenodo? Just to be sure that anyone can access the slides. Where is it?

I think we have the link; somebody put it in the chat, but it would be great if you could share that with us.

Oh gosh, where is it? No, it's not here; I was giving you another one. Okay, that's the right DOI. Where is the chat? So here we go.

Wonderful, yeah, with the slides in Zenodo. I saw that someone also downloaded the slides from the first session, so thank you for that. And we've also recorded this session, so we'll put that online so that other colleagues who could not join today can watch it.

Great.

So then, Eleanor, thank you for coming, and thank you for sharing your knowledge on FAIR data with us. I hope to have you back for a seminar or a workshop soon.

That would be really fantastic. As you've seen also in the last session, there are many, many questions being posed, so I think there is a lot of interest also at our center.

Yeah, and also, if you want to start something about data stewardship in your institution, I'm available, because I think you need to set up this network of data stewards as soon as possible.

This would be my question as a librarian: have you already established a network of data stewards at the University of Torino?

No, we are trying to. But it's difficult, because first you have to train them.

Yes, absolutely.

So it's not immediate; that's why I was saying, as soon as you can; the sooner the better.

Okay, and is the library involved in that whole process at your university?

No, because, as I told you, data stewards need to have domain data competences, so you need, you know, PhDs or something like this, really in physics or whatever the field is, because they need to know how to manage the data in their field. And as you can see, in our university, an archaeologist is very different from a medical doctor, so you need an archaeologist and a medical doctor to be data stewards in their respective departments. For you it might be easier, but maybe even in physics you have different disciplines with different data, so you might need more than one data steward. Anyway, it's a path you need to go through.

Thank you, and thank you very much for inviting me. Thank you very much.
Thank you to all the participants.