So, the program for today is that we have an overview of the OpenAIRE guides for research data management, policy, and legal issues; Argos, the OpenAIRE tool for data management plans; Zenodo, which needs no special introduction because you all know about it; the OpenAIRE activities for citizen science, which are of interest for researchers as well; Amnesia, the data anonymization tool; and OpenAIRE Explore, our search engine based on the OpenAIRE Research Graph. Before I give the floor to Elly Dijk to present the RDM guides, I would like to ask Najla Rettberg, my colleague, to give you a short overview of the OpenAIRE guides. Okay, hello everybody. Let me just share my screen, does that work? Can you see my screen? Not yet. Not yet. There we go. Right, you can see it now. Yeah, works fine. Okay, so welcome everybody. Ilaria already set out the program for today; I thought I'd just give you some quick context for this session as well. This is really a kind of roadshow of all our support materials, of which we have many, specifically targeted at researchers. So we're very much interested in your feedback. We're going to send out a Twitter poll to see if you use our guides and how, and if you've already heard of them. We're interested in any questions after each of the speakers, and do get in touch afterwards, of course. So if you go to the support section of our portal, we have a number of different materials. We have primers on open science. We have guides on many different aspects of open science, on open access mandates, on RDM, and on all kinds of other aspects of open scholarship and open science. We also have some use cases that might be useful in your own institutions. We have a series of fact sheets as well, which give short and succinct overviews of these different aspects. We also have FAQs that you can dip into. And we have a helpdesk, so you can ask direct questions on open science to people in your country.
So take a look as the session is going on. And, yeah, we're very happy to have your feedback. So today we're going to show you specifically the guides targeted towards researchers. Thanks for attending, and I'll hand you back to Ilaria. Thanks. So the first speaker for today is Elly Dijk from DANS. She is the coordinator of the Research Data Management Task Force in OpenAIRE. Elly, you can start. Yes, I will share my screen. Can you see it? Yeah. And is my video on? Yeah, we can see both your presentation and your face. Thanks, Elly. OK, thank you. Welcome, everybody, to this presentation about the research data management support we've developed in the Task Force Research Data Management, especially for researchers. Well, as you know, European and international organizations promote open science: not only the European Commission, but also, for example, the International Council for Science, the OECD, and Science Europe, the association of national research funders. They all promote open science, and not only open access to publications; in recent years you see more and more open data, open research software, open education, et cetera. So the European Commission wants to develop the European Open Science Cloud, the EOSC. And the goal of the EOSC is that every researcher in Europe has access to all research data across borders and disciplines. So here you see the EOSC ecosystem. It's an image by the project FAIRsFAIR; you see "turning FAIR into reality". FAIR stands for findable, accessible, interoperable and reusable. And you see that there are many projects and working groups busy developing the EOSC, and one of them is, of course, OpenAIRE. You can see it at the top right. Members of OpenAIRE are also working with other projects, and are often members of those other projects as well. So I come back to the Task Force Research Data Management established in OpenAIRE.
DANS, my institute, is leading this task force; I do it together with my colleague, Anna Lainard. The goal of the Task Force is to increase the knowledge among the NOADs on research data management, open data, FAIR data, and the EOSC. And who are the NOADs? The NOADs are the National Open Access Desks of OpenAIRE: every country has one, and they promote open science in their country and keep contact with the open science communities there. The Task Force now has 31 members, 20 of whom are NOADs. We established working groups, and we looked at the different elements of the research data lifecycle; I will come to that in my next slide. We collected examples of existing good practices in every country. Of course, you understand that a lot of countries and institutes are busy with research data management. And we tried to find gaps and developed new RDM materials. So the output is online guides, webinars, blog posts, et cetera, not only for researchers but also for data librarians, and through them indirectly for researchers as well. As I said, we established working groups. At this moment we have six: DMP resources; a methodology for developing RDM roadmaps; data reuse examples; the importance of long-term preservation; use cases of DMPs from existing projects; and a survey, which will start at the end of this year, on how researchers in your institute manage their data. And of course there is the promotion of the Task Force outputs. It's not only DANS leading these working groups, but also CNR in Italy, the DCC, the Digital Curation Centre in the UK, and the University of Vienna. So if you look at the research data lifecycle, this one is from the UK Data Archive. You can see that when researchers start their research, they are creating the data, and a plan has been made for the research; the plan covers consent for sharing and the collection of data.
That can include the reuse of data, of course, but also making a data management plan. As you know, this is a requirement for Horizon 2020 projects: you have to make a data management plan in the first six months after the approval of your research. Very handy here is Argos, the DMP tool; it is one of the topics later in this session. It's not an output of the RDM Task Force, but it's quite useful. And we also have a very nice infographic on the costs to manage and share data. You can find it in Zenodo, also a topic later. For processing the data, the researcher has to enter the data, digitize it, validate it, clean it, of course with documentation, and manage and store the data for the short term. We have a very nice guide for researchers on how to deal with non-digital data. We've made a webinar with EOSC-hub, another Horizon 2020 project, on data privacy and data management. We have a blog post on electronic lab notebooks, and I will talk about that later as an example. In analyzing the data, the researcher has to think about preparing the data for publication. There is a very nice webinar on Amnesia, because anonymizing the data is important here; Amnesia is not a product of the Task Force either, but very useful. Then we have preserving the data. Preserving the data means storing the data, creating metadata, and trying to find the best format. So we have guides for it: storing sensitive data, finding a trustworthy repository, data formats for preservation, raw data, backup and versioning. You can find them all in Zenodo, and also on the website of OpenAIRE, of course. Then, for giving access to data, it's important to think about licensing and copyright. There are products about that in the other Task Force, the one on legal issues. But it's also good to have persistent identifiers. We have a guide for researchers, identifiers to improve dissemination, one on managing access to sensitive data, and a webinar with FREYA.
FREYA is also a European project on persistent identifiers, and this webinar covered new developments in the field of persistent identifiers. So then we come to the last item, reusing data. This has been a topic of our Task Force for the last few months, so we're still expecting a number of blog posts with data reuse examples. Well, you can imagine it's not necessary for every researcher to reuse data. But imagine that you want to compare the socio-economic situation of a country with 30 years ago; then it would be nice to also have the data from 30 years ago. So then I come to a few examples. I already mentioned the electronic lab notebooks. This is a very handy blog post that describes electronic lab notebooks, and it's very useful because it also says which tool to choose and gives examples. You see the same with data formats for preservation. In this guide, the context is described, why it's necessary, but also how to deal with it, with examples. Below you also see the types of data and recommended formats. The type of data can be documents, images, or videos: what is the best format to store your data? Well, the same goes for how to find a trustworthy repository for your data, also very practical. You know that by default Horizon 2020 projects participate in the Open Research Data Pilot, so they have to submit their data to a repository. And here you can find some steps that you can follow. Use a disciplinary repository if there is one, because then they have the right metadata structure for you. You can use an institutional repository. You can put it in Zenodo, or you can search for a repository in the re3data.org portal. And try to find a trustworthy repository, meaning that the repository will take care of your data for the long term. That is, of course, very important. So we have more outputs. I won't mention everything, but there are blog posts on institutional RDM support.
This one is more for the data librarians, with examples of the support in different institutes in different countries. There is a very nice diagram, which I will show you later: deposit your data in a data repository for long-term preservation, and why, when, how, what, and where to deposit. And we expect an overview of European Commission DMPs in the repository of the University of Vienna, so researchers can look at other DMPs and see how they can be filled in, for example. And here is that nice diagram, deposit your data in a data repository, and what to deposit, for example. There you can find the criteria to decide what data to keep, for example because your institute requires that you keep the data for 10 years, or other reasons. So this is the whole output of the Task Force: the guides, the webinars, and also the blog posts. You can find it all on the OpenAIRE website, on the web page of the Task Force. And of course there is much more; there are other guides and fact sheets and FAQs. So you can find a lot of information that can be useful for you when you're doing research, and not only Horizon 2020 research, but also other funded research. So that was my last slide. Thank you. Thank you, Elly. So this is an invite for all the participants to, yeah, check the pages that Elly presented and put any questions in the Q&A section that you find at the bottom of your Zoom window. Thanks a lot, Elly. Okay, yeah, stop sharing. Thank you. So I will move on to Marina Angelaki from EKT in Greece, who is leading the policy task force and will also present the outcomes of the legal task force. Okay. Thanks, Marina. Okay. So I guess you can hear me, right? Yeah, I can hear you, but your presentation is not in full screen. Okay, now it is. Okay, great. So thank you, Ilaria. So welcome, everybody.
And thank you for your interest in the work that we're doing in the task forces. My presentation will focus on the activities of the legal and policy task force, and in particular on the work that we've done on the legal aspects. As you can understand, the work of the RDM task force and the work done in the context of the legal and policy task force are obviously interlinked. The legal and policy task force is led by Thomas Margoni from the University of Glasgow, Prodromos Tsiavos from the Athena Research Center, and myself from the National Documentation Center, or EKT. The aim of the task force is to support researchers, obviously, but also legal support staff and policy makers. We aim to support research performing and research funding organizations in the adoption of open science policies, by highlighting and discussing with them the main elements that a policy should include, but also by highlighting the legal aspects involved in such a task. Obviously, a key issue is the alignment with the Horizon framework, and overall to discuss all the aspects one needs to take into consideration when making research outputs openly available. The policy and legal task force has a total of 25 members, 15 of whom are NOADs. As Elly mentioned before, the NOADs are the National Open Access Desks; there is one in every country. They are a key element of OpenAIRE, but also of this effort, as the aim of the task force is on the one hand to enhance the expertise and competencies of the NOADs so that they can in turn support their own communities. On the other hand, our work relies a lot on the NOADs, in the sense that we seek their feedback, and the materials that we produce stem to a large extent from the feedback we get from the NOADs, based on their needs and the requests that emerge from their interaction with their communities.
So regarding the main outputs and outcomes of the task force: we have developed a toolkit for researchers on legal issues, which I will talk more about in a few minutes, and also a toolkit for policy makers on open access and open science. Both of these toolkits are available on the OpenAIRE portal, but also through the NOADs. We have also organized a number of webinars focusing on policy developments in different OpenAIRE countries, where we had the opportunity to share experiences and show what worked and what didn't. In particular, over the past year we organized two webinars, last April and in May, focused on research data regulation, highlighting issues such as the ownership, access, storage and reuse of research data. Taking the pandemic into consideration, we also used examples from the biomedical sciences and medical data as practical examples. You can read more about the webinars in the related blog post, and you can also find other blog posts that relate to national developments on policy aspects. And we have produced a number of guides focusing on legal issues. Before moving to the legal issues, I will mention that as the basis of the toolkit for policy makers, you can find on the OpenAIRE portal policy templates targeted at research funding organizations and research performing organizations that wish either to adopt an open science policy, or, for RPOs and RFOs that already have one, to align their policies with the European framework. So they can use this material in their efforts to adopt policies, and we have also developed a checklist so that RPOs and RFOs can check their level of readiness in terms of their open science policies. So our efforts during this past year have focused more on the legal aspects, as I mentioned before, and we have produced a number of guides.
These are mainly targeted at researchers, but obviously they can be of use to other stakeholders, whether we're talking about librarians who support researchers or anyone else. What we had in mind in producing these guides is the fact that most of us, and I include myself, are not legal experts, and we sometimes feel not very at ease when we have to deal with legal aspects while trying to make our research outputs open. So these guides are an effort to support the community by providing answers, in a non-technical manner, to a number of questions that usually come up. The first of these guides, "How do I know if my research data is protected?", provides a definition of what research data are, as we know that different disciplines use different types of data, so confusion sometimes arises as to what constitutes research data. It explains how the different rules on research data may impact their use, and it also presents what copyright law is, who owns the copyright, and what copyright owners can do, among other things. The second guide, "How do I license my data?", explains what the Creative Commons licenses are, how licenses can be applied to research data, and how one can use these licenses for the purpose of making research outputs open access. The third one deals with the reuse of research data and tackles issues relating to the reuse of protected datasets, how to use a dataset that has no license, and what the risks are in such a case. In terms of our future work, we would obviously like to provide additional support, either by producing additional guides or by organizing webinars, on the basis not only of our NOAD network's needs and requests; we feel this webinar is also a nice opportunity to seek your input and your feedback. We would obviously like to know whether you have used any of these guides, and whether you feel there are any additional support materials that we could produce as OpenAIRE.
And we always seek to find synergies with other organizations and initiatives, and to provide our input, based on our expertise, on the policies that are being developed. So this is all from my side. Thank you very much for your attention. Thank you very much, Marina. And this is again an invite for our participants to have a look at our Twitter channel, because there's a poll about the usage of the guides. In case you, your institutions, or some of your colleagues have been using one or many of these OpenAIRE guides, please let us know, because we are particularly interested in understanding their usability and whether there's something we can improve about them. So thank you very much. Our next speaker is Elli Papadopoulou from the Athena Research Center in Athens, Greece, who will speak about Argos. That's the OpenAIRE DMP tool that has been recently relaunched, so I'm particularly excited about that. Elli, thanks. Thank you, Ilaria. I hope you're listening. Can you hear me? Yes. Yes, yes. Hi, everyone. And thank you very much for taking the time to participate in this session today. I'm going to be talking about Argos, which is a service that OpenAIRE has developed, and it's all about data management planning. This little girl that you see is not me, actually; I didn't talk to the designers and tell them to draw me, this is just an avatar that we have. This is me, as you can see in the camera, but now I will stop the video so that we all have a better connection, and I will move on to my presentation. So let's see what Argos is about. My presentation will start with a brief introduction of why we developed the tool, and then I will move on to a few words about the key features that stand out in Argos.
Then: how OpenAIRE interacts with Argos and where Argos stands within the OpenAIRE ecosystem, then the service enhancements and what you see in the Argos interface, and I will conclude with a few next steps. Let's see what Argos is and why it was developed, the need that it addresses. We all know that in recent years there's been a huge demand for research data management, so as researchers, as students, and as people involved in research data management, we need to understand the different steps of the research data management lifecycle, as you can see here. This is an example. But nowadays we also need to understand the proper way to manage and handle our data throughout the research lifecycle, what the open aspects are and in which steps we can find them, and similarly how the FAIR principles map onto the research data management lifecycle, so that we can follow both open and FAIR principles in order to achieve FAIRness of data and of all the other outputs that we produce in our research. This is what EUDAT and OpenAIRE actually collaborated on: developing an open source software that is configurable and extensible and provides researchers with more flexibility in the handling of the data they include in a data management plan. So Argos, as you can see, I hope you can see my cursor, is built on this software, the OpenDMP software, and provides this platform through OpenAIRE to create machine-actionable DMPs, meaning DMPs that can then be measured: you can understand how the DMPs are used, you can see how DMPs evolve over time, and there are other functionalities, like publishing DMPs, and I will talk about that in a few minutes.
Yes, so let's see how you can use Argos as a researcher. The features of Argos that stand out are the following. First and foremost, in Argos we differentiate DMPs from datasets. That means you will find two different editors: one for the DMP, the DMP editor, and the other is the dataset editor. Why are we doing this? In the DMP editor you add information that has to do with the scope of your DMP: why it is created, who is involved in its creation, and all the basic information about the research and the data management plan you're creating. Then in the dataset editor you can have more specialized information, for example about how you have applied the FAIR principles and open access principles when managing and handling your data. This is very useful: it means you can have a DMP that contains more than one dataset description. For example, you can have a DMP with a dataset described with specific metadata standards for archaeology, because you have a collection of archaeological data you are describing, and then let's say you also have some sensitive data to describe, so you can create a separate description for that data and actually hide it before you publish the DMP or share it more broadly and publicly. This is very handy. It also allows you to copy, paste, and move dataset descriptions around into different DMPs. So let's say I want to reuse this sensitive dataset description in a different DMP that has a different context: I can easily copy it and put it in a different DMP that I'm working on in Argos. So it makes the reuse of dataset descriptions easier. Another key feature is that a DMP can contain more than one template, and this is important for at least two reasons.
One is that you may be working for an international project that has received funding from many funders, big funders, all or some of which require you to create data management plans according to their requirements. You are able to do that using Argos: you can select more than one template to describe your datasets, and you can allocate them depending on where you want to add them and which template you want to describe your dataset with each time. So one case where you can use this feature is international projects; the other is, let's say, a multidisciplinary project that collects, reuses and generates data of many types, like simulation data or statistical data, different types of data and data from different disciplines. For example, let's say I have a project that deals with both social data and archaeological data. I can select as many templates as I want based on the discipline: I can select, for example, the generic Horizon 2020 template, and then a different one, the Horizon 2020 ARIADNEplus template for archaeological data, which goes into more detail on how the Horizon 2020 template can be used for archaeological data. Another thing you can do with Argos is easily select resources that come from OpenAIRE and from EOSC, through the APIs we're using, without having to leave our interface. You can learn about specific RDM concepts, like what metadata are and where to find metadata standards, and you can easily select them from the editors and add them to your template without having to go off to different sources. Of course, you're still able to do that too. Another thing is that Argos supports collaborative writing.
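The structure described above, one DMP holding several dataset descriptions, each following its own template, and descriptions being copyable between DMPs, can be sketched as a small data model. This is an illustrative sketch only, not Argos's actual internal model; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDescription:
    """One dataset description inside a DMP (names are illustrative)."""
    title: str
    template: str            # e.g. "Horizon 2020" or "Horizon 2020 ARIADNEplus"
    sensitive: bool = False  # sensitive descriptions can be hidden before publishing

@dataclass
class DMP:
    """A data management plan holding any number of dataset descriptions."""
    title: str
    funders: list = field(default_factory=list)
    datasets: list = field(default_factory=list)

    def copy_dataset_to(self, dataset_title: str, other: "DMP") -> None:
        """Reuse a dataset description in another DMP, like Argos's copy/paste."""
        for ds in self.datasets:
            if ds.title == dataset_title:
                other.datasets.append(
                    DatasetDescription(ds.title, ds.template, ds.sensitive)
                )
                return
        raise KeyError(dataset_title)

# A multidisciplinary DMP mixing a generic and a discipline-specific template:
dmp = DMP("Multidisciplinary project", funders=["European Commission"])
dmp.datasets.append(DatasetDescription("Survey data", "Horizon 2020"))
dmp.datasets.append(
    DatasetDescription("Excavation records", "Horizon 2020 ARIADNEplus")
)

# Reusing one description in another DMP with a different context:
other = DMP("Follow-up project")
dmp.copy_dataset_to("Excavation records", other)
```

The point of the sketch is the one-to-many shape: templates attach to individual dataset descriptions, not to the DMP as a whole, which is what makes multi-funder and multi-discipline plans possible.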
That means you can invite your colleagues, collaborate with them, and manage the workload in the writing process of the DMP. It also supports the JSON format. You can export your DMP in lots of formats, like text, PDF, and XML, but the most important one for interoperability reasons is the JSON format that is compliant with the RDA DMP Common Standard. Thanks to that, if you export a DMP that you created in Argos and upload it to another RDA-compliant platform, you will be able to use it and continue working without noticing any difference in the information you provided, and without missing information. And vice versa, of course: you can download something from an RDA-compliant platform, import it into Argos, continue working from Argos, and deposit it to Zenodo, et cetera. This is similar to what you do, for example, when you're working on a deliverable: you start creating a text document in Google Docs, then you download it, upload it to OneDrive, and continue working there. This is what we're trying to achieve: to facilitate an uninterrupted process for researchers and allow them to move around and choose the tool they want without losing any vital information. Also, you can assign DOIs using Argos. We treat DMPs as outputs, as I said in the beginning, so you can assign your DOI and your license, and also have different versions: freeze the DMP that you're working on at any given time and share it with your colleagues, or deposit it in Zenodo directly from Argos, and then continue working by creating a new version. You can do this at any time, so we keep track of the different versions of the DMP. I don't know how much time I have, but I will maybe start talking a little bit quicker. In Argos you can have the role of either a DMP manager or a DMP collaborator.
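To make the interoperability point concrete, a machine-actionable DMP export along the lines of the RDA DMP Common Standard is just structured JSON that any compliant tool can parse. The snippet below is a heavily simplified sketch showing a few representative fields only; consult the actual RDA DMP Common Standard schema for the full, authoritative structure, and note the identifier used here is a placeholder.

```python
import json

# Simplified, illustrative maDMP document: a top-level "dmp" object
# containing plan metadata plus a list of dataset descriptions.
madmp = {
    "dmp": {
        "title": "Example data management plan",
        "created": "2020-11-01T10:00:00",
        "modified": "2020-11-15T12:00:00",
        "dmp_id": {"identifier": "https://example.org/dmp/1", "type": "url"},
        "dataset": [
            {
                "title": "Survey responses",
                "personal_data": "no",
                "sensitive_data": "no",
            }
        ],
    }
}

# Serializing to JSON is what lets another RDA-compliant platform
# import the plan without losing the information entered in Argos.
serialized = json.dumps(madmp, indent=2)
```

Because the structure is standardized, a round trip (export from one tool, import into another) preserves every field, which is the "uninterrupted process" described above.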
The differences are not that big: both managers and collaborators have full edit rights, they can add information and templates, discard information, and save information. But the DMP managers, meaning those who initially created the DMP, are the only people who can finalize the DMP and publish it to Zenodo. That is the only difference, let's say. Collaborators are those who have been invited by the managers to work on a specific DMP. Now let's see how all of this is possible. Many things are, but these are the key things made possible by a few integrations with the OpenAIRE ecosystem. We are tightly connected to the OpenAIRE ecosystem and make use of other underlying services, such as PROVIDE, for example. There is Zenodo: as I said, we integrate with Zenodo, and directly from Argos you can close the DMP lifecycle, publish the DMP from Argos to Zenodo, get your DOI, and get cited. We are progressively integrating with other OpenAIRE services, like PROVIDE; soon, in the next couple of months, by the end of the year let's say, we will have the option to deposit from Argos into all OpenAIRE-compliant repositories. I'm not going to go into much detail on all the other services, but we also integrate the Open Science primers and the outputs that were mentioned earlier into our templates, so that we provide more guidance to researchers to understand the basic concepts, like what a license is, which one they could use for their data, what the standards around it are, and how they could use it. We try to incorporate all the guides into the templates that we have included in Argos so that we make navigation and understanding easier. And also thanks to the NOADs, I've listed some of them here because there are too many and I couldn't fit them all.
Thanks to the NOADs, we also have Argos translated into different languages. So if you go to Argos now, you might already see your language, Argos translated into your language, which is very good and very useful. And here I've added the OpenAIRE Research Graph. This is from the OpenAIRE Research Graph; it shows the different relations and the different entities included in OpenAIRE. We're now working with the Research Graph: we're creating a DMP entity here and making the relationships with the different other entities, like with the research products, so which DMP is associated with which dataset, which publication, which funder, et cetera. We're trying to enrich this open scholarly communication graph that OpenAIRE has. Very quickly, a few of our latest achievements: we collaborated with ARIADNEplus, which I mentioned before. We had a very fruitful collaboration, worked on our templates and DMP tools, and have some good outcomes from it; you can find them in our latest blog post about the Argos and ARIADNEplus collaboration. We took part in the RDA hackathon, and we were very happy to get to know and collaborate with the global research data community, and we actually won the hackathon. This gave us the incentive to release a new version of Argos, for which we would really appreciate your feedback. This tool was created for researchers, driven by the demands of researchers and the FAIR and open community, so we would really appreciate your feedback to make it better for you. There are some ongoing integrations that I won't go into in much detail. If you want to see more templates, please contact us. Some useful resources are listed here. I know that I'm out of time, so thank you very much; I would be happy to answer any questions later. Thank you very much, Elli, also for being time conscious.
Yeah, the release of Argos is definitely interesting for many of the participants, so if you have any questions, please do put them in the Q&A box. And now it's the turn of Alex Ioannidis from CERN, who is working on Zenodo; I mean, I would say that you are the leader of the Zenodo development, right? Yes. Can you hear me? Yes. Yeah, we can hear you. I've been involved in the development and operations and generally running the service for the past two years. Well, okay, so I didn't make a mistake. Your presentation is not in presenter mode, though. Okay, great, thanks Alex. So thank you for joining the session and thank you for your time. I will try to briefly go through what Zenodo is, what we've been doing, and how we've been trying to serve researchers for the past almost seven years, I'd say. So my name is Alex Ioannidis, I'm working at CERN, where Zenodo is also hosted. Zenodo was made for many reasons, but basically one of the ideas is that disaster strikes from time to time: you might have done a lot of research and worked on something for a long time, but sometimes things get tricky and you might lose flash drives or laptops; and even if you physically store things, maybe in libraries or museums, things will always deteriorate over time. So Zenodo is a platform that helps not only with preserving things, but also with publishing, sharing and interlinking research objects with other platforms. The basic principle of Zenodo is that users can upload any types of files; there always has to be a file with a record. We allow up to 50 gigabytes for each record, for each dataset that's uploaded, but you can upload as many datasets as you want. We accept all kinds of file formats: videos, dataset zip files, presentations, PDFs, everything. There's no limit on what we take: if you want to store it and preserve it, we will take it.
So you upload your files, and then you describe them. What we try to do here is to have a very flexible metadata schema which is rich, but at the same time doesn't limit users. For example, you can quickly upload something and describe it very briefly, with the title, the authors and a description, and then it's up to you if you want to expand and add more to the record, for example with funding information. Thanks to OpenAIRE, which does a great job maintaining a big database of grants and awards from different funding agencies, you can always connect your records and your outputs to, let's say, the funding. You can pick from a vast array of licenses. Another very interesting part of this step: sometimes you want to publish something in a journal and cite the data set, but there's the journal review process, where you want the record to exist but you don't want it published yet. And of course, after you publish, you get this citable DOI, which will always resolve to Zenodo, so people can always reliably cite it, share it, and follow it to find the files that you've uploaded. Records are provided in many indexable and exportable formats: JSON, DataCite XML, and some other formats. And of course there's a REST API and an OAI-PMH API, which you can use to access and harvest the content. And of course, we track some usage statistics. So let me give you an example of the kind of reach we have, of how deep we can go in terms of reaching the data in different disciplines.
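As a rough illustration of the describe step just mentioned, here is a minimal sketch of the kind of JSON metadata body Zenodo's deposit REST API accepts (field names follow Zenodo's public API documentation; the grant identifier shown is a made-up placeholder, not a real award):

```python
# Sketch of the JSON body for Zenodo's deposit API (POST/PUT to
# /api/deposit/depositions). Only a few common fields are shown.
def build_zenodo_metadata(title, creators, description,
                          upload_type="dataset", license_id="cc-by-4.0",
                          grant_ids=None):
    """Build a deposit metadata payload as a plain dict."""
    metadata = {
        "title": title,
        "upload_type": upload_type,      # dataset, software, publication, ...
        "description": description,
        "creators": [{"name": c} for c in creators],
        "license": license_id,           # picked from Zenodo's license list
    }
    if grant_ids:                        # optional funding links via OpenAIRE
        metadata["grants"] = [{"id": g} for g in grant_ids]
    return {"metadata": metadata}

payload = build_zenodo_metadata(
    "Example dataset", ["Doe, Jane"], "A brief description.",
    grant_ids=["10.13039/501100000780::283595"])  # placeholder grant ID
```

In real use you would send `payload` as JSON with your access token; here it is only constructed, not submitted.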
Because Zenodo at its core is a general-purpose repository, anybody can upload anything, so we don't discriminate against any discipline; but if you want, you can go very deep and use very custom metadata for the kind of objects you upload. For example, here we have biodiversity records which describe the species and the kingdom, all these very domain-specific fields. And of course we have more general data like location information. Besides that, we understand that research objects evolve over time. We have data sets and software, and we know that with software it's a natural process to always have new releases and always get new versions of things published. This is very tightly integrated into the platform. So, for example, if you have a data set and you add new data or refine the data, you can create a new version. And this is always tracked: each version gets its own DOI, so you can cite a specific version, and there is also a DOI that encompasses all the versions, which allows you to cite the work as a whole. Besides that, we also track the statistics for each individual version, and of course we have statistics for the whole, let's say, research object. Another feature that Zenodo has is what we call communities. Zenodo is like a big collection of all sorts of records from different disciplines, but the idea is that users can create their own smaller repositories inside Zenodo, where they can organize their records better and manage what kind of records they accept and how they describe them.
So for example, if you have a project, or if you're an institution, or for a conference maybe, or a subject, you can create a community once, and you can set what we call a curation policy: what kind of records you accept and what the purpose of the community is. And then you, as the community curator, have the ability to decide what gets accepted and what gets rejected in this community. This is a way for you to manage and curate content. Besides that, from the very beginning we tried to make software a first-class citizen of the research world. We know that a lot of development happens on GitHub, and researchers who work with software want to publish their things on GitHub and keep their workflows there. So we don't want to invade this process and take them out of this world and have them do something else. Instead, we tightly integrate with GitHub, in a way that you just work on GitHub, you publish your releases, you do what you do, and then we automatically archive all of this software on the Zenodo side. It's something where you just flip a switch and then forget about it. And then the software becomes a citable object and can start to receive recognition. Besides that, as I mentioned before, Zenodo is accessible via a REST API and an OAI-PMH endpoint. The idea is that any operation that normally happens on Zenodo via the web platform can also be done programmatically. So you can set up workflows around Zenodo, you can set up automations, and of course you can also harvest Zenodo for your own purposes, to collect statistics or to index something.
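To illustrate the harvesting side just described, here is a small sketch of querying Zenodo's OAI-PMH endpoint (zenodo.org/oai2d). To keep it self-contained, the HTTP request itself is replaced by a canned sample response; in real use you would fetch the constructed URL:

```python
# Sketch: build an OAI-PMH ListRecords URL and parse record identifiers.
import urllib.parse
import xml.etree.ElementTree as ET

OAI = "http://www.openarchives.org/OAI/2.0/"

def listrecords_url(base="https://zenodo.org/oai2d",
                    prefix="oai_dc", set_spec=None):
    """Assemble a ListRecords request URL (no request is sent here)."""
    params = {"verb": "ListRecords", "metadataPrefix": prefix}
    if set_spec:
        params["set"] = set_spec   # e.g. a Zenodo community set
    return base + "?" + urllib.parse.urlencode(params)

def record_identifiers(xml_text):
    """Extract the <identifier> of each record header in a response."""
    root = ET.fromstring(xml_text)
    return [h.findtext(f"{{{OAI}}}identifier")
            for h in root.iter(f"{{{OAI}}}header")]

sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><header>
    <identifier>oai:zenodo.org:123456</identifier>
  </header></record></ListRecords></OAI-PMH>"""
```

The same parsing applies to a live response fetched from the URL that `listrecords_url` builds.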
Also, besides that, on Zenodo records, something that we've been doing lately is displaying citations: we have another service which is basically harvesting citations and tries to aggregate them based on the versioning scheme that we track. So you can see citations to specific versions of records, and you can also see the bigger aggregation of citations that things receive. We try to harvest different sources; there is very good work also done on the OpenAIRE side, which does exactly the same: they harvest many different resources and make them available. And then, to give you a bigger overview of what Zenodo looks like in numbers: at the moment we store about 1.6 million records. There's a lot of text, so publications, preprints and reports; DMPs are also part of this, so the outputs of Argos are among the text outputs in Zenodo. Then images and figures. And we also have, as I mentioned, a lot of software: we just recently reached 100,000 archived software releases on Zenodo. And around 8,000 data sets. This all amounts to about 235 terabytes of data, in 5 million files stored here at the data center. And at the moment we are visited from all around the world; we have around 5 million visitors per year. We also have a COVID response: we try to offer what we can do best to the public. So basically we prioritize all of the requests related to the COVID-19 outbreak. If users want to upload data sets, for example, we offer bigger quotas so they can upload much bigger data sets; if they want to upload data sets more regularly, we try to help them set up scripts and automations and make their lives easier in these difficult times. And of course we also went through the process of curating these records.
And this also happened with the help of Irina and Ravis and Epner, to create a community here on Zenodo which collects all of these records and makes them easily available to others. So, for the future plans of Zenodo: we've seen that it has become a very popular platform, and of course everybody wants to have their own Zenodo, which they can customize and add this and that feature to. And this is what actually happened: Zenodo is open source, so anybody can clone it and set it up for themselves. To tackle this, we said, okay, let's gather all these partners, all these different universities and institutions, and try to make something bigger out of all these efforts. And this is what we're planning to do with the next version of Zenodo: basically combine more than 20 partners, universities, institutions and companies, to build something bigger and enhance all the features that Zenodo already has. For example, communities is quite a monolithic, not very modular feature at the moment, and the idea is to enhance it and add more user roles. And I think the questions are going to be coming later, I guess, so. Yeah, so there are questions coming in the Q&A; we will address them at the end of the presentations, but in case there are some that you think you can answer directly, you can, as a panelist, answer them directly in the Q&A box. Thanks, Alex, for the very interesting presentation. And now we switch to the citizen science activities in OpenAIRE.
The speaker is Eugenia Kibriotis from, I'll try to pronounce it, Ellinogermaniki Agogi in Athens. Well, Eugenia, it's your time. Yeah, sure. I'm just sharing my screen now. Thank you. Hello, everyone. Give me some time. You should see my presentation now. It works. Great. So, hello, everyone. Thank you, Ilaria, for the introduction. This is Eugenia Kibriotis. I work for Ellinogermaniki Agogi, or otherwise you can just call us EA. We are actually an R&D department belonging to a big private school in Athens. And today we're going to present the citizen science activities that we have initiated in the framework of OpenAIRE. So just some directions for the beginning. What we had to do in this task was to design educational activities that could involve citizen science. The trick here was, though, that we had to find initiatives that would fit school settings and that would also fit the interests of students and their teachers. Ideally, we would like to create something that would be easy for the teachers, easily adapted to the national curricula. So we came up with three initiatives. The first one is the school seismograph network, where we gather seismic data. The second one is the Open Schools Journal for Open Science, an open science journal with articles from students and addressed to students. And then, last but not least, an initiative we are calling Bringing Nobel Prize Physics to the Classroom, where we're actually giving students and their teachers access to research data. Let's take them one by one to see what we're doing in these fields. So the first one is about the seismic data journey. As I told you, we have created the network of seismographs. As you can see on this map, we're covering the southeastern part of the Mediterranean, which is actually the part of Europe most prone to earthquakes. Each triangle on this map represents one seismograph.
So actually, we are expanding from the Azores to Israel. The locations are mostly decided based on the seismic activity, on the importance of the seismic activity in each location, and in some cases also on volcanic activity. That's why we have placed seismographs in the Azores, in Santorini (Thira) and in Nisyros, places with great volcanic activity. All the seismographs are hosted in schools; please keep that in mind, because I will explain its importance later. Everyone, not just the schools belonging to this network, even you, if you visit this network, can click on a seismograph, and this is the visualization of the seismic data gathered by that seismograph. All this data is actually delivered to the National Observatory of Athens, so to real researchers who are using these data. So how does that happen? First we have to install these seismographs. So either we, as I told you, find a location that we think is interesting for gathering data, or schools express interest by themselves, telling us that they would like to host a seismograph in their settings. Then we find a teacher we can cooperate with, who will be accountable for the functioning of the seismograph. We install the seismograph, we connect it to a computer so that it starts getting all this data from the seismic activity, and then it spreads and circulates it to the whole network. So actually we're talking about data collection, data that you can take as raw data, as you can see, at any time, which could be very interesting for researchers, but unfortunately not so interesting for teachers and students. Because, I don't know how many of you are familiar with SAC files as a format, but these files are not very easily used by the normal computers that you can find in a computer lab in a normal school. So this format is mostly useful for researchers in the field.
What we're doing to facilitate educational activities is that we host this seismic data on HELIX. A very good example, a paradigm of how that worked very nicely, was HACWIC 2019, which we ran in cooperation with HELIX, where we did an experiment. We asked students and teachers to use open research data obtained from the network in order to create an early warning system, an app that would work as an early warning system in case of an earthquake. I have added an image here just to help you understand a little what we mean by an early warning system. As you can see here, if we say that this is the epicenter of an earthquake, these waves are damaging, but as they travel they become less damaging, while at the same time they are getting closer to big cities. So if we have the data from several spots in the area, we can report and warn citizens living in big city centers to be prepared. Okay, so imagine the return to society and how easily that can happen. So in order to help the teachers, as I told you, we gave them the data from several seismographs for five different big earthquakes that had already taken place. They had to take advantage of this data in order to create this app that could be up and running. Unfortunately, we managed to complete only the first part: we have trained the teachers so that they can work as mentors for their students, but we are still missing the second part, where the students will actually develop this app, because unfortunately COVID had other plans and we are not allowed to have any meetings with the students and teachers at the moment. So this is a remaining task, but it's a very good paradigm of how research data was placed on HELIX and could very easily be used by citizens, in our case students.
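The physics behind the warning window can be sketched with a back-of-the-envelope calculation: fast but weaker P waves reach a nearby seismograph first, and the alert, which travels electronically and effectively instantly, races the slower, damaging S waves to the city. The wave speeds below are typical crustal values assumed for illustration:

```python
# Back-of-the-envelope earthquake early-warning estimate.
# Assumed typical crustal speeds: P wave ~6 km/s, S wave ~3.5 km/s.
def warning_time(sensor_km, city_km, vp=6.0, vs=3.5):
    """Seconds of warning a city gets once a sensor detects the P wave."""
    t_alert = sensor_km / vp    # P wave reaches the nearby seismograph
    t_damage = city_km / vs     # slower, damaging S wave reaches the city
    return t_damage - t_alert

# A station 20 km from the epicenter warning a city 100 km away
# gives roughly 25 seconds of lead time:
seconds = warning_time(20, 100)
```

Even a few tens of seconds is enough to stop trains, open fire-station doors, or get people under desks, which is the "return to society" the speaker describes.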
So when we're talking about citizen science and education, we can actually talk about teachers presenting research data and exposing students to the methodology of scientific research. Imagine how interesting that is in these times, when growing mistrust of science is highlighted. We can have hands-on activities where students themselves take part in the research. But what we were actually missing was giving them the floor. In order for them to feel like real scientists who have been analyzing real research data, we wanted to offer them a platform to publish their work. And that's why we have created the Open Schools Journal for Open Science. We're talking about an international scientific journal from students to students. This is the second initiative that I showed in the beginning. It follows all the rules of a scientific journal, with peer review processes. At the moment we have 233 registered users: 140 students and teachers who have been authors, and 98 reviewers. This is very, very important for us, because in order to keep the peer review processes running, we have to make sure that we have reviewers for as many languages as possible and also for as many fields as possible. We have 240 published items; we're talking about items because they can be either posters or full articles. And we have 13 issues published so far, with some more to be published by the end of this year. This is how an example article looks: we have the title, the authors, the DOI of course, the abstract, and once someone clicks on the PDF, we get the actual article. Authors have to follow the guidelines that we have created, so there is a specific template that they have to follow. It is available in English and of course in the languages that you can see on the left.
In order to help teachers, to make their lives easier actually, because you know it's not very easy for all students and teachers to write in English, and of course because the journal is an international initiative and we celebrate its multinational nature. As I said, we follow peer review processes, and therefore everything is done in confidentiality, but what we can offer reviewers is a certificate of reviewing, the least we can do to express our gratitude for their contribution to our journal. Also, everything that is uploaded to the journal is also uploaded to a Zenodo community, with all the details and all the metadata that Zenodo asks for. Last, Bringing Nobel Prize Physics to the Classroom; this is an initiative that mostly has to do with Zenodo. Actually, we have created educational activities that use research from big science research infrastructures and that can be easily used in class. Some of them even involve the analysis of research data by students, and researchers are actually counting on the return of the outcomes of these analyses back to the scientific community. So you can find here an open dialogue between students and researchers, and not just an open dialogue but actually a productive one, because we can see that researchers often offer some of the data that they can to the educational community, with the great impact that can have both on education and on the forming of future responsible citizens, and also the return to the scientific community through the students' analyses. Here we can measure the views and the downloads of these educational activities, and for the time being we know that 235 resources have been downloaded by teachers and students and have been implemented in class.
And last, we would like to share with you the news of a good-practice example in one of the articles that have been published in our journal. There's a Greek article by some students, you can see their names here, it's a big team, and here you can see the translation of their actual text: they have announced that they have identified an exoplanet orbiting a specific star that this team analyzed for the very first time. So we are happy to share that there has been a new discovery published in our journal, by students. So thank you, that was it from me; this is my email in case you want to get in contact and you are interested. So, any questions? Thank you so much, Eugenia. Yeah, please put in any questions, because I mean the topic is a bit different from those that you heard before, but it's very much connected to them at the same time, so in case you have any questions, please use the Q&A box. The next speaker in line is Manolis Terrovitis, from the Athena Research Center as well, who will speak about Amnesia, the OpenAIRE data anonymization tool. We can hear you, though. And not yet. Let me see if I can unmute you. Yes, yes. Okay, great. It works. I opened the video but not... No worries. That happens. Thank you, Ilaria, for the invitation and for the introduction. I will share my screen and talk to you about data anonymization and the Amnesia anonymization tool. So I'm sharing, and I'm going to put it on. Okay. So maybe you have already heard of Amnesia, but here you will get a chance to get a high-level presentation of what it does. So, as I said in the beginning, Amnesia is a data anonymization tool that is offered through OpenAIRE. And the first question that comes up in many contexts is: why anonymize? What is anonymization? So I would like to go to the definitions of the GDPR and make the distinction between pseudonymization and anonymization.
So pseudonymization is the removal of direct identifiers, of names, of social security numbers and other things that directly pinpoint a person, and their replacement with, say, a random identifier. But given external information, like the mapping that we kept when we did the substitution, or the combined information of other secondary identifying data, like the date of birth and the zip code where someone lives, we can go back to the data and re-identify the people that are described in the pseudonymized dataset. A pseudonymized dataset offers some protection; it is an easy precautionary step that we can take in several contexts, but the data are still considered personal and should be handled with every guarantee that the GDPR requires. Anonymization, on the other hand, is the irreversible transformation of data from personal to statistical. Now there's a lot of discussion, and I will not go in depth, about whether irreversible versus non-irreversible is as clear a distinction as in everyday speech; from a technical perspective the boundaries are not that clear. We may not have any reversible transformation, but we may still be able to infer information about people from anonymized data. But even if we do not pay that much attention to the details, the idea of anonymization is to have a guarantee on how the data are transformed. We would say, for example, that a third party who has background information about the date of birth and the zip code of a person will not be able to identify this person in the published dataset with probability more than 10%. This is a guarantee. Other things might still be inferable, but this is a guarantee that we did the job paying attention to anonymization, and the data that result from these processes can be considered statistical data, which are no longer personal, and we no longer have to take all the measures and precautions that the GDPR requires.
And compared to other methods that we know from the past, like encryption or secure multi-party computation, anonymization is more suitable if we want to reveal the data to third parties that we do not completely trust. Not completely trusting does not mean that they're malicious, but there may be researchers that we do not know, who have not signed a non-disclosure agreement, so by anonymizing the data a data owner is able to deliver them to a wider audience. So it is very important to have meaningful and effective anonymization, and Amnesia does exactly that. So, to advertise Amnesia a bit: we try to make it very user-friendly. Okay, I will accept that, like all of the few available anonymization tools, it is still not very friendly, because it's a complicated procedure, but we have put in effort and we think it's the most user-friendly tool out there. It works locally. You can see it on our site; we have an online version that you can use for demonstration and training purposes, but if you want to really anonymize sensitive data, you can download it and run the whole process on your premises, without the data ever leaving your safe environment. We put effort into giving users many degrees of freedom in customizing the anonymization process. We do have some very specific features based on basic research results we had in the past. So it's the unique tool for set-valued data, that is, data records of arbitrary length. We have a unique variant of k-anonymity for high-dimensional data, which is a common case in practice, called km-anonymity. And apart from the graphical interface, we offer a REST API that allows Amnesia to be incorporated easily into third-party tools. Since it has been up, it has had 32,000 visitors on the OpenAIRE portal, more than 100,000 page views, and 2,000 unique downloads. So this is just a glimpse of its popularity, and the interest is actually growing. We have just launched a new site, so feel free to go there and give us feedback.
And also let us know if you need more documentation, or what is not very easy to understand, because for us who have been working on it, you know, it's a problem for engineers: sometimes things are obvious because we have worked on them for so long, but they're not for the users. So we put an effort into providing good documentation, but I'm afraid that in several cases we fall short, and we want to do an even better job there. Okay, as for the status: we offer anonymization algorithms based on k-anonymity at the moment, and we plan to add other variants in the future. We support several kinds of data. We have recently released a disk-based algorithm that allows you to anonymize very large data sets. This is a quite technical thing, but what it does is process the data set while it resides on your hard disk, so you're not limited by main memory, and this allows using a big data set. Amnesia has been up for quite a long time, so especially in the older features the bugs have diminished, and I think it's quite robust given the complicated process. Now, I gave you the overview; for those who are not familiar, I will also give an example of anonymization with guarantees. This is an example of k-anonymity. On the left part, you see some imaginary medical records where you have, for each patient, the zip code, their age, and their nationality. Even if the names have been removed, if you know the zip code and the age of someone, you can identify the record. In the second table, on the right, we have generalized, that's the term we use: we have replaced the specific values with more general, more abstract ones. And now, even if you know the zip code and the age of a person, you always have four candidate records for them. One difficult task is that you have to provide to the algorithm what we call a generalization hierarchy, which specifies how to replace specific values with more general ones.
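The guarantee in this example can be checked mechanically: group the records by their quasi-identifiers and take the smallest group size. A minimal sketch, with toy data loosely mirroring the talk's table (the actual Amnesia implementation is of course far more involved):

```python
# Sketch of a k-anonymity check: the k of a table is the size of the
# smallest group of records sharing the same quasi-identifier values.
from collections import Counter

def k_of(records, quasi_ids):
    """Return the k-anonymity level of a list of dict records."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return min(groups.values())

# Toy generalized table: zip codes and ages replaced by abstract values.
generalized = [
    {"zip": "130**", "age": "<30", "disease": "flu"},
    {"zip": "130**", "age": "<30", "disease": "asthma"},
    {"zip": "130**", "age": "<30", "disease": "flu"},
    {"zip": "130**", "age": "<30", "disease": "ulcer"},
]
# Every (zip, age) combination matches at least 4 records: 4-anonymous.
```

An attacker who knows someone's zip code and age always faces at least k candidate records, which is exactly the "four candidate records" guarantee described above.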
And the best way is not to have just "this specific value maps to this general one" and that's it, but to have several steps, so the algorithm can generalize as much as needed but not more, so it won't lose much information. For numbers and dates, Amnesia helps you and simplifies this process, but if you have categorical data, for example labels that have to be grouped according to their semantics, this task has to be done by the user, and it's one of the hard parts of the anonymization process. So, some things about our limitations, after some experience. One of the major limitations is that anonymization has not been used extensively in practice, so users do not know what to expect and we do not have much feedback. Even if 30,000 users have visited the site, we don't have much feedback from industry and real-world cases to foresee problems, so this is an ongoing task. It requires some effort to create the rules and customize the solution. And we do not have good answers, because we only create the software, on how to set the privacy parameters: only recently do we have the example of the US Census anonymizing their data, or we can follow the practices of statistical authorities that use similar methods, but there are no guidelines on, for example, what k you should use for k-anonymity. And of course, Amnesia has focused until now on k-anonymity, which has some shortcomings, and there are other anonymization methods which we plan to add in the future, but there's no tool that can actually do everything. So that's all. I've seen several questions, but I guess we can do that... Yeah, we will do that after the last presentation. Thanks a lot, Manolis. Okay, it's always interesting hearing about these things. I mean, the topic of anonymization is definitely very relevant, also because of the GDPR, and I'm always fascinated by how Amnesia can work and help researchers. I'm always happy to present. Yeah, that's good to know.
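A generalization hierarchy for a numeric attribute like age might look like the following sketch, where each level is one step more abstract (the levels and ranges here are illustrative, not Amnesia's actual defaults):

```python
# Sketch of a multi-level generalization hierarchy for ages.
# Level 0: exact value; level 1: decade range; level 2: fully suppressed.
def generalize_age(age, level):
    """Replace an exact age with a value from the given hierarchy level."""
    if level == 0:
        return str(age)                  # exact value, no generalization
    if level == 1:
        lo = (age // 10) * 10            # e.g. 23 -> "20-29"
        return f"{lo}-{lo + 9}"
    return "*"                           # suppressed entirely

ages = [23, 27, 31, 38]
level1 = [generalize_age(a, 1) for a in ages]
```

The anonymization algorithm climbs such a hierarchy only as far as needed to reach the target k, which is why having several intermediate levels preserves more information than a single coarse step.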
Okay, the last speaker for today, before we go to the Q&A, is Argiro Kokogiannaki from the University of Athens, who will speak about OpenAIRE Explore. That's the search engine of OpenAIRE, but yeah, I'll leave it to Argiro for more details. Thank you, Ilaria. Can you see my screen? Yes, it displays correctly. OpenAIRE Explore is the way to access and explore the OpenAIRE Research Graph and its entities. This is achieved through the search and browse functionality. In the last few months, we have worked on updating the user interface and adding new functionalities, and they are all available at explore.openaire.eu. The main functionalities of the Explore portal are search, linking, and deposit. For the current presentation, we will focus on the search functionality and the browse. On the home page, and on the rest of the search pages of Explore, you can find this search bar. We can search in all the content, all the entities of OpenAIRE, but we can easily switch with this drop-down menu to other OpenAIRE entities, and we can also switch from simple to advanced search. Under the term "research outcomes" we have merged together all the research types, like publications, research data, software, and other research products, so we have the option to search for them and see them all together, or to combine one or more of those subtypes. In the simple search, we can use single-keyword search, or we can use quotes for exact-term search, and we can use persistent identifiers like DOI, arXiv ID or PMC ID to search for specific results. In simple and advanced search, there are the filters on the left. The filters are based on fields that depend on the entity that we have selected to search, and they give us an overview of the values and the counts for each field. For each field we show the top 100 values, and if we click the view link, we can see all the values. We can search for a specific value, and we have the option to sort the values by the number of results or by name.
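Alongside the Explore web UI, OpenAIRE also offers an HTTP search API. The sketch below only constructs a query URL for it; parameter names such as `keywords` follow the public API documentation, but check api.openaire.eu for the current interface, and note that no request is actually sent here:

```python
# Sketch: build a query URL for the OpenAIRE search API.
import urllib.parse

def openaire_search_url(entity="publications", **params):
    """Assemble a search URL for a given OpenAIRE entity endpoint."""
    base = f"https://api.openaire.eu/search/{entity}"
    return base + "?" + urllib.parse.urlencode(params)

# Hypothetical example query: keyword search over publications.
url = openaire_search_url(keywords="open science", size=10)
```

The same helper could target other entity endpoints (e.g. datasets or projects), mirroring the entity drop-down the speaker describes in the portal.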
In the advanced search, we can create more complex queries, not just simple keyword searches, and we can use a list of fields; the list of fields depends on the entity that we have selected to search for. And by using the specific fields and values, we can define complex queries. On each page we see the search results; we have changed the grouping and how the information appears. For example, we have added the ORCID information for authors, and we can search for results with the same ORCID. For the results, we have the option to change how many are visible on one page and to change the sorting of the results. We have the option to download results in CSV format; if the search that we have applied has more than 2,000 results, we are able to download the first 2,000 of them. If we click on the title of a result, we go to a more detailed page that has all the information related to the result. We call these pages landing pages. For example, here we see a publication, and we can see the information about the publication. In the tabs, we can see relations with other research results. At the bottom of each landing page, we can see the last date that the record was updated. Again, we can see the ORCID information, and there is the option again to search for results with this specific ORCID. The result may also be the result of deduplication; this means that in OpenAIRE we found more than one instance of the same result and we have merged them into one single record, and here we can see the list of those results that are matched to this record. We also have the option to link this publication with other OpenAIRE entities, like projects, research outcomes, and communities; this is done with the linking functionality. This is a project landing page, where again we can see information about the project.
And in the tabs, we can see relations with other OpenAIRE entities like publications, software, and so on. If we click on a specific entity, we see the most recent results for that entity. And by clicking the View All button, we can go to the search page, where we can apply more filters for the results of this specific project. We can also see statistics for the recent outcomes and use those statistics. We can include the results of this project in our website using the code available there. And we can also download reports in CSV and HTML formats for the same project. In both cases, we can choose what type of results we want to have: we can download all the types of research outcomes or select a specific type like publications. And here we have the option to link this project with other research results. Again, this happens through the linking functionality. We have the option to search for results that already exist in OpenAIRE, or to link with results from Crossref and DataCite. Here we can find all the information about how we can deposit our research in a repository. This is a data source landing page. Again, here we have tabs with relations with entities that are related to this specific data source. We have the option to go to the search page and apply more filters for the publications of this data source. We can see the statistics about the results of this data source and use the statistics related to those results. This is the organization landing page. Again, here we have relations with other OpenAIRE entities like projects, publications, and so on. We can again go and see the results in the search page. And we have the option to download reports for this specific organization in HTML or CSV format. On all landing pages, at the bottom right, there is a link to report an issue. If you click on this link, this page will open.
And what we can do here is specify a field from the record that has wrong information or where some information is missing. So we can specify what is missing, or if something is wrong in the title, for example, describe what is wrong with it. We can add more than one issue, and we can leave our email there so the OpenAIRE team can get back to us on the status of the issue. The email is not mandatory, but if you want someone to come back to you, you should leave your email there. And this is our team for developing and designing the Explore portal. That's all from me. Thanks a lot, Argyro, and the team. This is wonderful. And thank you everyone for your excellent presentations, for keeping time, and also thank you for sending your questions. So we have some questions answered already and some still pending in the Q&A, and I suggest that we go through them. The first one is from Biliana, and it's a question to Elli about Argos: if we decide to translate Argos into our language, who is the first person in Argos to contact? And I can already answer that it's Elli. Do you want to add anything to that, Elli? Yes, yes. It would be me, but it would be nice if you could also send the email to this address as well; I added it in the chat, it's argos@openaire.eu. And you will find my own email on the last slide of the presentation. And should I continue? Yes, maybe I'll read it. That's a good question. The second question is: our local funder has included something like a DMP in its last few calls. If we want to include this in Argos as a template, do we need some special rights, or could we do it as a DMP manager? No, you cannot do it as a DMP manager; you don't have administrative rights in the Argos tool. You request this from us, and we can do it for you, in collaboration with you, so that we make sure that everything works fine and is according to your needs. Thank you. Thanks a lot.
And then we have a question from Regis, and I think it was about the part where we presented the guides and all the other outputs of the task forces. I'll read the question: thanks for all this interesting info. I work with researchers who'd find all this intel quite useful. However, I'm afraid not all of them speak or read English. Is it okay if I provide on my institution's website some translations, partial or complete, in our country's language, while linking to the original content on Zenodo or openaire.eu? I'm not sure about the specific licenses of this content. Thank you for the answer. Our OpenAIRE portal is licensed CC BY, and I guess that's also the license on the guides, but maybe Ellie should correct me. And of course we welcome translations. Yes, we welcome translations, but from which country is this? Maybe Regis, you can tell us? France. Yes, you can also contact the NOAD of France, I think it's Couperin, yeah. But we'll be happy if you translate them. Yes, yes, sure. Thanks a lot. And then we have two likes for Regis' comment. Thanks a lot, Manolis, this was very interesting. So that's a thank you. Then a question to Elli again, Elli Papadopoulou, about Argos: is Argos only a pilot, or production-ready and robust software? No, no, it's in production. I will also share with you our specifications, and you can also find it in the EOSC catalogue and in the OpenAIRE service catalogue. It's fully operational, it's ready. Thanks. Then a question from Emma to Evgenia: maybe I missed it, but how old are the students? Hi. It depends; we cover both primary and secondary education. Do you also have articles by primary school students? Yes, we do actually, because we organized the school seismograph competition, where they had to create seismographs with everyday elements and materials. We had participation from a primary school, and they have also written an article about the whole procedure. Thanks a lot. Thank you.
Then a question to Argyro and Katerina about Explore: can authors also claim their works? Are works included in an ORCID record added to the search index, as shown when searching for a certain author? Authors can claim their works; this is possible through the linking functionality. I didn't go into detail on this in the presentation, but it is possible. In the linking functionality, you can give the ORCID and see the list of the author's works from ORCID, and they can link them with a project or other OpenAIRE entities. We are also planning to integrate the ORCID Search & Link functionality, where authors will be able to add publications from OpenAIRE Explore to their ORCID profiles, and we will save that ORCID information in Explore. Thanks a lot. Next question, also about Explore: how does it relate to DataCite Commons? Are they connected or working together in any way? I can start: we are connected. We collaborate with DataCite for data-related information, but we have more than just data; we also have publications, software, and other outputs. I don't know if you want to add something to that, Argyro and Katerina? No. Then Matias asked a question about Amnesia, so over to you, Manolis, and I'll read it: do you know of any clinical research project that used it to anonymize data? We have used Amnesia in the MyHealthMyData project. This was not a pure clinical research project; it was actually a project on providing anonymization and GDPR-compliant solutions for health data in hospitals. So we tested it with some sample data from Barts hospital in the UK. But we do not have results, if that's the question, on how it affected, for example, specific models or tasks that were run on the anonymized data. This is actually some work that I would like to do.
And if Matias is interested in working on that, I would be happy to help with Amnesia, anonymizing data that he can use for his work and seeing how it affects the quality of the results. Thanks a lot, Manolis. We have a quick question from Marie Claude, again about Explore, to Argyro and Katerina. Maybe I'll read it all and you'll answer in pieces: in OpenAIRE Explore, it would be useful first to display the list of all sources of content, and to explain the difference between "collected from" content providers and "hosting" content providers, so one could search more precisely in the database and know what exactly the different fields mean and contain. I found this comment very useful, and we will try to integrate this by explaining the different fields. But if you really want me to explain the difference, I really can't. Just a second, Alessia is here but she's not a presenter, so let me make her a presenter. In the meantime, I'll answer the first part while sharing my screen. So the user wants to see all the content providers that contributed the content of OpenAIRE Explore. You can go to the menu and click the content providers link. There you can see all the content providers, of whatever type we have in OpenAIRE; there is a search that you can do, and there are filters that can help you narrow down your search. Also, it is important to say that for everything that you see in OpenAIRE, every research output, you always see on the right side of your screen which content providers actually provided all this information. In many cases, for one single record of a research output, there could be multiple content providers. Every one of them is listed there, so everyone gets their credit and their visibility. Marie Claude also asks: do we only get the first 100 providers? I'm not sure that's the case; I think you can see all of them. We mean in the general search, in the filters on the left.
This is correct, because these are only filters, so they help you, let's say, narrow down the results of the search that you have already done. But if you go to the menu of OpenAIRE, it says search, deposit, link, and then there is a link for content providers. There you can search for any content provider, and we give you full filters on the left. Can I share my screen? Yeah, yep, you can share; just stop sharing, Argyro, and start sharing. Alessia, do you want to add anything to that? No, I think you explained very well where to go to find the list of all the content providers we collect from. I can give some details on the other part of the question, because OpenAIRE collects from content providers in two ways. The first is that we directly harvest metadata records from the provider. The other type of harvesting is what we call indirect harvesting, because we basically collect from an aggregator, which in turn collects from other providers, while the actual file of the research result is typically stored on the original content provider. So the difference is that we collected the metadata records from one content provider, but the resource is hosted by another content provider. This is the difference that we want to express in the landing pages of the research results. We used to have these two filters also in the Explore search page, but we understood that it was a little bit complicated to explain, so currently you can only see one filter by content provider. Argyro and Katerina, together with the support of our user experience team, will find a nice way to make the difference between the fields clear in a few words, and then we can add it back to the site, among the filters. Thanks a lot. I only see one remaining question, from Oscar to Manolis: which version of Amnesia is the latest? I have 1.2.1. This is the latest. Thanks.
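The distinction explained here, between the provider OpenAIRE collected a metadata record from (possibly an aggregator) and the provider actually hosting the file, can be pictured as two separate fields on a record. The class and the provider names below are purely illustrative, not OpenAIRE's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchRecord:
    title: str
    # Provider(s) whose metadata records were harvested, possibly via an aggregator.
    collected_from: list = field(default_factory=list)
    # Provider actually storing the full-text file of the result.
    hosted_by: str = ""

rec = ResearchRecord(
    title="Example article",
    collected_from=["BASE (aggregator)", "Institutional repository X"],
    hosted_by="Institutional repository X",
)

# Indirect harvest: metadata came via an aggregator, while the file
# stays at the original content provider.
indirect = any("aggregator" in p for p in rec.collected_from)
print(indirect)
```

Keeping both fields is what lets a landing page credit every provider involved while still pointing readers to where the file actually lives.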
And I don't see any other questions. Thanks again, everyone, for joining us today, and many, many thanks to our wonderful speakers: Najla, Alex, Argyro, Ellie, Elli, Evgenia, Manolis, Marina. Thanks for joining us for this fourth day of OpenAIRE Week. The recordings from the previous days are on the program page, which I also put in the chat. And tomorrow we'll have a session on OpenAIRE services for research communities; please join us if you're interested and have time. Thanks again, and have a nice afternoon or evening, or maybe for some of you it's still morning. Thank you, everyone. Bye.