Welcome to the 17th Argos community call. Today we're going to talk about connectivity, FAIRness and machine actionability of DMPs. Earlier this month we kicked off the OSTrails project, and Nelly will tell us a bit about the Argos contribution to it and a little more about the project itself. So without further delay, Nelly, the floor is yours.

Thank you, Athena, thank you so much. Nice to see everyone. Well, I see your names; I don't see you currently. The scope of today's meeting is to inform you about this new project which, as we have already mentioned in the past, was going to launch: a new project about FAIR DMPs and scientific knowledge graphs that we are coordinating. We thought it would be good to share it with you because it is going to be the basis of our work in the coming three years.

We prepared an introductory presentation for the project. Let me share my screen so that I can start. You're able to see, right? Oh, what do you see? You should be able to find it in the display settings. Or if I disconnect one of the screens... no, it won't work. Let me do it again. Maybe in display settings there's a mirror screen option or something similar, so we see exactly what you have. Yes, great, now it's perfect. Good, thank you. Please feel free to jump in; this is a community call, it's more informal, so start discussions on any points that we'll be touching in the presentation.

So, OSTrails is a new Horizon Europe project. It started on the 1st of February and had its kickoff on the 15th and 16th of February in Athens, hosted by the coordinator, OpenAIRE. The project takes a commons approach: it aims to deliver the tools, the methods, the guidance and the training to implement Plan-Track-Assess pathways in the context of the European Open Science Cloud (EOSC). We're going to work on interoperability, to be able to exchange this information across the different stages of research, and see how we interconnect the existing infrastructures in order to do that.

Our ambition is exactly this: to create integrated research flows that capture not only publications but also data, software, and other activities, actors and research products, across the stages of planning research, tracking it and assessing it. And we're going to do that through collaboration, of course. We bring a wide range of research institutions and research infrastructures in order to have a better representation of everyone's needs in the three pillars of our work: plan, track and assess. And we bring the services and the infrastructures to enhance and connect them, and to test what we're doing in real-life scenarios.

These are the partners working on these pathways together. These are not all of them, actually: two more are joining the consortium, CWTS from Leiden and CNRS from France. So, as you can see, we have different people and different infrastructures involved, sharing their knowledge, expertise and infrastructure.

So why are we doing this? What does the landscape look like today? Today we see that the landscape is a bit sparse and siloed. There are many good attempts, also through EOSC, to minimise this.
We see that a researcher publishes their products: usually the publication, maybe the data, maybe even the DMP (though that is not yet usual practice), maybe the software. These may or may not be linked together. The links are supported by scientific knowledge graphs (SKGs), infrastructures like the OpenAIRE Graph, for example, or domain-specific ones, which harvest these sources and outputs and create the links between them. And at some point they all support research assessment reports. But, as I mentioned, we don't know what happens with the DMP and software outputs: whether they're published or not, and where they reside, are stored and preserved. The same goes for FAIR assessment: there are different tools that we use, but where do the results end up? And they matter, because they also facilitate part of research assessment, namely the FAIR aspects of research assessment.

Now, more on the limitations within these three pillars. For tracking, we have the scientific knowledge graphs, and we see that they are becoming the talk of the town, but they mostly cover bibliometrics: the descriptive metadata they get from publications, that is, the scientific papers and preprints and the venues where they appear. On the scientific knowledge graphs side, we see quality issues related to the metadata: some metadata might be missing, they don't follow a standard metadata schema (they're not compliant with OpenAIRE's, for example), and they're missing relationships with other outputs. There is also limited coverage of data, software and other outputs: we are very good on the publications side of things, but we're missing most of the detail that lies in the data, the software, the workflows and so on. And the graphs operate in isolation; they don't communicate with each other. They're also missing the knowledge that is carried by research communities. Communities go beyond descriptive metadata: they also offer technical metadata, for example, and they go into the intricacies of the domain, and currently we don't have a graph that holds this information. Interoperability at the moment, and the technologies that support it, is at the level of the catalogue, not of the individual records.

The limitations in terms of DMPs are that we create them, but we don't know whether, at the end, we become better at practising research data management, or whether our results are more FAIR because we organised our practices through the DMP. We don't know if, in the end, they're good or not. DMPs are shared across communities, maybe via a repository like Zenodo, but they're not served as FAIR outputs, so some of them do not fulfil the FAIR principles themselves. Researchers still feel that DMPs are an extra burden, and they are unsure about where to start and how to organise themselves when it comes to writing a DMP. And we see that the Domain Data Protocols suggested by Science Europe a few years ago, in 2017 or 2018, maybe even earlier, are still pending adoption. This could help, because a domain data protocol is like a tailored DMP for the discipline, for the specific domain.
So this would help to recognise, and provide guidance on, the specific issues of each discipline or domain. Then there are qualified references: links between different outputs that carry a specific relationship. So, not just pointing from one thing to another, but also specifying what the link is about: if, for example, this dataset is part of the DMP, or if the data supports the publication, and so on. We see all these relationships, and we see that DMPs can help with those qualified references, but this is not a practice that is adopted yet.

In terms of FAIR assessments, this came actually from the report of the EOSC Association's FAIR Metrics and Data Quality Task Force. We ran a survey last year, and I include here some points from the report that is going to be published later in March. We see that funders, institutions and infrastructures provide limited funding, support and guidance, although they include FAIR assessment and FAIR compliance, at least for data, in their policies and various data management policies. The results are inconsistent between the tools that exist to support assessments: if you take the same dataset and assess its FAIRness in one tool and in another, you get different results, because there are differences in how the different providers interpret the FAIR metrics. There's a lack of a commonly agreed minimum set of guidance and direction to assist researchers: we support researchers, but we don't have an agreement among ourselves on the minimum set of things they could do in order to satisfy the requirements of their funder or institution. The FAIR assessment results, those we get when we use a tool, are not shared; they remain somewhere, maybe in the tool, and then they disappear. There is also a misconception between FAIR and data quality: some respondents from the survey use the terms interchangeably, and of course there are dependencies, but we shouldn't use them as synonyms. And there is miscommunication about what the tools support: we know, for example, that tools support FAIR assessment at the level of the metadata and not at the level of the data. Currently we are quite mature at assessing the descriptive metadata that repositories hold, assessing how FAIR-enabling the repository is, and much less so the data itself.

How are we going to tackle all this through OSTrails? We have three pillars that we are going to work on, plan, track and assess, and in each one we are going to support interoperability. For DMPs, in the planning phase, we want to move away from PDFs towards a DMP export that carries all the information we need, so it can be used to act on behalf of different services and activities throughout the research data lifecycle. In the tracking phase, we need to make sure that we include more outputs than just the publications we already have, and make the relationships between them. And in the assessment phase, we have the FAIR tests that codify the community rules for each element.
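To make the idea of qualified references mentioned above concrete, here is a minimal sketch in Python of typed links between a DMP, a dataset and a publication. It is illustrative only: the class is hypothetical, and the relation names imitate DataCite-style relation types rather than anything OSTrails has defined.

```python
# A minimal sketch of "qualified references": typed links between research
# outputs, rather than bare pointers. The QualifiedReference class is
# hypothetical; the relation names imitate DataCite relation types.
from dataclasses import dataclass

@dataclass
class QualifiedReference:
    source: str    # PID of the referencing output (e.g. a DMP)
    target: str    # PID of the referenced output (e.g. a dataset)
    relation: str  # what the link means, not just that it exists

links = [
    # The DMP describes this dataset, not merely "see also":
    QualifiedReference("doi:10.1234/dmp.42", "doi:10.1234/data.7", "Describes"),
    # The dataset underpins this publication:
    QualifiedReference("doi:10.1234/data.7", "doi:10.1234/paper.9", "IsSupplementTo"),
]

for link in links:
    print(f"{link.source} --{link.relation}--> {link.target}")
```

The point is that the link itself carries machine-readable meaning, so a service or a knowledge graph can act on it instead of just displaying it.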
We want to benchmark the FAIR tests (a FAIR test is the piece of software code that implements a FAIR metric) so that they are used consistently by all the FAIR assessment tool providers and give the same results. That is what will help us accomplish this, and also move from assessment to assistance: to also give guidance on how to improve your practice and be more FAIR.

Another view of the project: we have different phases in which we're going to implement it. The first phase is design. We're going to co-design the interoperability reference with the services that are on board (we'll talk about them in a minute) for DMPs, scientific knowledge graphs and FAIR assessments. And we're going to co-design them with input taken from the pilots: we have 24 pilots in the project that are going to give input and feedback on adopting our results in national settings, in 15 countries, and in domain-specific settings, brought by partners that run research infrastructures representing their research communities.

Then, having the interoperability reference, we are going to implement and federate the DMP platforms, the scientific knowledge graphs and the FAIR assessment tools, working on the capabilities for automation, so that these systems can automate and exchange information among themselves. We are then going to empower FAIRness, and not only FAIRness but assessment in general. So we're going to develop the tools and methods needed to perform FAIR assessments on different outputs, and also to assess the DMP itself: not against FAIR metrics, but against DMP evaluation criteria like the Science Europe ones. We're also going to provide more accuracy and coverage in the information that is harvested by the scientific knowledge graphs. And of course we want to embed the FAIR metrics inside DMPs, so that when you create a DMP you also assess the FAIRness of your work; and the same for scientific knowledge graphs, where we want the FAIR metrics to be an entity. We're going to implement all these results with the 24 pilots that we have, which will adopt them; I'll talk about that in a minute.

So we have 24 real use cases that are going to pilot all this. They will give us the requirements they have, based on their different infrastructures, their different levels of open science maturity even, and their different needs. This is the overview of the project, another view of what I just explained. You can see the three pillars here, planning, tracking and assessing; working on interoperability in each pillar and across them; the services, some of which you can see here, working on DMP evaluation and on the quality of the knowledge graphs' metadata; and testing all of it in thematic and national settings. And all of that happens in alignment with other projects that are running and have similar interests and scope. They also work on some of these planning, tracking or assessing areas, but tackle them from different angles, so we're going to align with them and complement each other in what we do: for example with GraspOS and other related projects, and also with the RDA.

Here are the results, at a high level; this is not an exhaustive list.
These are the commons: we're going to have common methods, services, guidance and training across the pillars. For example, one of the commons might be a machine-actionable template for the DMP, since we are focusing on DMPs today, that is served across providers. We're going to have a DMP evaluation rubric: we'll work with funders, get their input to extend the Science Europe rubric, and see what criteria we need to take into consideration to create a DMP evaluation service. We're going to have a research product quality toolbox for scientific knowledge graphs, which will offer a set of annotations to improve the quality of metadata in scientific knowledge graphs. There will be case studies and proof-of-concept instances through the pilots, to test and define fit-for-purpose pathways across the three pillars of our work. And there will be a library and an integrated competence centre with open learning resources, of course, targeting both the usage of these services and results and a train-the-trainer approach across the three areas of our work.

Adoption is going to happen through the pilots. We have 24, as I mentioned, representing 17 countries and five clusters: SSH (social sciences and humanities), environment, life sciences, physics and astrophysics, and a variety of countries, as you see here. We recognise that no one size fits all: every country and domain has its own specific needs, is organised differently, and brings different technologies and services into the work. So we're going to work to provide solutions that fit all of them.

We have two categories of pilots, national and thematic; we split them like that based on coverage. These are some of the activities that the national pilots will perform. Some will develop all of them, some a selection of them; they are not all the same because, again, they bring different needs and services to the table. Mostly, they will develop machine-actionable templates; they will extend their repositories to archive machine-actionable DMPs and create relationships between publications, data and DMPs; they will interoperate with the OpenAIRE Graph or other scientific knowledge graphs and include, again, qualified references to other instruments and research activities; and they will codify the metrics for funders, working with the funders to get their input in order to assess, in the end, the FAIRness of the outputs deposited in the repositories. The national pilots also bring codifying DMP evaluation criteria and assessing DMP quality, and extending the national monitoring systems with machine-actionable DMPs.

The thematic pilots are nine in total, some domain-specific and some cross-domain (two, actually, are cross-domain). The activities to choose from are, again: developing the machine-actionable DMP template for the specific community; embedding FAIRness, so FAIR metrics, into these templates, however they end up being expressed by the work that will be performed in OSTrails; and enhancing the catalogues and the scientific knowledge graphs with different entities:
entities and relationships from instruments, experiments and facilities; creating links with publications, data and DMPs; and co-defining domain-specific FAIR metrics to assess the FAIRness of the published outputs.

The services brought by the partners to help achieve all of that are, in terms of SKGs: the OpenAIRE Graph, Software Heritage for software information, the research infrastructures' thematic repositories, and also national CRIS systems and catalogues. And the idea is, as you can see, that we have some disentangling to do here: define how we view a scientific knowledge graph versus, for example, a repository, and what the relationship between them is. We're going to enhance the scientific knowledge graphs through a common model and through APIs that will support the information-exchange flows: to integrate information on indicators or metrics for FAIR and for DMPs (machine-actionable DMPs, so the graphs can actually harvest this information and hold it), and to accept notifications when something changes, because we're going to exchange information, but we also need to make sure we notify each other about the different changes.

For the DMP platforms, you see here the platforms that are on board, either brought directly, like Argos, DSW, DAMAP (the Austrian solution), the Swedish solution, DMPTuuli (the Finnish one) and RDMO (the German solution), or brought indirectly through a partner who uses the tool to create their templates. You can see that they are Europe-wide or national, some are offered as a service and some on-premises, and there are different levels of readiness and compliance with the machine-actionable DMP Common Standard produced by the RDA. We're going to enhance this model, the RDA model, to support the different actions that we need to do. We're going to have common APIs so that the platforms exchange information; integrate deposit functions so that they can share their outputs; enhance more of their existing APIs (not the common ones used to exchange information with each other, but the APIs they have to exchange information with other services); connect to external DMP evaluation services, so that you can evaluate the DMP output when you write the DMP in the service; and send notifications, and this last one is across all services.

For FAIR assessment, you see here the different services that are brought, and there are differences in how they perform the assessment: is it self-assessment, is it manual or automated, using different contexts and different protocols. And we're going to enhance the... no, that's a wrong slide here, because this is a wrong list. But for FAIR assessment we're going to work, of course, on the FAIR tests, the pieces of software that are going to be common across all, and on the APIs, so that services can share these FAIR tests, and we're going to work to embed these FAIR tests inside different tools, like DMP tools. And here is a glossary, in case it is useful; I already explained most of it. This is thanks to Mark Wilkinson, who is the partner leading a work package about FAIR assessment in this project.
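As a sketch of what a FAIR test could look like in practice, here is a small Python function implementing one plausible FAIR metric ("the identifier is persistent and resolvable") and returning a result that could be shared between services. The function name, metric label and result shape are assumptions for illustration, not the OSTrails specification.

```python
# A minimal sketch of a "FAIR test": a piece of code that implements one
# FAIR metric. Metric label and result shape are illustrative assumptions.
import json
import urllib.request

def test_identifier_resolves(identifier: str) -> dict:
    """Check that a DOI-style identifier resolves via doi.org."""
    url = f"https://doi.org/{identifier}"
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=10) as response:
            resolved = response.status < 400
    except Exception:
        resolved = False
    return {
        "metric": "identifier is persistent and resolvable",  # illustrative
        "target": identifier,
        "pass": resolved,
        "evidence": f"HEAD {url} " + ("succeeded" if resolved else "failed"),
    }

if __name__ == "__main__":
    print(json.dumps(test_identifier_resolves("10.5281/zenodo.1234567"), indent=2))
```

Benchmarking, in the sense used above, would mean that every assessment provider running the same test against the same identifier produces the same pass or fail result, with comparable evidence.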
I already explained what a FAIR test is, but I didn't go through the explanations of FAIR metric and FAIR assessment. You can see them here, and you can also find them online when we post this presentation on the Argos community call page. And thank you. I see we have something in the chat. Let me stop sharing my screen.

It was just me, Ellie. I was asking for the reference for the Domain Data Protocols, but it looks like we've found it. I later heard you reference the Science Europe guidance document from 2018; I assume that's the one, right?

Yes, that's the one, correct. Thank you. And yes, Andrew is one of the partners, representing CWTS, and will be involved in the Dutch pilot in the Netherlands. Thank you. Let me see... yes, okay, found it. Are there more questions? Please unmute.

Yes, there are some questions from Danny. I don't know if you want to speak, Danny.

Hi, let me start my camera. I was wondering: does anyone have examples of research funders that accept machine-actionable DMPs at the moment? And you mentioned that there will be this one key result, the DMP evaluation rubric. When is this deliverable expected?

From Argos, we work with funders. For example, we work with CHIST-ERA, a European consortium of national funders funding ICT projects, and they accept both, because we provide both PDFs and machine-actionable DMPs. We work with them because their priority was also to create a machine-actionable template. We are actually going to launch a new version of their templates next month, combining both software plans and data plans. And the Latvian Research Council: we work with them, and they came to us because their priority is also to have machine-actionable DMPs. So yes, there are funders we are working with, and we know that they want this. What was the other part of your question?

When the evaluation rubric deliverable is expected.

Okay, let me actually search for it now; I don't remember it by heart.

There are also many people interested in this feature, so it would be very interesting to know.

Month 28.

So, since now is month one, we have a long way ahead of us. That's mid-2026.

Yes. Are there more questions?

There is one from Lisa.

Yes, I had one more question, mostly connected to the Argos tool, actually. I don't know if you know the common standard proposed by the RDA off the top of your head, but it's just some fields, basically. Is it already implemented in Argos? It's probably more an Argos-specific question and less a direct OSTrails question, but it obviously informs our tool of choice for a national pilot; that's where the question is coming from.

The short answer is yes. We are also members of, in fact co-chairing, the Active DMPs group at the RDA, so we collaborate with the other chairs. We have implemented and support the current version of the DMP common standard. But this is going to change: as you saw, in the OSTrails project we are going to extend this standard, and we're going to re-evaluate what is already there and see if we can provide more information.

Yes, I think that's it. Thank you.
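For anyone unfamiliar with the RDA DMP Common Standard mentioned in this exchange, the sketch below shows roughly what a machine-actionable DMP export looks like: a handful of structured fields instead of a PDF. Field names loosely follow the published standard, but the values are invented and the field list is far from complete; consult the standard itself for the authoritative schema.

```python
# A rough sketch of a machine-actionable DMP ("maDMP"), loosely following
# the RDA DMP Common Standard. Values are invented placeholders.
import json

madmp = {
    "dmp": {
        "title": "DMP for the Example Ocean Survey project",
        "created": "2024-02-20T10:00:00Z",
        "modified": "2024-02-20T10:00:00Z",
        "dmp_id": {"identifier": "https://doi.org/10.1234/dmp.42", "type": "doi"},
        "contact": {"name": "A. Researcher", "mbox": "a.researcher@example.org"},
        "dataset": [
            {
                "title": "Survey measurements 2024",
                "dataset_id": {"identifier": "https://doi.org/10.1234/data.7", "type": "doi"},
                "personal_data": "no",    # controlled values: yes / no / unknown
                "sensitive_data": "no",
            }
        ],
    }
}

# Serialised as JSON, this export can be exchanged between DMP platforms,
# deposited alongside the PDF rendition, and harvested by knowledge graphs.
print(json.dumps(madmp, indent=2))
```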
I can also point you to the adoption story we did for the RDA, and to the latest paper that we have. And as you saw (thank you for that), we have a new release of Argos, which I didn't talk about because I started with OSTrails. We were talking about this in the last quarter of last year: having a blueprint which defines the structure, so that you can combine the templates for different outputs and activities inside this blueprint and connect and prefill the different sections from different sources. That's already live, so you can test it. There's also a news piece that we created to support a better understanding of this change, and we are working on some tutorials and new material that can also support the communication and dissemination activities that we do. Any other question?

I was trying to raise my hand, but I don't know if it was working. I have maybe two questions, and if either of them is out of scope for this call, we can talk about it separately. As Ellie said, I'm Andrew Hoffman. I work at the Centre for Science and Technology Studies (CWTS) at Leiden University, and we're co-running one of the national pilots on the project. We're curious about using Argos in our national pilot, and I know that Argos generally has a default feature that publishes DMPs to the Argos graph, sorry, to the OpenAIRE Graph, when they're created in Argos. That was my understanding, at least as of some time ago; maybe that's changed. So I guess the first question is: has that changed? Is it possible to create templates, play around with them, and even fill out DMPs without defaulting to publishing, for now? And beyond that, in the medium to long term, is there any thinking on your end about how to represent DMPs in the OpenAIRE Graph without necessarily exposing the full payload? I feel like I had a conversation with someone about this at a meeting; maybe it was with you, maybe with someone else. For me, as a data steward who works with researchers, I think I'm going to have a hard time finding folks who are ready to publish their DMPs in their entirety anywhere, really, at this point. But I think we can do interesting work around the metadata for these entities: the existence of a DMP can be exposed, and certain metadata can be exposed, that may still be useful and relevant to folks who are reusing that material.

Thank you. Let's start with the first one; I tried to note them down so that I don't forget. The first one: can you, let's say, opt out of publishing the DMPs to the graph? Yes. The graph only gets what is published on Zenodo. If you select, after finalising your DMP, to also deposit it, then it goes to Zenodo, and Zenodo is the OpenAIRE service for depositing, so we immediately have that information: both the PDF and the machine-actionable DMP. If you don't do that, then that's fine; the DMP is kept internally and is not shared with the graph.
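To illustrate the deposit flow just described, here is a hedged sketch of pushing a finalised DMP (PDF plus machine-actionable JSON) to Zenodo through its public REST deposit API, the same route through which the OpenAIRE Graph would then pick it up. The token, filenames and metadata values are placeholders, and while "datamanagementplan" is a Zenodo publication type, check the current API documentation before relying on any of this.

```python
# A hedged sketch: depositing a finalised DMP (PDF + maDMP JSON) to Zenodo
# via its REST deposit API. Token and file paths are placeholders.
import requests

ZENODO = "https://zenodo.org/api"
TOKEN = "YOUR-ZENODO-TOKEN"  # placeholder, not a real credential
headers = {"Authorization": f"Bearer {TOKEN}"}

# 1. Create an empty deposition and get its file bucket.
deposition = requests.post(f"{ZENODO}/deposit/depositions", json={}, headers=headers)
deposition.raise_for_status()
bucket = deposition.json()["links"]["bucket"]

# 2. Upload both renditions of the DMP.
for filename in ("dmp.pdf", "dmp.json"):
    with open(filename, "rb") as fh:
        requests.put(f"{bucket}/{filename}", data=fh, headers=headers).raise_for_status()

# 3. Attach the descriptive metadata (the part a graph could safely expose).
metadata = {
    "metadata": {
        "title": "DMP for the Example Ocean Survey project",
        "upload_type": "publication",
        "publication_type": "datamanagementplan",
        "description": "Machine-actionable DMP with a PDF rendition.",
        "creators": [{"name": "Researcher, A."}],
    }
}
deposition_id = deposition.json()["id"]
requests.put(
    f"{ZENODO}/deposit/depositions/{deposition_id}", json=metadata, headers=headers
).raise_for_status()

# Publishing (POST .../actions/publish) remains a separate, explicit step,
# which matches the opt-in behaviour described above.
```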
The other question is about publishing the whole DMP. There are issues with that, because there might be sections in the DMP that you don't want to share, because they detail sensitive data management. Currently, well, we had an option where you could select which dataset (not which section, sorry, which dataset) to leave out of the publication when clicking deposit. But now, because we have the new blueprint feature, we are refactoring this mechanism to support selecting specific sections, or sections with sensitive data, and choosing not to include them in the publication. But there are two different things when sharing the DMP. One is the descriptive metadata: the title, the description of the DMP, the authors, and so on. The other is the attachment, that is, the files, the PDF and the JSON, which are also uploaded to the repository and become part of the graphs. For the latter, I already answered that we are going to have this feature refactored; for the descriptive metadata, we're going to work on it in OSTrails through the DMP commons.

Thank you.

Any other questions? I think it's good if we can also involve you all. We're going to have some national workshops, so if you represent one of the countries that you saw in the presentation, it would be good if you could also participate, if you want; we want to widen the representation of universities and organisations. For example, I don't know if some of you would like to participate in one of those events.

Which work package? Or is this independent of OSTrails, or part of OSTrails?

No, no, this is part of OSTrails. If there are no more questions, for OSTrails or for anything else, then we can wrap it up. Good. We'll see each other a month from now. I wish you all a good rest of the day. Thanks for joining, everyone. Bye. Thank you. Bye.