Okay, so in this session, Hilke Koers from SURFsara and Ilona von Stein from DANS are going to guide you through certification and assessment for data repositories and services. In particular, they will touch on the evaluation of CoreTrustSeal and its implications for maturity modeling, on the assessment of the FAIRness of datasets, and on a FAIR assessment framework for data services beyond repositories. The session will start with a presentation of some of the recent outputs from the FAIRsFAIR project, and then we will engage you in some polls using the Mentimeter tool. A little bit of housekeeping for the event: the event is being recorded in its entirety, as you know. Your microphones are off, so we invite you to use the Zoom chat for questions, or to raise your hand in case you need to speak. As said, we'll invite you to a Mentimeter poll later on, and we'll share the code for the poll again following the presentations. We are taking collaborative notes; there is a shared file, so if you want to contribute and add any input there, please feel free to do so.

FAIRsFAIR in a nutshell: FAIRsFAIR was funded by the Horizon 2020 INFRAEOSC-05c call. It started in March 2019 and will last for 36 months. There are six core partners in the project, including DANS, who is the project coordinator, CSC, DCC, STFC, and Trust-IT Services, but the project has 22 partners in total, from eight different countries. The objective of the project is to survey the landscape of FAIR activities in relation to the EOSC, to create a basis for harmonization efforts among all those actors working in the FAIR ecosystem, and to build an active community around the EOSC. In particular, we try to identify overlaps, divergences, and challenges related to the FAIR framework, with a special focus on the recommendations identified by the high-level expert group on FAIR data in the Turning FAIR into Reality report, and to accelerate the realization of the goals of the EOSC in all FAIR-related matters.

The FAIRsFAIR project is structured around seven work packages. Two are dedicated to management and to engagement and dissemination, while the other five bring the real technical activities to the project. In this session today, you're going to see results from two work packages in particular: one dedicated to FAIR practices, semantics, interoperability, and services, and another dedicated to the certification of repositories. The FAIRsFAIR project is supported by two groups of experts. The first is the High-Level Advisory Committee, made up of nine experts who provide strategic advice to the project. The second is the European Group of FAIR Champions, which currently counts 11 members from different disciplines and different projects in Europe and is mandated mainly to ensure uptake of FAIRsFAIR results by their communities. One of the key actors of the FAIRsFAIR project is the Synchronisation Force, a team of people working to enable a dialogue among the various projects in the EOSC, and working in particular to maximize coordination and minimize unnecessary overlaps, encourage the dovetailing of projects and activities with the EOSC governance, and promote mechanisms to collaborate in turning FAIR into reality. The Synchronisation Force gathers representatives from the main actors in the EOSC ecosystem, namely the regional and thematic initiatives, the ESFRI cluster projects, and of course the EOSC Working Groups in particular.
We have stronger relations with the FAIR Working Group and its task groups, and we are also working with other horizontal activities and other FAIR-related initiatives. The Synchronisation Force met physically for the first time last year in November, in conjunction with the EOSC Symposium in Budapest, and it is meeting for the second time now, in these days, with a series of virtual workshops taking place between April and the 11th of June, when the final concluding session will take place. I'll conclude my presentation by inviting votes for our poster, which is number seven in the list, and we remind you that the voting closes tonight at half past five Central European Time. I'm Sara Pittonet from Trust-IT Services, and I now welcome Hilke from FAIRsFAIR and from SURFsara. Hilke, over to you.

Great, thank you very much. Let me unmute and share my screen. Can everybody hear me okay, and can people see my screen? Yes, you can. Great, let me take it away. So thanks a lot, Sara, for the nice introduction, and thanks a lot to all of you for coming. It's great to see so many people in the audience. We have a nice saying in the Netherlands, "elk nadeel heb z'n voordeel", which translates to "every disadvantage has its advantage", and I think the disadvantage of doing this online, virtually, is that many people have the opportunity to join, and it's great that so many of you have taken advantage of it. So thank you all for doing that, and welcome very much.

My name is Hilke Koers. I'm group leader of the data management services team at SURFsara, the Dutch national organization for high-performance computing and research data management, and I'm also the task leader of FAIRsFAIR task 2.4, which concerns itself with the FAIR assessment of services and software, and that is the topic I wanted to talk about today. In the task, we set out to formulate answers to a perhaps rather naive question: what does it take for a data service to be FAIR? This was our starting point, and what I want to do in my presentation is talk you through some of our discussions around that, how we sharpened the thinking and also refined the objectives for the task. I wanted to present what we have done so far and also share with you our plan going forward, how we plan to take this further. And then of course, very important, I'd love to hear from you about your thoughts around data services: what are important data services in the context of FAIR, what are important criteria for data services to help data be FAIR, et cetera. We'll do that at the end. So I'll try to be relatively brief in the presentation part so that we have enough time for the more interactive part, where you can give your input and we can have a bit of discussion around that.

So actually the starting point in our conversations in the group, from this question, is: is this really even a good question? Should we be speaking about FAIR services, or is there perhaps a different frame that we should take? To think about that, we wanted to go back all the way to the very foundation of data and of digital objects, the bits and bytes. Because as we all know, at the end of the day, that is what we are talking about. But we also know that to make these objects truly findable, accessible, interoperable, and reusable, we need to go beyond just the bits and the bytes; we need to have metadata, we need to have persistent identifiers.
And of course it's important that these objects follow certain standards so that other researchers can easily find and use them. If you take that together, you arrive at the notion of a digital object, which I think is a very meaningful level of abstraction: it encapsulates not only the bitstream but also persistent identifiers and metadata, and of course builds on existing standards. It can apply to different types of digital outputs: datasets of course, but also research software, methods, ontologies, et cetera. These digital objects can exist at various places in the data life cycle. They can represent data that just comes out of an observational device, out of a certain apparatus. They can represent data that's actively being analyzed, or data that's already in a repository. And of course, very important at the end of the day, data that's being reused by other researchers.

Now if you think about these datasets or digital objects, which can also be software or other types of research outputs, in these various stages, you immediately see that in order to really do something with the data, to get value out of it, for the data to be more than just a static, inert piece of content, you need to be able to act on it. You need to have tools, you need to have services. For example, when the data is being gathered, there is of course a measurement device, and hopefully some provision, some service, to add machine-generated metadata to the object right when it's being created. There are data analysis tools that take the FAIR digital object and operate on it, turn it into something else that is of more value for the scientific insights that you could get out of it. In the data repository there are functions for uploading, annotation, and so on and so forth. And then of course there are services to aggregate the data and the metadata, expose them to search tools, and give people the ability to download the data and act on it. So it's really this combination of the FAIR digital object, the content, and the services that act on it that lets you really get value out of the data and do something with it. There are supporting services as well that underpin all of this, for example PID services that mint persistent identifiers, linking tools, registries, and so on and so forth: all supporting infrastructure to be able to do all of these things with your data and your other digital objects.

A way of thinking about it, which I kind of like, is that if you see the digital objects as the musical notes in our world of data, then the services are the rhythm and the cadence that add the dynamics to the music. And you need both of them to really have a perfect symphony, or a FAIR data ecosystem. So the way we slightly rephrased the thinking in our task around FAIR services is not so much "what does it take for a service to be FAIR", but rather "what does it take to enable FAIR" and to really be an integral component of this ecosystem where services act on FAIR digital objects to enrich them, to add value, and perhaps make them more FAIR.
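As a rough illustration of this idea, a digital object that bundles the bitstream with a persistent identifier, metadata, and standards, and a service that acts on it, here is a minimal sketch in Python. All class, field, and function names, as well as the identifier and URLs, are hypothetical and chosen only for illustration; they are not taken from the FAIRsFAIR work or the report.

```python
from dataclasses import dataclass, field

@dataclass
class FairDigitalObject:
    """A digital object is more than its bitstream: it bundles a persistent
    identifier, descriptive metadata, and the standards the content follows."""
    pid: str                   # e.g. a DOI or Handle (hypothetical example below)
    bitstream_uri: str         # where the actual bits live
    metadata: dict = field(default_factory=dict)
    standards: list = field(default_factory=list)  # formats and vocabularies used

def enrich_with_metadata(obj: FairDigitalObject, extra: dict) -> FairDigitalObject:
    """A 'service' in this picture is anything that acts on a digital object;
    here it simply adds machine-generated metadata at creation time."""
    obj.metadata.update(extra)
    return obj

# Example: an observation comes off a measurement device and a metadata
# service enriches it right when it is created.
observation = FairDigitalObject(
    pid="https://hdl.handle.net/21.12345/obs-42",        # hypothetical identifier
    bitstream_uri="https://example.org/data/obs-42.nc",  # hypothetical location
    standards=["NetCDF", "CF-1.8"],
)
enrich_with_metadata(observation, {"instrument": "sensor-07", "created": "2020-05-20"})
```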
I'm mindful that the way I've been speaking about it is fairly hand-wavy, of course. A lot of this is made much more firm and given more detail in the Turning FAIR into Reality report that appeared in 2018. I think some of the authors of this work are also present in the session, so that's great. A lot of these notions are made much more precise in this report, and if you haven't read it or looked into it yet I would highly recommend you do, because it really gives the scaffolding to think about this in more detail. So what's the issue then? If we have that report and we have this notion of a FAIR data ecosystem, what do we still need to do? Well, quite a lot of things still need to be elaborated, because as you know, FAIR is not an absolute; it's a set of guiding principles that still needs to be taken further, that requires more interpretation and definition to really become actionable. For FAIR data, a lot of work has been done, also on quantifying the FAIRness of a digital object representing data. There are all kinds of checklists, assessments, certification criteria, et cetera. But for services that's not really the case, and there's actually very little guidance that service owners can benefit from that tells them how to make their service fit into this FAIR data ecosystem.

So this is really what FAIRsFAIR task 2.4 is all about. Our objective, which we will deliver at the beginning of 2022, so there's still a little bit of time for us, is to work towards an assessment framework, a FAIR assessment framework for data services, that will enable and stimulate this kind of interplay, this symphony between digital objects and the services that act on them. We're also working on a FAIR assessment framework for software, equally important in terms of the scope of the work, but not so much the topic of the session today, so I'll really be focusing on the service aspect of our work.

What we've done so far: we took a little bit of time to review the existing FAIR assessment frameworks for data, also because we think they can serve as inspiration for assessment frameworks for services, and in a sense the services need to build on what we already have for the data. We've also looked at existing assessment and certification frameworks for services, not necessarily FAIR, but what's out there in general. We've done a couple of case studies where we did a bit of a bottom-up analysis, if you like, looking at an existing service and trying to answer the question: do we actually feel that this service is enabling FAIR, and in what sense is it doing that? So rather than first defining a lot of methodology, we just started from an existing service and tried to make sense of this question, with the aim of abstracting that into a more formal methodology later on. And we've also formulated some guiding principles for this assessment framework: not yet the guiding principles for the services themselves, but more what we think the framework should be about and what the boundary conditions or desired aspects of such a framework are. And of course, we had a lot of interactions and very useful, beneficial discussions with stakeholders and other related working groups and projects.

I wanted to quickly show you one case study, not so much for the detail of the specific case study, but more for some of the aspects of the methodology that emerged. We took a look at a couple of services from the EOSC portfolio, one of them being B2FIND. And then what we did is simply map it against the FAIR principles. For every principle we tried to formulate an answer to the question: is this service enabling this FAIR data principle? When it acts on the FAIR digital object, is it making it more findable, for example? Or, taking the specific formulation of, say, the F1 principle, is it respecting that property, or is it reducing it? And we felt that this set of three, or actually it can also be "not applicable", of course, if the service does nothing at all with a particular FAIR principle, but mostly it's a choice of three, was a much more valuable way to think about it than a binary yes or no. So for each of the FAIR principles we asked the question: is this service really actively improving that? Is it sort of respecting it, FAIR in, FAIR out, if you like, not adding it, but also not destroying the FAIR property when it operates on a FAIR digital object? Or is it actually reducing it? So this is the kind of mapping that we came up with. And like I said, the point at this stage is not so much to really scrutinize B2FIND or any of the other services, but to formulate a sensible methodology that we can now formalize further in the work that lies ahead. So in the case of B2FIND, we find that if you look at FAIR principle F1, metadata are assigned a globally unique and persistent identifier, B2FIND respects that: if it operates on, if it acts on, a digital object, that property is kept, it's respected, it's maintained, but B2FIND in itself does not provide you with this property. As opposed to F2, data are described with rich metadata: here B2FIND is actually enabling that. It is elevating that FAIR aspect of a FAIR digital object.
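To make that three-valued scale a little more tangible, here is a small sketch of how such a mapping could be recorded in code. The F1 and F2 entries mirror the B2FIND example just described; the other principles and their values are only indicative, not the project's official case-study results, and all names are illustrative.

```python
from enum import Enum

class Effect(Enum):
    """Possible effects of a service on a FAIR property of a digital object."""
    ENABLING = "actively adds this FAIR property to the digital object"
    RESPECTING = "FAIR in, FAIR out: keeps the property but does not add it"
    REDUCING = "degrades the property when acting on the object"
    NOT_APPLICABLE = "the service does nothing with this principle"

# Example mapping for a metadata discovery service, in the spirit of the
# B2FIND case study above (values beyond F1 and F2 are purely indicative).
service_assessment = {
    "F1 (globally unique and persistent identifiers)": Effect.RESPECTING,
    "F2 (data described with rich metadata)": Effect.ENABLING,
    "A1 (retrievable by a standardized protocol)": Effect.ENABLING,
    "R1.1 (clear and accessible usage license)": Effect.NOT_APPLICABLE,
}

for principle, effect in service_assessment.items():
    print(f"{principle}: {effect.name.lower()} - {effect.value}")
```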
Where are we right now in the process? Some of the things that I presented, and a little bit more, are described in more detail in a first assessment report. It's available on Zenodo, and we very much welcome your feedback. There is a Google Doc that's still open, so you can leave your feedback there or by any other means that you would find appropriate. So if you're interested in this, please take a moment and let us know what you think. There's still quite a bit of work ahead. We started the task last September, so we're about three quarters of a year in, and we still have ample time to refine the thinking, so your input is still very valuable. This is the team for task 2.4. We have the good fortune that there are some really smart, clever, cooperative people, so it's a really nice team and we've had some really high-quality, great discussions. So that's been great.

And now I wanted to move on to the more interactive part. I would propose that we save questions and points for discussion to the end; at the end of the session, we have reserved a little bit of time for that as well. So if you have any questions about the presentation, we can pick those up at the end. But for now, we wanted to spend a bit of time on getting your views and your input around certification and assessment for data services. We set up a Mentimeter for that, so I would ask you to open up menti.com and then use the code 89191. And then for the purpose of the presentation, Sara, I am going to ask you to take back control, because all being well, you should have the Menti view open. Yes, I have it, and I see people starting to provide an answer to the first question. Can you all see the screen and the balloons popping up? I can see the screen, so I'm assuming others can see it as well. Right. So the first question, also to get to know the audience a little bit better and help us understand the context of your answers to the other questions, is to describe yourself. We wanted to leave it open.
So in your own words, how would you describe your role in relation to data repositories and services? I see we have a repository hacker amongst us; that's great, we'd love to hear a little bit more about your experiences later. Service providers, managing a data service, a user, research consultant. Some people are both a user and a manager; I think many of us wear different hats in this arena. Let's leave it open for a little bit while there are still responses coming in, but it looks like we have a very nice mix of different perspectives, which is great for the discussion. All right, 54. Let's wait a little bit. We have 159 people in the audience and 57 responses, so I think we can still do a little bit better. Say again? No, we have very diligent participants; one third of them has replied. Thank you. Okay, shall we move on? Okay. Yeah, let's move on, that's fine. Thank you very much for sharing that. We'll use this in the analysis as well, so that we are able to segment the responses a little bit on the basis of your role, whether you're a data provider, a provider of services, or more of a consumer.

So the second question: we wanted to ask you to name three types of data service that you yourself would consider essential to enable FAIR data. Imagine we start designing all of this from scratch: what are three types of services that you would find absolutely necessary to enable FAIR data? And I would ask you to focus on technology-type services. Of course, there are a lot of non-technological services as well, but let's zoom in a little bit on the technology side. I still see a lot of feedback coming in, so let's leave it open for a little bit. I see some of the usual suspects, if you like, but also a few more uncommon ones. PID services, data repositories, annotation, licensing, DMPs, AAI, an important aspect as well of course, data citation services, ontologies, excellent. Almost at 60, so let's leave it open for a little bit still. Is there anyone in the audience who would like to elaborate a little bit on why they have selected these types of data services? If you would want to add a bit more detail to that, or perhaps just some rationalization, could you raise your hand and then we can open up the microphone? Anyone brave enough to give a little bit of context?

Paolo Manghi wants to raise his hand. Yes, Paolo, you can very much raise your hand. Let's see if I can find you so that I can unmute you. Bear with me. Paolo, the floor is yours. Can you hear me? Yes. Hello, Hilke. Hey, thanks for joining. Hello, everybody. I'd just like to elaborate, because we've had experience with this lately. Of course, apart from the infrastructural services, which are key, like persistent identifiers and these kinds of enablers, I think that today one of the key propositions would be to make thematic services, the places where scientists perform their science, where they execute their digital experiments, open science by design. So we expect the services to publish, on behalf of and under the authorization of the users, the elements that are needed to repeat the experiment. So basically aspects such as provenance, attribution, semantic interlinking, deposition in the proper repository selected by the community, user IDs, et cetera, and the proper usage of PIDs should be managed substantially by the services used by the users. Open science requires a big effort in terms of publishing, which cannot be left to the manual upload by scientists to different repositories. So thematic services, full-stack open science.
I see Andras also raised his hand. Andras, I will unmute you for a comment and then we'll go to the next question, because we still have a few that we would like to cover. Andras, the floor is yours. My answers were intended to be non-trivial. I think the most important thing is to help researchers make their data FAIR. So one of the words I've chosen was consultancy: I think a lot of researchers would need help in making their data FAIR, or more FAIR. Another word is evaluation: it would be nice for researchers to be able to check how FAIR their data is. And the third word is enhancement: I would find it very attractive to get services which help to enhance my dataset, for instance by proposing or finding linkable pieces of metadata, or enhancing the data in any possible way. Thanks. Yeah, thank you very much for that contribution, very useful. Thank you. I'm sure there's still a lot that we can say about this and we can come back to it in the open discussion at the end, but for now let's move on to the next question, because we have a few other things that we'd really like to get your input on.

So this question is not about the services themselves, but more about the qualities or the types of attributes that you would expect from a data service to enable FAIR data. So what do you consider to be the most important qualities for a data service to really be FAIR-enabling? I see trustworthiness coming up repeatedly; that is great, and we'll also talk a little bit about trustworthiness in the context of data repositories in the second half of this session. Interoperability, no paywall, transparency, sustainable, open science by design, that might be Paolo again. Interoperability coming up quite a bit, excellent. Persistence, yeah. User-friendliness, I've also seen that a couple of times. Building on community standards, great. Correct attribution to data providers, I also really like that one; I think that's very important. We already have 73 participants, so more people are getting engaged in giving feedback. That's wonderful, thank you. Excellent, thank you very much. Let me maybe ask one thing, just in view of the time: is there someone who responded around trust or trustworthiness as a quality for data services who would like to elaborate a little bit on what trustworthiness means for them in this context? If you would be willing to say a few words about that, please raise your hand or say so in the chat window. No hands. Then I propose, in view of the time, that we move on, and there are still some other questions where I will again invite some discussion. Let's move on to the next question, Sara.

So this is a rating question, because if we speak about FAIR assessment of services, that can still mean many different things to many different people, and here I've tried to summarize it into three different levels of maturity, if you like. One is around sharing good practices and recommendations for FAIR-enabling services. The second one is a self-assessment tool, already a little bit more formalized. And the third one would really be formal certification, so a formal certification stamp for FAIR-enabling services. My question for you is how important you feel each of these three different types of assessment slash certification is. Do you think just sharing good practices is the most important? Do you feel formal certification is most important? Or the self-assessment tool, which perhaps sits a bit in the middle?
In the next question, I'm going to ask you to elaborate on why you chose those values, but for now it's really just a rating of one to ten. Let's wait a little bit; 50 people have given their response, and there might be a few still coming in. Interesting to see that most people find sharing good practices and recommendations most important, then self-assessment, and then certification. But for certification, you see these two bumps, so there's also a part of the audience that feels very strongly about that. That's really interesting. We'll leave it open for a few more seconds: last opportunity to cast your vote. So again, thank you very much, that's really valuable and helpful. Let's move on to the next slide, where I wanted to ask you to motivate that: in particular, if you have given different scores, it would be really great to understand why you value one of these elements, or find it more important, than one of the others. This is the last question for my bit of the session. So please tell me: if you find best practices, good practices, more important than formal certification, or the other way around, why is that? Okay. All equally important, something about flexibility; yeah, I understand that argument. Steps one and two might be a prerequisite to get to three; that's also a really good insight. Many people also gave equal scores; that's also very good to know. Somebody here mentions the aspect of demoralization: maybe a side effect of certification can be that some people are left out, which is of course also not something that we wish. There is already CoreTrustSeal in combination with FAIR guidelines: a very, very good observation, and we'll speak more about that in the second bit of the session. Francoise, thank you for your comment: important to see it as a stepped process, where maybe we would start with establishing good practice, et cetera, and then move on to self-assessment and certification. 28 people have given their feedback so far. In view of the time we need to move on shortly, so last few seconds to share your motivation, if you wish. Great. I had wanted to invite people here to motivate their feedback as well, but actually you've already done that in writing. Maybe, time allowing, at the end of the session, where we have 15 minutes for open discussion, we can come back to this, because I think this is a subject that at least I would love to tease out in a little bit more detail. But in view of the time, we're midway through the session, so I would propose that we move on to the second part and give the floor to Ilona. I'll stop sharing for the moment. Ilona, the floor is yours.

Yes, hi everyone. I'll start sharing my screen; give me one minute, please. Can you see my screen? Okay. We see the presenter mode, Ilona, so also the other slides, et cetera. Yeah, okay, let me try to fix that. I think when you click share screen, you can choose between your primary or secondary screen. I'll try again. How is it now? We see the slides, but also the PowerPoint, so we also see the navigation. Yes, this is it. Okay, so this should be the right one. Thanks for guiding me through the technical stuff.

So welcome everybody to the second part of this session. I'm Ilona von Stein. I work at DANS in the Netherlands; we are the Netherlands institute for permanent access to research data. I would like to dive with you into the topic of FAIR-enabling repository data services, and I will focus on evaluation, assessment, and certification.
The scope of this part of my presentation is, on the one hand, evaluation and certification of data repositories, and on the other hand, evaluation and certification of FAIR data. So I take the FAIR-enabling perspective, just as Hilke did before, and I also take the perspective of the FAIR digital object; the complementarity between those is very important, and I will dive into that a little deeper as well. With this scope in mind, I would like to provide a little bit of background, because I think that repository practices enable the FAIR principles for digital objects. I think they do so, and this is also a starting point for my work. They do so because those services ensure, at least to a certain extent, the FAIRness of your dataset, and on the other hand they also perform long-term stewardship and curation, which is very important, I think, so that the data remains FAIR over time. If you see repositories in relation to certification, a common starting point is that repositories are assessed against guidelines or standards to evaluate their trustworthiness. A few certification frameworks exist to assess the quality of a repository, and CoreTrustSeal is in common use for that.

So now I turn to what FAIRsFAIR is doing in the area of assessment of FAIR-enabling repositories. I've highlighted three aspects here, and I would like to go through them with you over the next couple of slides. I will touch on our work on the FAIR alignment of certification schemes; I will share a little bit more about a European network of trustworthy repositories that enable FAIR; and, in the light of FAIR-enabling repositories, we are working on providing an improved registry for finding and selecting relevant trustworthy repositories.

So the first thing we are doing under the umbrella of FAIR-enabling is aligning the CoreTrustSeal requirements with FAIR data practices, to identify how repositories can enable FAIR data. All this we do under the conviction that context matters: the evaluation of object FAIRness really cannot be done in isolation from its context. Again, we take the FAIR object versus the FAIR-enabling environment perspective. The design methodology we have here is to use the CoreTrustSeal requirements as a baseline and elaborate them in a way that demonstrates that the repository enables FAIRness. In this capability maturity approach, we use the CoreTrustSeal compliance levels as well as the CMMI approach, the Capability Maturity Model Integration approach. And we hope that with such a maturity approach, we may support repositories at lower levels of maturity in defining and achieving their goals; so we are also focusing on continuous improvement here. The next slide shows three figures in the middle with some outcomes of our work. On the left-hand side, we have proposed an initial CoreTrustSeal+FAIR mapping. In the middle image, you see the CMMI maturity model. And in the right image, there is the CoreTrustSeal process, which is a self-assessment and peer-review model. What I would like to say with this slide is that the ideal outcome of our work will be a CoreTrustSeal process which certifies repositories as FAIR-enabling trustworthy data repositories.
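As a loose sketch of what such a capability-maturity view could look like, the snippet below assigns a maturity level to a few repository requirement themes and lists where a repository might focus its improvement. The level names follow the generic CMMI flavor; the requirement labels and scores are invented for illustration and are not the FAIRsFAIR CoreTrustSeal+FAIR mapping.

```python
# Generic CMMI-style maturity levels (1 = lowest, 5 = highest).
MATURITY_LEVELS = {
    1: "initial / ad hoc",
    2: "managed",
    3: "defined",
    4: "quantitatively managed",
    5: "optimizing",
}

# A repository's (invented) self-assessment against a few requirement themes.
repository_self_assessment = {
    "Data licenses": 3,
    "Appraisal": 2,
    "Preservation plan": 2,
    "Data discovery and identification": 4,
}

def improvement_targets(assessment: dict, target_level: int = 3) -> dict:
    """Return the requirements that currently sit below the chosen target level."""
    return {req: lvl for req, lvl in assessment.items() if lvl < target_level}

for req, lvl in improvement_targets(repository_self_assessment).items():
    print(f"{req}: currently '{MATURITY_LEVELS[lvl]}', aiming for '{MATURITY_LEVELS[3]}'")
```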
It's also important to highlight that within FAIRsFAIR, during the course of the project, and we still have more than one and a half years left, we do not foresee a pass/fail outcome, nor will we perform or govern a formal process of FAIR-enabled certification through CoreTrustSeal. However, it's obviously very important that we share our recommendations for the FAIR integration into CoreTrustSeal with them, and we do so on a regular basis. The CoreTrustSeal Board has also provided a statement of support for this, so they support the work in this respect. If you're interested in this kind of work, I've provided here an overview of some Zenodo links to component documents that we have produced: we worked on CoreTrustSeal+FAIR and also on a report on the FAIR ecosystem. The last bullet is there to tell you that we have an upcoming work package deliverable which integrates those component documents; it will be released at the beginning of June, and we will be seeking wider community feedback.

The second thing we do under the umbrella of FAIR-enabling is offering support, with a CoreTrustSeal angle, to ten FAIRsFAIR-supported repositories. So what we do is go with those repositories on a journey towards trust and FAIR, and in return those repositories provide us with input and share their experience on how their repository practices enable FAIR; so it works both ways. We selected them through a call for repository involvement. We will extend this limited group of ten FAIRsFAIR-supported repositories to a wider European network of trustworthy repositories enabling FAIR data, and obviously we also need to take the wider, more global network of FAIR and CoreTrustSeal stakeholders into account. This slide is meant to provide a little bit more detail on how we support the repositories. You see here the CoreTrustSeal process, which consists of self-assessment and peer review, and you can see a circle with the FAIRsFAIR logo and a yellow band, which indicates that the FAIRsFAIR support engages at the self-assessment point of the process. So we provide support before the repositories formally submit their self-assessment to the CoreTrustSeal Board.

The third thing we do under the umbrella of FAIR-enabling repositories is work on improved descriptions for repository metadata. The ideal outcome would be a better description of organizational and data collection metadata, so that ultimately the relevant repositories become better findable for all stakeholders. This work is supported by the work on the CoreTrustSeal+FAIR alignment as well as our work on object assessment against FAIR. And exactly this point is where I'm heading with the last part of my presentation. So here I have a couple of slides that indicate what we are doing in the area of evaluation of object FAIRness. FAIRsFAIR is developing and running pilots with two primary use cases that help assess the FAIRness of individual datasets within repositories. One is focused on researchers, for whom we will develop a manual self-assessment tool; it's like a FAIR awareness and education tool, meant to be used prior to deposit. And on the other hand, we would like to tailor an automated assessment for data repositories, meant to be used after data publication.
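As a sketch of what an automated, post-publication check could look like in its very simplest form, the snippet below tests a few naive, machine-actionable signals on a dataset record. This is not the FAIRsFAIR tool or its metric specification; the record fields, the checks, and the example identifier are hypothetical and only illustrate the general idea.

```python
import requests  # assumes the third-party 'requests' package is installed

def check_dataset(metadata: dict) -> dict:
    """Run a few naive FAIR-related checks on a dataset record and
    return a pass/fail result per check."""
    results = {}

    # F1-style check: does the record carry an identifier that looks
    # persistent, and does it actually resolve?
    pid = metadata.get("identifier", "")
    results["has_persistent_identifier"] = pid.startswith(
        ("https://doi.org/", "https://hdl.handle.net/")
    )
    if results["has_persistent_identifier"]:
        try:
            results["identifier_resolves"] = requests.head(
                pid, allow_redirects=True, timeout=10
            ).ok
        except requests.RequestException:
            results["identifier_resolves"] = False

    # F2/R1-style checks: minimal descriptive metadata and a license statement.
    results["has_title_and_description"] = bool(metadata.get("title")) and bool(
        metadata.get("description")
    )
    results["has_license"] = bool(metadata.get("license"))
    return results

# Hypothetical record, roughly as a repository might expose it.
record = {
    "identifier": "https://doi.org/10.1234/example-dataset",  # placeholder DOI
    "title": "Example dataset",
    "description": "Observations from sensor-07",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}
print(check_dataset(record))
```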
If you want to know more about the use cases, the stakeholders, and our design approach for the evaluation of object FAIRness, I've included here one of our deliverables. It's also open for community feedback, and I've put it at the lower end of the slide here. The next slide gives you two tool snippets. On the lower left-hand side, you see a snippet of the manual self-assessment awareness tool, where researchers can fill in questions, raise their awareness, and be educated on how to improve the FAIRness of their data before depositing. And on the right-hand side, there is work in progress on an automated assessment tool, and you see a snippet of it there. Again, this is a slide with some more pointers and links if you're interested in this work. The deliverable on top I already mentioned. We also have a metric specification we are using for our pilots, available on Zenodo as well. And, last but not least, we are also working in close collaboration with the RDA FAIR Data Maturity Model Working Group. We've done testing and contributed to many aspects of their work, and recently we've also compiled a detailed project response with feedback on their specification and guidelines, which is open for everybody to see as well.

To conclude, and I've said it a couple of times already, for me the conclusion is really that FAIR for objects and FAIR-enabling environments evolve in parallel. Another important takeaway is that we are mapping object characteristics to where repositories can enable FAIR. We do so in the conviction that the FAIR and CoreTrustSeal approaches are complementary and well aligned. FAIRsFAIR offers support with a CoreTrustSeal+FAIR angle, and what will be really interesting for the remaining part of the project is to see how we can integrate the object evaluation outcomes into repository assessments, so that we can align repository practices with the FAIR scores of their collections. So we will have interesting times ahead of us. We work here with a great team of people, I think 12 people working on this. DANS is the work package lead, and I'm the work package leader of this work package. We work together with DCC, DataCite, the University of Bremen (which runs PANGAEA), CINES, and also the UK Data Archive. So this is the end of my presentation, and now I would also like to go to Menti. It's the same code, so I will stop sharing my screen, and I would like to invite Sara to open the Menti again, please. Yes, it's there, Ilona. Thank you so much.

Okay, so the first question. Is everybody able to see the Menti slides? Yes, I hear yes, thank you for your confirmation. So: how important is it for you that characteristics of FAIR digital objects are aligned with repository contexts? And I would love to hear some motivation as well, if you dare. I'm still here, just waiting for some responses to come in. Thank you very much to the first contributors. So I see some indications that people do find it important; they share the same opinion as I have. Also interesting that another one says, well, I'd say it's not that important, not too important, because the repository should adapt to the deposit rather than vice versa. Yes, clear enough, thanks for the contribution. And what I take from the answers here as well is that community acceptance of the FAIR principles is very important before they can be elaborated into repository requirements. Thank you so much.
I'm just waiting a couple of seconds before I go to the next question. It was a difficult question to start with, but thank you for formulating and expressing your views. I was also interested in the one person saying "not important"; thank you for sharing your views. Is there somebody who would like to share his or her view with us, either by raising your hand in Zoom or by making yourself known in the chat? I don't see any raised hands and I don't see anything in the chat. Well, thank you very much for your written input, very helpful to us. I would like to propose to go to the next question, please. Maybe this gets you going: what are the main challenges around FAIR-enabling repositories? Would you be so kind as to give your opinion on this? So I see one response coming in, thank you, three now. Not all data is FAIR yet: that's an interesting one as well. Everything is still evolving around the indicators; the lack of clarity, which also relates to the things that are still evolving. In the chat I also see a participant coming in on sustainability: that will be a main challenge, and it also becomes apparent that costs and staff resources are a point or an issue you are thinking about when you think about a challenge. There are comments in the chat from a participant stating that it may seem a difficult process, maybe too difficult. Is there anybody who would like to share with me the reasoning behind a main challenge around FAIR-enabling repository certification? Please raise your hand or make yourself known in the chat, and I will unmute you in a bit. You are on mute. Yes, please.

Hi, thanks. I wanted to raise one of the challenges for us: I work for GBIF, the Global Biodiversity Information Facility, and it's sort of like Schrödinger's repository; we both are and are not a repository. We are a federated network. There are ways in which data have, and always have, persisted. Trying to approach certification from a federated or network model is quite challenging. We feel that we align very well across the board on FAIR data metrics, but trying to walk through a step-wise certification process presents real challenges for us. Yeah, okay, thanks for sharing that. And what more do you think should be done to improve that? Who should take action on that, do you think? Okay, I think Kyle lowered his hand, so he cannot talk anymore. Don't worry, Kyle. Do you want to say something, Kyle? Or should I proceed? That's okay as well.

While we are waiting, I have also received something in the chat, I think from Hella Hollander: is it possible to do this step by step? Is there something you would like to say about that, Hella, to the audience? I'll try to find you and unmute you; one moment, please. Hi, I'm Hella Hollander, a colleague of Ilona working at DANS, head of the archiving team but also a project leader in ARIADNEplus and other projects, working mostly with different communities from the cultural heritage perspective. And I think finding the start, the beginning of this process, is the most difficult part. If I look at people who are at the very beginning of this process: where do I start? Who do I contact? Can I do this step by step? How can I learn? How do I get the expertise? Is it really that difficult? So the question is about something like a roadmap, perhaps. And I commented that it's also a challenge to have enough reviewers, because when you have 40 people willing to take one step ahead, how do you train those people? How do you help them?
So these were my questions. Okay, thank you. Yeah, I really understand that; so really the basic question is, where should I start? Like I explained, I can give you a little pointer you might be able to use in your community. For example, we've created some workshop material with a road-mapping exercise and a stakeholder mind-map exercise. In the first instance we tailored it towards the ten supported repositories; however, we've also generalized it, so it can be used by others seeking trustworthiness certification and FAIR as well. So yeah, we're trying to deploy activities in that area too. Yeah, thanks. With an eye on the time, I would like to go to the next question; I have two left. So the next question, to give it a little bit of a bright side: what are the main opportunities offered through FAIR-enabling repository certification? Open science in general: yeah, that's an important one. Easing the burden on researchers: actually, that's what it's all about, right? That we make life easier for researchers, so that it takes the burden of making data FAIR off individual researchers. Yeah, I agree with that. I also like to see that the key phrase of improvement is there: through good practices, we all work together to improve our repository practices, ultimately serving the researcher. I like that as well. Thank you so much.

I did see a raised hand by, I think, Paolo Manghi a couple of minutes ago, so I would like to open the microphone for Paolo Manghi to respond. Hello, that was for the previous question. Okay, then we go back to the challenges, sure. Well, it's actually both sides, I think. This links to the comment that I made earlier in the session. I think repositories cannot be left alone in this FAIRness certification process. There are aspects of FAIRness that cannot just be measured based on the metadata, but must be ensured by the process that generates the data. The payload itself, for example, the quality of the content in terms of the ability to reuse it, cannot just be described by metadata; it must be ensured by the process that generates the data. If it's a thematic service that performs the deposition on behalf of the users, ensures that certain conditions are respected, is certified accordingly, and its certification can even be stored as part of its accounting within the repository, then this process is simplified. Again, thematic services should be more involved. The coupling of where we do science and where we store science is where open science should end up; the two systems should be connected. Yeah, okay, thank you for your comment. I also think that quality data curation really should involve community experts, and current community practices should be respected, so I support that as well. Thank you, Paolo.

Is there anybody in the audience who would like to share his or her opinion on one of the things they answered as an opportunity offered through FAIR-enabling repository certification? I don't see anything coming in, so I would like to suggest we go to the next question, which is the last one. The question I would like you to think about here is: how much do you consider trustworthy data repository status and FAIR data to be a journey? One is strongly disagree, ten is strongly agree. I see 20 responses coming in already, thank you so much. Well, it's very clear.
You really do see this as a journey, and I take this perspective as well, because I think that if we have an approach for repositories that gives them some indicators of where they are, where they want to go, and what they can do to improve, then they can reach a higher level of maturity. So I am in favor of this as well. And somebody in the chat mentioned that it's very important to go step by step, which hints towards a journey as well; thank you for that. A last opportunity for the audience to reflect on one of these Mentimeter discussions: if there's something you would like to say, please raise your hand or use the chat; the chat and raised hands remain open, obviously. So that would be the end of the FAIR-enabling data repository services perspective. With an eye on the time, we have 15 minutes left for a general Q&A round, so we can go back to the FAIR-enabling services from Hilke, or we can dive a little bit deeper into the repositories, whatever you like. And I would like to ask Sara to take back control of the shared screen, please. Yeah, I have it already. Yeah, and if you would be so kind as to put up the last slides and moderate the Q&A session, that would be great.

So if I go back to the chat and the comments I've seen there, a few of you raised the point of the effort needed to enter such a process, in particular for small repositories. I've seen comments by Andras, Joy Davidson, and Keith Russell, maybe. So I don't know if any of you want to elaborate further on this. Andras, yes. Give me a second. Yes, you know, sorry. Yeah, go on. I have added another comment in the chat window. I think that while certifying repositories is important, it's even more important to create a FAIR-enabling workflow, a FAIR-enabling process, of which the repository, the archiving place, is only a part. And I think in this process more components, all components, are important. So the burden shouldn't be put on the researcher; rather, the researcher should use proper tools, FAIR-enabling tools, and proper practices, and the whole environment, the services, should also be FAIR-enabling. In this way, we could really bring lots of research data to be FAIR. And obviously, certifying repositories or making repositories more FAIR-friendly is important, but what I see is that some of this burden could be shifted from the repository operators to, say, the creators of repository software. What I'm saying is obviously not true for big data centers which operate with their own software, but for small repositories I think it's more important to have the software they use be FAIR-friendly or FAIR-supporting. And of course, software is not everything: they need to change their practices, they need to adjust their practices. But my opinion is that there are much more important steps than just the certification of the repositories or the efforts of the repository managers. Yes, okay, thank you for sharing that. I fully agree that we are working here with the full FAIR ecosystem, and we've also tried to incorporate the broader view, for Hilke and me together, in sharing our work on evaluation in a FAIR ecosystem. And what I would like to contribute as well, also referring to the Turning FAIR into Reality report, is that I think it would be good to have registries for finding such components of an ecosystem, so that it becomes easier for users to find the relevant ones ultimately. Absolutely, thank you. I believe Mustafa also wants to reply.
Can you try to unmute yourself? Yeah. Hi, thanks, Sara, and thanks to Hilke and Ilona for the great presentations. I just wanted to react to the question of the cost and the effort needed for certification, and I would like to ask the reverse question: not the cost of the certification itself, but the cost of not being certified. The thing is that people think certification is about gaining something like a label or a badge, some kind of recognition and reputation. I think that is part of certification, but it's actually the trivial part. The most important part is the process behind it, more than the reputation you gain from it. And if we think of it like this, I think it becomes more obvious what the cost of not doing it is. If you're not going through this process, you run the risk of not applying good practice, which we all, I think, agreed is an important aspect. You run the risk of not following community standards. You run many risks of not doing your job correctly, basically. So I think the key point for me is not to focus on the end result of the certification, but more on the process and what it brings to you as a business, as a service provider. I think the advantages are quite obvious if you look at it from that perspective, rather than just gaining something at the end. So it was just a comment, thanks. Yeah, thank you so much. So I presume you are also really in favor of the journey approach towards it. Thank you for sharing this view, Mustafa. Thank you.

And since we are reaching the end, I would like to close with one comment I saw from Francoise Genova. I'm not sure if she's still there, but she was asking, I think, a provocative question: is this just a problem of costs and resources, or does it also include expertise? I don't know, Francoise, if you want to expand on this, since this might also touch on another area of activities where FAIRsFAIR is involved, which is skills and the professionalization of those who do this work. So Francoise, are you there? I don't see her in the list anymore, Sara. Can you speak now, Francoise? I guess, yes, now I can. Yes, I am still here, but you didn't see me. So I was trying to understand, because I understand the problem of costs, of course, and it was a big thing in the middle of the screen, so one has to think about what it means. It's clear that there is the question of smaller repositories that Andras was discussing. But I think that in the cost, when you say the cost of staff and so on, there is also the question of expertise. And this is not only for FAIR, it's for the whole question of repositories and trustworthiness. But in the case of FAIR, there is in some cases additional expertise needed beyond what repositories are used to doing, especially for those which, if you think about the CoreTrustSeal scale of compliance, are just taking the data as it is and then preserving it and distributing it. To get FAIRness, repositories may have a much stronger role, especially if they help people bring in more elements of FAIRness. So expertise is really included in the cost and in the skills, as you say, but I think that more expertise is required of the staff if you want to have FAIR-enabling repositories in general. Thank you, Francoise. That all comes back to data curation activities as well, right?
So in order for the data to be available and accessible in the long term as well, we need curation activities for that. Thank you for this contribution. Andras, I see your hand raised, so you should be able to unmute yourself now. Sorry, I had just left my hand raised. Okay, great. So I don't see any other hands raised, so I think we can thank all our participants and the chairs for this very active session. Thank you again for joining us. Sorry. And thank you to everybody in the audience for joining us and sharing your views; super valuable, thank you. All the materials will be uploaded to the EOSC-hub Week event page, and we'll also share them via the FAIRsFAIR channels. So stay tuned, and have a nice rest of EOSC-hub Week. Thank you all.