Welcome, everyone, to this webinar. And yes, just checking if I can still manage this screen. Thanks to OpenAIRE for hosting this webinar; we are very delighted to be here. We are participating in both EOSC-hub and OpenAIRE-Advance. We realize that there are already many services available to help researchers make their data more FAIR — findable, accessible, interoperable, and reusable — and more open. This is not a webinar introducing all the services of the two projects, because there are simply too many of them. Instead, we will highlight a few services that can be of help when you want to improve your data according to FAIR. This one-hour webinar is split into four parts. I will start by spending a few minutes quickly introducing EOSC and the two projects. Marjan will follow by introducing the concepts of FAIR and open, and the research lifecycle for the purpose of data management. Regarding services and the lifecycle, as just mentioned, we will focus on the services that directly help researchers put the FAIR data principles into practice. We would like to answer your questions at the end of the webinar, but you are welcome to ask them already in the chat area, of course. We and some other people from EOSC-hub and OpenAIRE might be able to answer your questions on the fly during the webinar; otherwise we will try to answer them at the end. As mentioned, the slides are already available. They will also appear on both project websites, where you can also find information on the services, previous webinars, and other support materials. Now, to start with EOSC and the EOSC building projects. EOSC is the European Open Science Cloud. The irony is that it will not only serve Europe, it does not only support open science, and it does not all have to happen in the cloud.
It does bring together current and future data infrastructures, creating a shared area with all kinds of services to store, analyse, document, link, and reuse data. This is all being done across borders and across scientific disciplines, and in this way it facilitates improving science. So the name entails slightly different things than you might expect. There are other EOSC building projects at the moment, besides OpenAIRE-Advance and EOSC-hub, that I should mention: EOSCpilot, eInfraCentral, and FREYA. Let me now shortly introduce EOSC-hub and OpenAIRE at once. EOSC-hub is all about integrating and managing the services that are offered by around 20 research infrastructures, in this way building on previous work that has already been done within all kinds of projects, like EGI and EUDAT in Europe, but also others. EOSC-hub is a Horizon 2020 project that started in January and will last for three years. As you can see on this slide, it is a huge project with many partners and people involved, and a large 30 million euro budget spread over all these partners. The main objective is to integrate and manage the services for EOSC. Now, what is OpenAIRE? OpenAIRE-Advance also started in January, with a much smaller budget, but it is a continuation of projects going back to 2009. OpenAIRE started as an open access infrastructure and is moving towards a more open science oriented project. So implementing and aligning open science policies across Europe is definitely the first point on the list, together with harvesting all of the open access outputs and linking them to contextual information such as research projects, institute information, or funding information. Logically, OpenAIRE is also active in developing open standards and deploying services that researchers or research communities can use. In line with that, there is training involved, for open science and FAIR, which is often provided by the National Open Access Desks.
So OpenAIRE is about opening, sharing, and reusing research outputs. The projects EOSC-hub and OpenAIRE-Advance are very different in focus: EOSC-hub is more about integrating and federating storage, compute, and application services into EOSC, and a large part of the integrated services that EOSC-hub offers is about big data analysis and big data computational services, while OpenAIRE is more about integrating research data management and publication services into EOSC, supported by a training and community building team. As these are very complementary, it has been decided to work closely together, for example on training and dissemination. Now I would like to hand over to Marjan. — Thank you, Ellen, and also from me, welcome to this joint webinar. Actually, giving a webinar like this is part of the collaboration agreement between the two projects, EOSC-hub and OpenAIRE-Advance, in the area of joint training and dissemination. Talking about open and FAIR data is also talking about a joint ambition. Let's start with this one. You may have seen it; it's one of my favourite images. One of the interpretations of this bird, I think, is a kind of old-school researcher leaving a mound of valuable data open to the whole world. But in our current context: open, surely, but is it FAIR? Could you use the valuable objects that the bird collected, so that you can reuse and benefit from them? Probably not. So open does not imply FAIR. Looking at it from a more formal point of view: this is a slide made by Daniel Spichtinger from the European Commission last year, in which he explained the shift taking place from open towards FAIR in the European Commission's Open Research Data pilot. That pilot started, as the name says, with the ambition to make research data open. But gradually, the Commission endorsed the FAIR principles.
And as Daniel Spichtinger wrote on his slide, the Commission now sees openness as one component of FAIR data and aims to address all of the FAIR aspects in Horizon 2020. In this context, it's good to be aware that FAIR doesn't imply open either, because it is perfectly okay, when you have sensitive data, to restrict access to those data to certain persons or organizations. Of course, you have to be explicit about this, preferably in the grant proposal, or at the latest in your data management plan. If you do that, your restricted access still counts as accessible, even if the data cannot be made open to the whole world. I'm pretty sure that all 200 of you have seen the FAIR data principles, have thought about them, and maybe have tried to implement them or have succeeded in implementing them. The links on the slide refer both to the bullet list of all the principles and to the underlying article in Nature Scientific Data. But principles are principles, as the name says. How do we come from principles to practice? The European Commission provides some guidance on that, but it is very generic. You are probably familiar with the guidance on data management planning that Horizon 2020 provides. In that information, they literally say that the DMP template is inspired by FAIR as a general concept. The conclusion you can draw from that is: okay, it's up to us to translate the principles into our practice, and hopefully there is already some practice within the discipline or the domain we're working in. That is a top-down approach, you could say. There is also a bottom-up one, called the GO FAIR initiative. That started in Europe with a couple of, let's say, early-mover European member states, making optimal use of initiatives and infrastructures that already exist. And it's interesting to note that GO FAIR aims for FAIR data and services. So it goes a bit beyond the data themselves.
Okay, on this slide you see an example of top-down support and bottom-up support. But what does it mean for the daily life of a researcher? Let's take a look at some research lifecycles and consider when it makes sense to think about FAIR. The first example I'd like to show you comes from a paper about embedded network sensor research, so this is a very domain-specific approach. You see, for instance, a reference to device calibration; well, that's not for all of us. It makes sense, of course, to have a lifecycle within a domain or discipline. You see some clusters of activities for processing and analysing the data, and also for publishing both publications and data. That is one example. Another example is a very nice and colourful one. This open access tube map, as they call it, indicates how open access to data and publications has many stakeholders. For the researcher, the route starts at the bottom, in pink, where it says Start Here. When you follow the pink route, you can also see that only a small part, marked as data lifecycle, is indeed about data. So the data lifecycle is a section of the research lifecycle, at least as it is indicated here. Recently there was the first webinar jointly presented and organized by OpenAIRE and EOSC-hub, and there Gary Seepers presented this lifecycle. It indicates the areas or phases for which the two projects deliver services: data management planning, finding and generating datasets, discovering services, and so on, so you can follow the whole route. Interestingly, this is a counterclockwise approach; you don't see that very often, but of course that's a matter of taste. It also indicates that there are several routes towards the end, which is probably only fair. There is a question whether the DCC is part of EOSC-hub. They are part of EOSCpilot. Whether they are in EOSC-hub, I leave for others to answer; I'm not sure off the top of my head.
Moving on, the fourth example lifecycle is from the EOSCpilot project itself. This is a very concise one, and it is remarkable because the first step, at the right-hand side, is discover and reuse, which suggests that that should be the first thing we do. I think that is a very interesting approach, because when we talk about FAIR data, we often only consider making data FAIR, and not so much using FAIR data. So discover and reuse as the first step makes sense. However, in many domains it is very common and standard to generate and create your own data. The model shown here derives from the example of the UK Data Archive; it is already being used in a data management briefing paper in OpenAIRE, and we will use it throughout this presentation. Okay, let's go back now to FAIRness. This is the moment where I start with the second part of this presentation. This is the same lifecycle, but backwards, because it makes sense to adopt the perspective of a future data user when you think about planning for FAIR. The future data user could be yourself, of course. The question then is: what would a reuser need? How should the data be organized in terms of data, metadata, documentation, and all this kind of contextual information? When you are part of a large project which has been going on for some years already, or a domain that already works in this way and shares a lot of data, it may be obvious. But for many researchers it isn't clear from the start. What definitely helps — and it's good that Sarah is in the room — is the checklist that she started in the EUDAT project, a checklist to see how FAIR your data are. One of the things this checklist says is that lots of documentation is needed. Okay, documentation to make your data FAIR all starts with metadata. That shouldn't be a surprise.
Metadata, like bibliographic information, is needed to locate the data and to get a first impression of the content and its relevance for you. We consider a persistent identifier, like a DOI, to be simply part of the metadata. There are generic metadata schemas, such as Dublin Core and DataCite, and many disciplines also have discipline-specific metadata schemas. It makes sense to check with the repository where you will store your data for the long term, and that will preserve it for you, because they often support or expect a certain metadata standard and they can help you. If you're curious about what metadata exists in your discipline or related disciplines, there are a couple of rich sources. FAIRsharing, for instance, is very rich and has a multi-disciplinary collection of standards. It also provides some good guidance on why standards are useful, so if you need to convince someone else, this might be a good place to take a look. In the other section, the RDA, the Research Data Alliance, and the Digital Curation Centre collaborate on metadata overviews. You probably know the first link; it has been disseminated very widely. I think the second link is especially interesting, because that site also refers to tens of tools to create or validate metadata. You see a short list here, and it is longer on the website: for instance, tools for metadata in the DDI schema, SDMX, links to data cubes, Symplectic Elements, Pure, and CERIF. So lots and lots of tools for metadata. In addition to metadata, there is documentation in the broader sense, and sometimes researchers tend to think that all the documentation that is needed is in the article. Probably not: you would probably not include the explanation of all the variables in your article, for instance. Of course, the list that you see here will not be relevant for everyone in each discipline. But it makes good sense, if you use a lab journal or a Jupyter notebook, for instance, to also share that with others.
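To make the metadata discussion a bit more concrete, here is a minimal sketch in Python of a DataCite-style record. The six field names follow the mandatory properties of the DataCite metadata schema; the record values themselves (the DOI, name, and title) are invented placeholders for illustration only.

```python
# Minimal sketch of a DataCite-style metadata record (illustrative values).
# DataCite's mandatory properties: identifier, creators, titles,
# publisher, publication year, and resource type.

MANDATORY = {"identifier", "creators", "titles", "publisher",
             "publicationYear", "resourceType"}

record = {
    "identifier": {"identifier": "10.5281/zenodo.0000000",  # placeholder DOI
                   "identifierType": "DOI"},
    "creators": [{"creatorName": "Doe, Jane"}],
    "titles": [{"title": "Survey data (illustrative example)"}],
    "publisher": "Zenodo",
    "publicationYear": "2018",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}

def missing_mandatory(rec):
    """Return the mandatory DataCite properties absent from a record."""
    return sorted(MANDATORY - set(rec))

print(missing_mandatory(record))          # complete record: nothing missing
print(missing_mandatory({"titles": []}))  # incomplete record: lists the gaps
```

A repository that expects DataCite metadata would run essentially this kind of completeness check on your deposit form before minting a DOI.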
Because it explains a lot about how you worked and what you did at particular moments during the process. In a similar way, it makes sense to share and archive statistical queries and information about the machines and devices you used. When you collected consent from respondents, those consent forms should be archived as well, and so on and so forth. So ideally, you would document and preserve everything that is needed to reproduce the study. My last slide, on the I in FAIR, is about interoperability, because that is often considered geeky and technical, but it need not be. Interoperability is what humans have been doing for ages. You don't have to read the small print on the slide, but words like consensus and standards helped us, of course, to come up with shared notions of what time is and what distance is. And how can we make sure that your piano is aligned with my piano? So it is about speaking the same language. Again, the degree of interoperability will clearly vary between disciplines, and also within a discipline, but keep in mind that interoperability is an ambition and a goal for everyone to work towards. And then we reach the point where I hand over to Ellen again. — Yes, services at the point of need, when you look at the research data lifecycle. So what are the services at the point of need? First of all, both EOSC-hub and OpenAIRE-Advance already provide and maintain many services and support materials. So where do we start? It helps if you know a little about the predecessors of these projects, like OpenAIRE, EUDAT, EGI, or INDIGO-DataCloud. For this webinar, we looked for services for researchers that EOSC-hub or OpenAIRE provide and that improve the FAIRness of your data or project. So we focused on common services, and not on services specific to your discipline, although I understand you would like to know what those services would be for your discipline. So here is a simplified lifecycle with FAIR support.
There are some services that can help researchers put the FAIR data principles into practice. EOSC-hub and OpenAIRE have many more services, to support the research process as well as to support stakeholders other than researchers, such as funders and data providers. In your own research domain or research infrastructure there may also be very relevant data services. So we are aware that we address just a small part of what will gradually become a huge Open Science Cloud. Let's go around, starting at the top. An example would be B2STAGE, to transfer data from EUDAT storage to high-performance computing for analysis. EOSC-hub is also very strong in big data analytics, handling, and creation. We have not focused on these big data services, partly because they are quite new to us, but also because the compute services are more about analysis and less about improving how FAIR or open the results are. For giving access to data, there are some services that we will discuss, as they really support improving FAIR and open science, such as Zenodo, B2SHARE, and B2NOTE. To plan for FAIR and good data management, we will discuss using easyDMP or DMPonline. There are several ways to know about the services that are available, apart from the websites of your own research infrastructure: you have the EOSC building project websites, and for the moment we will also mention the marketplace sites. Now I will shortly introduce some of the services shown here. Let's start with B2FIND, making open science findable. B2FIND is part of the EUDAT CDI and offers a central catalogue of data. Here you can discover data that is shared by research infrastructures and communities. With B2FIND you can run specific searches based on a harmonized set of metadata. There are more catalogues like this that support FAIR, making data findable and accessible, of course.
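As an illustration of what a B2FIND search can look like programmatically: B2FIND is built on the CKAN repository software, so, assuming the instance exposes the standard CKAN search endpoint at b2find.eudat.eu, a faceted query can be sketched as below. The endpoint path and parameter names are CKAN conventions; treat the community facet field as an assumption, not something the webinar specifies.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# B2FIND runs on CKAN, so the generic CKAN search API applies
# (assumption: the instance exposes it at this standard path).
B2FIND_SEARCH = "http://b2find.eudat.eu/api/3/action/package_search"

def build_query(text, community=None, rows=10):
    """Build a CKAN package_search URL for a free-text query."""
    params = {"q": text, "rows": rows}
    if community:
        # Facet on the harvested community (CKAN 'groups' field, assumed).
        params["fq"] = 'groups:"%s"' % community
    return B2FIND_SEARCH + "?" + urlencode(params)

def titles(response_dict):
    """Pull dataset titles out of a CKAN package_search response."""
    return [ds.get("title", "") for ds in response_dict["result"]["results"]]

if __name__ == "__main__":
    # Network call, so only run when executed directly.
    with urlopen(build_query("ocean temperature", rows=5)) as resp:
        print(titles(json.load(resp)))
```

The same two helpers would work against any CKAN-based catalogue, which is exactly the point of harmonized metadata: one query shape across many communities.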
The metadata of your infrastructure can of course also be harvested by B2FIND, which is one example of this kind of catalogue. Then B2DROP is the second service I would like to highlight. It enables working collaboratively on the same files, in research projects with several researchers in different institutes. Versioning is possible, but if you want to publish your data, you move it from B2DROP to B2SHARE. B2SHARE is a way for researchers to store and share small-scale research data from diverse contexts. It is a solution that facilitates research data storage, guarantees long-term persistence of data, and allows data, results, or ideas to be shared worldwide. B2SHARE, as I just mentioned, is integrated with B2DROP. So when you have stored files in B2DROP while you were still updating them during the research, and now want to publish them, it is easy to publish them with B2SHARE. You need to add some more metadata — various domain-specific schemas are supported — and B2SHARE makes sure that research outputs get sustainable, unique identifiers. This definitely makes them findable. Then I need to open the third presentation. Yes. Part of B2SHARE is a nice feature, the license selector, which supports your access policy very well and is based on open source software. With this tool, you can select the appropriate license by answering a few questions; it will finally suggest the right licenses covering your requirements. The suggestion it makes depends, among other things, on the type of object you are depositing — is it software or data? — on the original licenses of the software or data you used, and on the access and distribution rights that you want to allow data consumers. When the data is published with B2SHARE, it already has metadata and a persistent unique identifier, but everyone, including the viewer, can add additional annotations to the data using the B2NOTE service.
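The license selector's question-and-answer flow can be illustrated with a toy decision function. To be clear, this is not the tool's actual logic or license list, just a sketch of the idea that a few yes/no answers about the object type and the rights you want to grant narrow the choice down to a suitable license.

```python
def suggest_license(is_software, allow_commercial_use, require_share_alike):
    """Toy version of a license-selector decision: a few yes/no answers
    narrow the choice. A real selector consults a much larger matrix,
    including the licenses of any software or data you reused."""
    if is_software:
        if require_share_alike:
            return "GPL-3.0"          # copyleft: derivatives stay open
        return "MIT" if allow_commercial_use else "custom non-commercial terms"
    # Data and other non-software content: Creative Commons family.
    if not allow_commercial_use:
        return "CC BY-NC 4.0"
    return "CC BY-SA 4.0" if require_share_alike else "CC BY 4.0"

# A dataset, commercial reuse allowed, no share-alike requirement:
print(suggest_license(is_software=False, allow_commercial_use=True,
                      require_share_alike=False))
```

The real selector asks the same kind of questions interactively and can suggest several compatible licenses rather than a single one.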
The B2NOTE service is integrated with B2SHARE, but it can be integrated with other storage services as well. When the data is published with B2SHARE, you can use B2NOTE to annotate it. Of course, these annotations improve the reusability and interoperability of the data: not only is metadata available for the data, but also semantic annotations. Now, the marketplace is a completely different service, not linked to the EUDAT CDI. It is available through EGI at the moment, but will be part of EOSC-hub. Service providers can add their services with conditions, and research communities can look for a service they would like to have. For service providers, their services become better findable, and EOSC-hub provides templates for service management. This does not really support FAIR data as, for example, other services of EOSC-hub do, such as online storage or the B2 services that we just mentioned, but it does support FAIR services and software, which we believe is just as important. The huge scope of services EOSC will contain becomes even more clear now that Marjan will introduce some of the OpenAIRE services. — Okay, thank you again, Ellen. I'll introduce some services from the OpenAIRE portfolio, and I'll start with the Amnesia tool. The slides are based on a recent webinar presented by Manolis Terrovitis. The goal of Amnesia is to make your personal data shareable, and it refers to so-called microdata.
That could be data about your medical condition, for instance. Understandably, as an individual you may be hesitant to provide such data, and as a project or a company you might be afraid to share it, while the General Data Protection Regulation, the GDPR, requires a very strict protection scheme for that kind of data. The idea of anonymizing data is that the information that identifies an individual is removed from the data before you publish it, so that no sensitive information can be attributed to an individual. When you look at the image: if you combine data sources, and the data in the sources has been anonymized, then it is no longer possible in that particular situation to link the medical data to the social security number that is available in the other dataset. So the idea is very simple, and the goal, of course, is that anonymization allows you to share the maximum amount of data without compromising the privacy of individuals. Amnesia does not only remove direct identifiers, like your name or your social security number; it will also transform quasi-identifiers, like your birth date and zip code, so that individuals cannot be identified in the data. It is currently a public beta version, so you can go there and play around with it. It is available in two flavours: there is an online version, mainly for demonstration and testing, with some sample datasets provided, and you can also download the application, which gives you more functionality in terms of scale and security. And please do give the colleagues behind the tool feedback. There are plans for extensions, of course, adjusting it to health data in particular, so we hope you will find it useful and valuable. Jumping from A to Z, there is also Zenodo. Zenodo is way beyond beta version; it is in production. Zenodo is a repository for all output of EU-funded research.
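The kind of transformation described above can be sketched with a toy k-anonymity example: coarsen the quasi-identifiers until every combination of them is shared by at least k people, so no row can be singled out. Note this is a hand-rolled illustration of the general idea, not Amnesia's actual algorithm, and the generalization steps (zip-code truncation, 10-year age bands) are chosen by hand here, whereas the tool lets you configure generalization hierarchies.

```python
from collections import Counter

def generalize(record):
    """Coarsen quasi-identifiers: keep only the first 2 zip digits
    and a 10-year age band. (Toy generalization, chosen by hand.)"""
    return (record["zip"][:2] + "***", record["age"] // 10 * 10)

def is_k_anonymous(records, k):
    """True if every quasi-identifier combination occurs at least k times."""
    counts = Counter(generalize(r) for r in records)
    return all(c >= k for c in counts.values())

people = [
    {"zip": "75011", "age": 34, "diagnosis": "flu"},
    {"zip": "75012", "age": 37, "diagnosis": "asthma"},
    {"zip": "75013", "age": 31, "diagnosis": "flu"},
    {"zip": "10115", "age": 52, "diagnosis": "diabetes"},
    {"zip": "10117", "age": 58, "diagnosis": "flu"},
]

# Raw zip codes are all unique, so the raw rows could be re-identified;
# after generalization each (zip-prefix, age-band) group has >= 2 members.
print(is_k_anonymous(people, 2))
```

The diagnosis column stays intact; only the columns that could be linked to an external dataset are transformed, which is exactly the trade-off the speaker describes: maximum shared data, minimum re-identification risk.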
What you entrust to Zenodo is stored in the data centre of CERN in Switzerland, and Zenodo provides you with a persistent identifier for every upload. Is it free to use? Yes, with a maximum upload size, sensibly, and it is open to all research outputs from all disciplines. That is a difference with B2SHARE, for instance: while B2SHARE focuses on data, Zenodo doesn't have that kind of focus. Another interesting aspect of Zenodo is that you can acknowledge project funding. When you upload your data or other research output, there is a metadata field called grants, where you can enter your grant identifier, and then OpenAIRE will let your funding agency know. This is perhaps also the point where I should say that there is a human Zenodo curator behind it, and they need to validate your upload, so you may experience a small delay before your data are available in OpenAIRE. What I skipped on the previous slide is DOI versioning, which was one of the most requested features for Zenodo. It has been co-developed — and I am very pleased with that — by the Zenodo team and the B2SHARE team together; Zenodo and B2SHARE are built on the same digital repository platform. DOI versioning is a valuable feature when you deposit, for instance, a major correction of your dataset, or when you have a new wave. Wave is typically a term from longitudinal research; think for instance of surveys or measurements that are periodically repeated and result in a new dataset each time. DOI versioning then gives each new version its own DOI, but also a DOI for the whole series — the family, you could say. So in a publication it's up to you whether you cite the whole series or just a particular version. Another feature of Zenodo is that it makes software preservation very easy: when you have code on GitHub, it is very easy to forward it to Zenodo. I see there is a question in the chat box about a study measuring the FAIRness of Zenodo.
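To show what acknowledging funding looks like in practice, here is a sketch against the Zenodo REST API. Creating a deposition needs a personal access token (the placeholder below is not a real one), and the grant identifier shown — a funder DOI followed by the grant number — follows the format Zenodo documents for its `grants` field; treat the concrete values as illustrative.

```python
import json
from urllib.request import Request, urlopen

ZENODO_API = "https://zenodo.org/api/deposit/depositions"
TOKEN = "YOUR-ACCESS-TOKEN"  # placeholder: create a token in your Zenodo profile

def deposit_metadata(title, creator, grant_id):
    """Build metadata for a new Zenodo deposition. The 'grants' entry is
    what lets OpenAIRE link the upload to your funded project."""
    return {
        "metadata": {
            "title": title,
            "upload_type": "dataset",
            "description": "Survey data (illustrative example).",
            "creators": [{"name": creator}],
            # Format: <funder DOI>::<grant number> (example values).
            "grants": [{"id": grant_id}],
        }
    }

def create_deposition(meta):
    """POST the metadata to Zenodo; returns the created deposition."""
    req = Request(
        ZENODO_API + "?access_token=" + TOKEN,
        data=json.dumps(meta).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    meta = deposit_metadata("Example survey, wave 3", "Doe, Jane",
                            "10.13039/501100000780::283595")
    print(json.dumps(meta, indent=2))  # inspect before actually posting
```

After the files are added and the deposition is published, the DOI comes back in the response metadata, and the grant link is what triggers the notification to the funder that the speaker mentions.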
We'll try to find the link to a study that 4TU did measuring the FAIRness of a couple of repositories; I think Zenodo was in their sample. So there are indications for measuring FAIRness. We have now seen a couple of services, from both projects and the projects that went before, that aim to help you deliver FAIR data at the end of the research cycle — and of course the end of your research cycle is the beginning of a new cycle. But let's take a minute to talk about data management planning. I'm sure you all know about the need for data management planning, and you know that funders and universities increasingly demand that you deliver a data management plan, and that there are tens and tens of DMP templates around. Other webinars have already dealt with what should be addressed in the plan, but my focus here is on two services that you could use for drafting the plan. On the left-hand side you see the well-known DMPonline tool, provided by the Digital Curation Centre, and on the right-hand side you see the not very well-known easyDMP tool. That was an initiative by EUDAT and OpenAIRE, and it will become part of the EOSC-hub service portfolio. You can see from both my screenshots that you can register with both tools and explore the Horizon 2020 template; not surprisingly, the structure of that template is identical in both tools. Yes, there are similarities. Apart from both providing this particular template, both tools allow you to invite others to work on your DMP or to give feedback on it, so that you can collaborate on a particular plan while the DMP is under construction. Of course, you can export your DMP, and both tools also plan to support machine-actionable DMPs.
Very briefly, machine-actionable DMPs refer to the situation where it is possible to automatically extract information from a DMP, which is, for instance, relevant for funders who want to collate all the answers to question 12, or interesting for checking how far the plan has already been implemented, and so on. Apart from the similarities, there are of course differences. The new easyDMP tool, for instance, provides a different kind of guidance for the questions in the template. It is a freer interpretation of the text that the Commission wrote, and that might help you to understand what kind of information the template asks for. The DCC's DMPonline tool, on the other hand, follows the EC guidance text more strictly, more closely. But there you have the option to also get expert guidance from the DCC itself, which is also very helpful. easyDMP is also less literal in another sense: it tries to minimize the number of free-text fields, which means it provides more pull-down menus, and that can be very helpful if you want to select a particular metadata schema or a particular file format that you plan to deliver. Here's another intermezzo, but still, I think it is very relevant. When you demonstrate in your grant proposal — so not in your DMP but in your grant proposal — good and concrete awareness of how to make your data open and FAIR, this may increase your chances of winning the grant. We are very thankful to Ivo Grigorov, the grant support officer, who reminded us of the publication that is also mentioned on the slide, and who also found us some quotes from actual feedback on grant proposals.
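What "machine-actionable" means can be made concrete with a small sketch: given a DMP serialized as JSON, a funder's script can pull out every dataset and its declared license automatically. The field names below loosely follow the RDA DMP Common Standard for machine-actionable DMPs, and the plan itself is invented for illustration.

```python
import json

# An invented mini-DMP, with field names loosely following the
# RDA DMP Common Standard for machine-actionable DMPs.
MADMP = """
{
  "dmp": {
    "title": "DMP for the example survey project",
    "dataset": [
      {"title": "Survey wave 1",
       "distribution": [{"license":
         [{"license_ref": "https://creativecommons.org/licenses/by/4.0/"}]}]},
      {"title": "Interview transcripts",
       "distribution": []}
    ]
  }
}
"""

def dataset_licenses(madmp_json):
    """Collate each dataset's title with its declared license URLs --
    the kind of extraction a funder could run over many DMPs at once."""
    out = {}
    for ds in json.loads(madmp_json)["dmp"]["dataset"]:
        refs = [lic["license_ref"]
                for dist in ds.get("distribution", [])
                for lic in dist.get("license", [])]
        out[ds["title"]] = refs
    return out

print(dataset_licenses(MADMP))
```

The same extraction could flag the second dataset — no distribution and no license declared — which is exactly the kind of automatic compliance check the speaker alludes to.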
Because there are indications that grant proposals receive praise for including an outline of the data management plan, even though a DMP is not required in Horizon 2020 at the proposal stage and is not part of the formal review, in the sense that the data management section would count in the competition between grant proposals, so to speak. You see some nice anonymized quotes that Ivo found for us. For instance: "a clear description is provided of how core datasets and model development can be shared broadly with the scientific community". Or, on the critical side: "data storage and accessibility issues are not considered sufficiently". Other fragments praise "good realization of the commercial potential" and the data management plan, and so on. So the lesson you might want to draw from this is that ideally, already in your grant proposal, you should describe how open and FAIR data will result from your project. To conclude, as Ellen said before, this is a subset of services — services that can help researchers put the FAIR data principles into practice. Clearly, in your own research domain, in your own research infrastructure, there may be very relevant and valuable data services which we have now blatantly ignored. So we are aware that we addressed just a very small part of what will gradually become a huge Open Science Cloud. And although we tried to map services to the research data lifecycle and to FAIR, the lines you see here are only indications. They are not exhaustive; it's not really possible to link a specific service to a specific FAIR principle or part of one. But we hope you see this webinar as an invitation to take a look at some of these services. As we are in the business of reuse, we also reused slides from several colleagues. And that really brings us to the end of the presentation. So, how are we doing with questions? — Okay. Thank you, Marjan, thank you, Ellen, for this very useful webinar.
Actually, to be honest, there are not many questions left that have not already been answered by helpful colleagues in the chat. So a special thanks to Sarah Jones from the DCC and Mark van de Sanden from EOSC-hub for replying to a couple of the questions that were asked. But for completeness' sake, I will paste all the remarks and questions into this field, so that you can all see them in case you were not following the chat — if you just bear with me for a second, I will paste them. So, going up to the top: the first thing I made a note of is that there will be an evaluation form sent out; that is just a practical note, not about the content of the webinar. Then the first question was whether the DCC is part of the EOSC hub. That was answered in the chat: they are part of EOSCpilot. Then there was quite a conversation going on about how to know which data standards are relevant, and whether it is possible to have more than one relevant standard for a certain discipline. There are quite some comments in the chat about that, but maybe Marjan or Ellen, you want to elaborate a bit once you've read through all the answers? — Well, I will. I also see the notion of FAIR metrics; I think that is an interesting question. I'll copy two links into the chat box, which you can check if you want to know more about measuring the FAIRness of existing data. The question was about FAIR metrics for ontologies, and as far as I know, the FAIR metrics do not yet address the FAIRness of ontologies; that is also in line with what was answered in the chat. — Okay, thank you, Marjan. The next question is about the marketplace of EOSC, a question by Sarah Jones. I see Mark has already replied to a couple of questions, including Sarah's. I don't know if you want to add anything there? It is now in the chat. Okay, so I hope that the answer is clear.
Then there was a question by Sean that has not been answered yet: whether the INDIGO services will also appear in the marketplace. Mark, I don't know if you want to answer that in the chat, or Marjan or Ellen, do you know? — This would typically be the kind of question we would hope Mark van de Sanden would answer. We didn't introduce Mark; we asked him to be our backup speaker for when we would get hard EOSC-related questions. Mark, I will make you a moderator; normally you should be able to start your broadcast now. — Hello, can everyone hear me? Okay. For having services listed within the marketplace, within the service catalogue of EOSC-hub, we are working on rules of participation. That means: what do we require and request from service providers to have their services listed in the service catalogue and in the marketplace? We will come with a number of mandatory requirements and some optional requirements on how the services should be described and promoted, how access to the service can be provided, what type of support channels will be available to ask questions or report issues with the service, and on the maturity of the service itself. Then we have procedures for applying — for requesting the listing of services within the marketplace, to provide services within the context of EOSC-hub through the hub. We are also going to work on processes for assessing the services that have applied for listing. It will be a broad process: in principle, any service provider can apply for listing within the service catalogue and the marketplace. So it goes beyond only the services of providers participating in the EOSC-hub project. But for assessing and evaluating the rules of participation, we first look at service providers that are active within the project itself, before we promote the rules of participation externally. I also see a question from Sean: will there be EOSC-hub-proofed software?
Within this, we are looking at different levels in the rules of participation: from services that just have a listing in the service catalogue and the marketplace, towards services that are more integrated within the infrastructure and leverage more of the core services provided by infrastructure providers such as EGI and EUDAT. For those, we increase the level of mandatory requirements for enabling access to and promotion of the services via the marketplace and service catalogue, because there is then more reliance on the service and on the other services provided via the hub. So it is more a matter of how far you are integrated in making use of EOSC-hub services. I'm not sure if we will go to a brand of EOSC-hub-proofed software or services. Also, at the moment we focus more on services, and a service is something different from software. Services are, of course, built on top of software, but there is a whole process and organization behind providing a service; it is not just software. If the service is available as a software package, that can of course also be described as part of the service — that the service is also available as a package which you can use to install a local instance — but that will be done via other channels to providers. If there are any questions, please put a message in the chat and I can respond to it.

Okay, thank you very much, Marc, for this. There is a question that just popped up in the chat about the cooperation agreement between OpenAIRE-Advance and EOSCpilot. I'm not sure if any of you can answer that one right now. Okay, so Marjan says there's already a formal collaboration agreement; I'm not sure if there's anything online. Yes, there's a formal collaboration agreement with OpenAIRE. So it might be worth explaining the difference again between EOSC-hub and EOSCpilot, or maybe linking to an online space with more explanation on that, if that's available. Yeah, maybe a few words on EOSCpilot.
So the word "pilot" in the name of the project already suggests that it is, let's say, a forerunner of the others. It started earlier — I think in early 2016. January 2017? Thank you, January 2017. And it is piloting several things — several concepts, such as the governance of large infrastructures, and also training, and looking at the skills and capacity needed to really use and benefit from something as huge and ambitious as an EOSC. It's a two-year project, and fortunately there is a good overlap in time between EOSCpilot, which still runs this full year 2018, and EOSC-hub and OpenAIRE-Advance, which both started in January 2018. So yes, it can be confusing because we overlap in time. What is very beneficial is that we also overlap to some extent in terms of people: several organizations and individuals collaborate in more than one of these projects. So the idea is that the insights, the agreements, the stakeholders and so on that were already more or less collected in EOSCpilot will carry over to the other EOSC building projects. This is really intended to stimulate continuity of expertise and networks — technical networks, but mainly also people networks.

Shall we then move on to the question that's now at the top of the screen, about measuring FAIRness? Sarah already added a link to a paper that measured the FAIRness of a couple of repositories — that is an interesting read, and you can compare it to the two FAIR metrics links that I put in the chat box. Of course, everyone is exploring; no one has a clear definition of what FAIR exactly is, so we are all exploring and sharing ideas with others. Do you think these are good metrics? Do you think these are relevant parameters, good scales or degrees, and what have you? So there are no final answers yet on the FAIRness of a service or a dataset, but we're really moving towards consensus, I hope.
Okay — I'm not sure if you can hear me. Okay. I think there's one more question, about how services and providers can register in the EOSC. Marc and some others have already chipped in there; I don't know if there's anything additional you want to say on that subject. No, I think what was said in the chat is pretty clear.

So there's one last question popping up now in the chat, from Irina Novak: when you start a new Horizon 2020 project and you're preparing your DMP, can you already rely on the services mentioned in this presentation? Are they beta versions, or are they fully in production? Sorry, Gwen, I missed the question — can you put it on screen again? Can you see it now? Okay. So the question, in summary, is: can the services presented in this presentation already be used, or are they mostly in beta? No, most services are already in production. And Marc, please correct me if I'm wrong, but my understanding is that EOSC-hub is about service integration, not so much about service development. So all services that want to be part of EOSC-hub already need to be at a very robust level; you can use them and trust that they will do what they promise to do. I think the technical term was TRL 9 — that is a level of... what's the word? Sean, help me here. Technology readiness level, okay, thank you.

Okay, I see it's two o'clock, and as we said, we would stop after one hour, so this would be the time to close off this webinar. As I announced before, the recording and the presentation will be made available as soon as possible — probably a bit later today, or else tomorrow. They will appear on the OpenAIRE webinar page in any case, at the link I shared at the beginning.
And I would really like to thank Ellen and Marjan for presenting, and also Marc from EOSC-hub for chipping in with some very valuable comments and remarks. I sincerely hope that we can do this exercise again, and I hope you all enjoyed this webinar. Can I ask you one thing? We do want to improve our webinar services, not only on a technical level but especially on the level of content, so we have made an evaluation form for you to fill in. It only takes one minute of your time, and it would be very useful for us if you could take the trouble to follow the link and fill it in. And like I said, you will be receiving the recording and presentation shortly. Thank you very much; I hope to see you soon. Thank you, Gwen, for helping us present, for hosting this webinar, and — as usual, though not yet usual enough — for the collaboration between the two projects.