Good morning, everyone. Good to see you all here today. This is the second OpenAIRE Graph community call, and with us today we have Claudio Atzori from the OpenAIRE Graph team. He's going to guide us through the Graph workflows as well as the yin and yang of content acquisition. Before we get into his presentation, I would like to quickly guide you through our new website, which we recently launched, and some of the key information you can find there. Let me quickly share my screen to give you this very brief tour. Hopefully you can see my screen now. This is our brand new OpenAIRE Graph site. In the About section you can find all the information about the Graph: a brief overview of how it came about, as well as a deep dive into our numbers and statistics, our roadmap, and the changelog. Next we have the Use & Contribute page, which is all about how you can use the Graph and also contribute to it. On this page we've also created separate guidelines depending on your stakeholder group, whether you're a researcher, an institution, or a funder, so you can find all this information there. Next we have the Community section. The OpenAIRE Graph is community driven, and in order to gather your feedback and provide a forum for your questions, we've created a dedicated webpage for that, the user forum, which we very much invite you to explore. On this page you can also find all the information on our community calls, past ones and what's coming next. Next we have Support, where you can find all the information and supporting material to help you navigate the Graph, some frequently asked questions, and technical support. Finally, we have a separate website for the documentation.
We very much like to be transparent about how we create and operate the Graph, and in the documentation you can find all the information about how the Graph is developed, API information, as well as access to our dataset. So with this very brief run-through of our website, I invite you to have a look, and I'll now pass the floor to Claudio so he can give us his presentation. Thank you.

Thank you, Athena, for the introduction, and good morning, everyone. Thanks for joining; it's a pleasure to give you this presentation. Let me share my screen. Okay, desktop share. Can you see my screen? Yes. Thank you. With this presentation I hope to clarify some aspects of the process behind how the OpenAIRE Graph is built. Everything starts from the scientific content available out there. Let me say this clearly: there would be no OpenAIRE without the repositories. OpenAIRE lives thanks to the existence of the repositories. However, it's important to note that today there are also other initiatives out there that play a key role in what OpenAIRE does. Hence the duality between content that is considered OpenAIRE compliant, coming from the repositories, and complementary content that is not OpenAIRE compliant but is nonetheless important. So, the outline for today: I will briefly describe the OpenAIRE compatible sources by illustrating OpenAIRE PROVIDE, which some of you might already know as one of the OpenAIRE services, the registration process, and the role of the OpenAIRE guidelines; then, as I said, the non-compatible sources; then how the Graph is constructed; a few mentions of some ongoing and future work; and then I'll leave the floor to the discussion. These community calls are inspired by the work we are doing on the Graph.
In the documentation, and also as part of the fact sheet, you will find this diagram, this flow that essentially represents the implementation of the OpenAIRE Graph construction and processing pipeline, and we decided to take it as an inspiration for deciding which topics are important to touch on in this series of events. Last time my colleague Paolo gave an introduction to what the OpenAIRE Graph is, what its purpose is, and the principles behind its conception; today we instead start from the far left of this processing pipeline. We're going to describe the OpenAIRE compatible sources, the role they play, and then the non-compatible sources. On the next occasion we are going to rotate through some other topics we believe are important, for example how to play with the data: since the Graph is publicly available and can drive important insights, we'll illustrate how to derive insights from subsets of the Graph for applications you might have in mind. So again, OpenAIRE compatible sources build on the OpenAIRE guidelines. What are the OpenAIRE guidelines? Everything started, I think it was 2008, with the open access mandate from the European Commission. Back then the intention was to assess the share of publications published in open access, and OpenAIRE started to build on top of that. So it was important to guide repositories on how to expose the metadata of the publications they host in a way that is machine-processable. Essentially, the OpenAIRE guidelines include a set of indications and rules that, when followed, simplify the work for applications that need to consume this data, and make those applications capable of interpreting, for example, the reference to a funding project, or whether a publication is open access or not.
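To make "machine-processable" concrete, here is a minimal sketch of how a consumer could interpret a trimmed oai_dc record that follows the info:eu-repo vocabulary used by the OpenAIRE literature guidelines. The record values and the `interpret` helper are illustrative, not actual OpenAIRE code.

```python
import xml.etree.ElementTree as ET

# A trimmed oai_dc metadata record using the info:eu-repo vocabulary
# that the OpenAIRE literature guidelines build on (values are made up).
RECORD = """
<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>An Example Publication</dc:title>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:relation>info:eu-repo/grantAgreement/EC/FP7/123456</dc:relation>
</oai_dc:dc>
"""

DC = "{http://purl.org/dc/elements/1.1/}"

def interpret(xml_text):
    """Extract the machine-readable access level and funding references."""
    root = ET.fromstring(xml_text)
    rights = [e.text for e in root.iter(DC + "rights")]
    relations = [e.text for e in root.iter(DC + "relation")]
    return {
        "open_access": "info:eu-repo/semantics/openAccess" in rights,
        # The grant code is the last path segment of the grantAgreement URI.
        "grants": [r.split("/")[-1] for r in relations
                   if r.startswith("info:eu-repo/grantAgreement/")],
    }

print(interpret(RECORD))  # → {'open_access': True, 'grants': ['123456']}
```

Because the vocabulary is fixed, any consumer can extract the same semantics without guessing what a free-text rights statement means.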
The primary goal is to have a common understanding of the semantics of the metadata, and in doing so to improve its quality. Of course, this metadata must be exchanged across systems, so it is based on an XML data representation. The latest version of the guidelines is inspired by the DataCite metadata schema, and there would be no metadata exchange without the OAI-PMH protocol, which allows OpenAIRE, and not only OpenAIRE, because it's an initiative of the Open Archives Initiative, to move the data from a repository to an aggregator like OpenAIRE. So, OpenAIRE PROVIDE. PROVIDE is essentially an umbrella service that covers validation of the contents available in a given repository and registration of the repository itself: users can test validation against a given version of the guidelines and perform the registration. It also features the enrichments that OpenAIRE builds, thanks to another OpenAIRE service, the OpenAIRE Broker, and it allows exploring usage statistics, views and downloads, from the UsageCounts service, but we are not going to cover that today. We will focus on validation and registration. The test of compliance of the contents can be performed as a first step when you log in to PROVIDE. It's possible to verify compliance against the version of the guidelines you believe you are compatible with and run a validation test. This results in a validation report that lets you know which rules are well implemented in the content of your repository, and which rules instead are failing or only partially passing, since some rules might not be applicable to every field in every single metadata record, while others aim for wider coverage across the contents. The output of the validation can then be seen from different perspectives.
There is a per-rule percentage of compliance, as well as a detailed report that provides hints on the validation, the rule itself, and the reason for the pass, the failure, or the score indicated in the validation results. Then it's possible to proceed with the registration of the data source. OpenAIRE is already aware of the existing repositories because it leverages registries of repositories, namely OpenDOAR, re3data, and FAIRsharing, which are regularly retrieved. Remember that OpenAIRE is essentially an aggregation system: it's not responsible for the long-term preservation of information about repositories, and it's not meant to supersede or replace the existing registries, but rather, we could say, to create bridges between them. Once the repository is selected, it's possible to edit the repository information, which may not be up to date in a given registry, fill in the rest of the fields, register an interface, indicating the endpoint of the OAI-PMH repository, accept the terms of use for the metadata harvesting and for the reuse of the full texts, and finalize the registration. Let me recall that OpenAIRE runs text and data mining algorithms on top of the open access publications to derive added-value content. This is essentially what the end user perceives as the registration process, and it results in the availability of a repository-specific dashboard that allows you to see what OpenAIRE is doing with the contents of the repository. It shows the history of the aggregations, so a timeline of when the contents from the repository were aggregated, the trend of the usage counts, the number of enrichment events, downloads and views, and so on and so forth.
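A per-rule compliance report of the kind just described could be computed along these lines. The rule names, record fields, and sample records are hypothetical, not the actual PROVIDE rule set; this only shows the shape of the computation.

```python
# Hypothetical sketch of a per-rule validation report: each rule is a
# predicate applied to every record, and the report gives the percentage
# of records that satisfy it.
RULES = {
    "has_title": lambda rec: bool(rec.get("title")),
    "has_rights": lambda rec: rec.get("rights", "").startswith("info:eu-repo/semantics/"),
    "has_identifier": lambda rec: bool(rec.get("identifier")),
}

def validation_report(records):
    report = {}
    for name, rule in RULES.items():
        passed = sum(1 for rec in records if rule(rec))
        report[name] = round(100.0 * passed / len(records), 1)
    return report

records = [
    {"title": "A", "rights": "info:eu-repo/semantics/openAccess", "identifier": "doi:1"},
    {"title": "B", "identifier": "doi:2"},          # no rights statement
    {"title": "", "rights": "all rights reserved"}, # fails all three rules
]
print(validation_report(records))
# → {'has_title': 66.7, 'has_rights': 33.3, 'has_identifier': 66.7}
```

A percentage below 100% for a rule does not necessarily mean an error, since, as noted above, some rules simply don't apply to every record.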
There is more information available in the dashboard, but this is just to give you an understanding of what you might find on the PROVIDE repository dashboard. Behind the curtains, when a repository is aggregated, OpenAIRE has a team of people that carries on with the activities from that point onwards. The aggregation is based on a tool that was developed here at CNR some years ago and is still in operation; it was quite heavily specialized, I must say, to handle the OpenAIRE use case, especially in terms of non-functional requirements and scalability. OpenAIRE is, I can honestly say, the largest application of the D-NET software toolkit, which was customized to accommodate the volume of data that OpenAIRE manages. And this is just a preview of the list of repositories that the OpenAIRE aggregation team manages on a daily basis. When a repository is registered, the aggregation managers can assign an aggregation workflow to it, which is then responsible for performing the aggregation tasks: metadata collection and metadata transformation, where by transformation I mean structural and semantic harmonization. Every repository, here we see the University of Minho repository, has a dedicated dashboard where the operators can see what the system is doing with the repository content. Since OpenAIRE has many, many repositories, and I'll show some numbers later, it's crucial to have proper support for automating the processes. That's why, looking further behind the curtains, we have what we refer to as a workflow management system that automates these activities. On a regular basis, the procedure contacts the repository endpoint to check whether there are new publications, new datasets, new bibliographic material to retrieve. Then, again automatically, those contents are transformed by a different process to produce a new version of the contents to be integrated in the graph.
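The collect-then-transform cycle just described can be sketched as follows. The function names and record shapes are invented for illustration; they stand in for the real OAI-PMH harvesting and harmonization steps, not reproduce them.

```python
# Toy sketch of the two aggregation tasks: a collection step fetches raw
# records from a (mocked) repository endpoint, and a transformation step
# harmonizes them into a common internal shape.
def collect(endpoint):
    """Stand-in for an OAI-PMH harvest; returns raw, source-specific records."""
    return [{"dc:title": "Rec 1"}, {"dc:title": "Rec 2"}]

def transform(raw_records):
    """Structural and semantic harmonization into the internal model."""
    return [{"title": r["dc:title"], "source": "example-repo"} for r in raw_records]

def run_workflow(endpoint):
    # Each run produces a fresh version of the repository's contents,
    # ready to be integrated in the next graph build.
    return transform(collect(endpoint))

version = run_workflow("https://example.org/oai")
print(len(version), version[0]["title"])  # → 2 Rec 1
```

In production each of these steps is a scheduled, monitored workflow rather than a function call, but the collect/transform separation is the same.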
Speaking of the technologies involved, this diagram briefly describes which technologies take part here. On top we have the D-NET application, which in short provides an information system service containing the definitions of the workflows, the configurations, and the knowledge about the internal services available in the system; a manager service; and a process orchestration service that implements the list of steps you see chained with arrows, which is indeed yet another graph, speaking of graphs, implementing the various logical steps the system runs to accomplish its tasks. These layers are based on a PostgreSQL database for part of the data and on MongoDB as a logging system for the workflows and as the backend for some categories of metadata stores, as we refer to them. Then, some years ago, at the time of OpenAIRE-Advance, for those who knew that project, we had to introduce substantial changes in the content aggregation workflows, because back then we started to face the need for a more scalable system, and we opted to embrace the Apache Hadoop environment. Today the vast majority of the largest sources, and I would say not only the largest ones but a substantial number of the data sources, is aggregated inside the Hadoop ecosystem, where, thanks to the coupling between the D-NET orchestration mechanism and the Oozie workflow management system, the metadata collection and transformation workflows run smoothly on a daily basis to keep the contents up to date. For the most processing-intensive parts, the workflows leverage Apache Spark, and we use Zeppelin as the tool for developing notebooks to get insights on the contents, so to dive into the contents. So we've covered the OpenAIRE compatible sources. Now, as I said at the beginning, it's not only about OpenAIRE compatible sources.
There are important initiatives out there, repositories, archives, aggregators, that must be included to build data in the graph that can be perceived as a comprehensive picture of scientific production at a global scale. That's why in this slide you see players like Crossref and DataCite, just to mention two important ones; DOAJ, the Directory of Open Access Journals; various European and international funder databases; publishers that are on board; other research graphs, OpenCitations being an important one; and thematic and institutional repositories like PubMed, and of course Zenodo, HAL, arXiv: all kinds of content that play an important role in the construction of the graph. Focusing on contents that are not necessarily repositories, these registries, funder databases, and other kinds of sources play a crucial role in creating the contextual information needed to run different applications on top of the contents of the graph. The bare bibliographic record of a publication already contains significant information, but alone it might not be enough to run, for example, a comprehensive research assessment that, in the context of the open science movement, aims today to consider different aspects beyond publications. That's why we often stress the need to build the right context around the scientific production, and why funder databases play a crucial role; but not only those: ROR, as a registry of research performing organizations, is important, just like ORCID for providing reliable identification of persons, the authors of publications. In fact, the construction of the graph leverages these concepts: given one of the most common definitions of a graph, as a set of vertices, or nodes, and edges, or relationships, it is crucial to properly identify which records take part in the graph model depicted in this slide.
This image comes directly from the new release of the website and describes the core graph data model. One of the main challenges in the graph construction process is indeed the non-ambiguous identification of the records that take part in this data model. There are, of course, reliable identifiers out there, but they are not always available, or they may be known in one version of the data but not in another version of the same data. So in the early stages of the graph, before the processing takes place, we may have various identities for what we will find out, at the end of the chain, to be the same record; that's why we have the different fingerprints depicted here. The key message is that one of the challenges is indeed the non-ambiguous identification of the records we want to connect in the graph with all these relations. So I thought I'd provide a couple of examples to give you an understanding of how a bibliographic record takes part in the graph construction process; in this case a reduced version of a dataset record that I got from Zenodo, stripped of the non-essential information. Out of this record there is essentially a mapping that transforms it into various elements, or components, of the graph. We have for sure one research product, which is the starting point here, the dataset. In this case it is identified by a DOI; as I said, DOIs are not always available, but when they are, we do leverage them. The record also yields one or more persons, intended as authors; here we have the ORCID of Paolo Manghi. Further down the bibliographic record we have relations with other records; in this case the relation is expressed in the related identifier element, where DOIs are provided.
These being, again, reliable persistent identifiers, we can take for granted that OpenAIRE will also have these nodes in the graph, so we can surely build a relation pointing to them; in this case, a versioning relation. The creator element also includes information about the affiliation of Paolo Manghi, in this case my institute, the Institute of Information Science and Technologies of the Italian National Research Council. This information, again based on a persistent identifier, allows us to create an affiliation relation. Last but not least, the funding reference section here includes references to two projects from the European Commission, which allow us to create references to projects that are instead provided by another source, but that we know are part of the graph. From the European Commission we then get the description of the projects: the descriptors contain the metadata describing a particular project and the list of participants. There are generally one or more participants; in this example there is only one. So this metadata record contributes to the graph construction process with one project and a number of organizations, along with the relationships among them: we know which organization participates in which project. The last example, finally, is extracted from the ROR database: from an organization record we get a research performing organization and a number of hierarchical relations, as ROR also informs us about the parent-child relationships occurring between, for example, a large research institution like CNR and all its sub-institutes, which are then represented in the data as children. So, about the non-compatible sources and their relationship with the compatible ones.
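The record-to-graph mapping just walked through can be sketched roughly as follows. All identifiers and field names below are made up, and the real OpenAIRE model is far richer; this only shows how one record contributes nodes and typed relations.

```python
# Sketch: one DataCite-style record (fields simplified, identifiers
# invented) contributes several nodes and typed relations to the graph.
record = {
    "doi": "10.1234/example.12345",
    "creators": [{
        "name": "Example Author",
        "orcid": "0000-0000-0000-0001",
        "affiliation_ror": "https://ror.org/EXAMPLE",
    }],
    "related": [{"doi": "10.1234/example.12344", "type": "IsNewVersionOf"}],
    "funding": [{"funder": "EC", "project_code": "123456"},
                {"funder": "EC", "project_code": "654321"}],
}

def to_graph(rec):
    nodes, edges = [], []
    product = ("result", rec["doi"])
    nodes.append(product)
    for c in rec["creators"]:
        author = ("author", c["orcid"])
        nodes.append(author)
        edges.append((product, "hasAuthor", author))
        org = ("organization", c["affiliation_ror"])
        nodes.append(org)
        edges.append((author, "hasAffiliation", org))
    # Related products and projects are only referenced here; the nodes
    # themselves are contributed by other sources (e.g. the EC database),
    # exactly as described above.
    for rel in rec["related"]:
        edges.append((product, rel["type"], ("result", rel["doi"])))
    for f in rec["funding"]:
        project = ("project", f"{f['funder']}::{f['project_code']}")
        edges.append((product, "isProducedBy", project))
    return nodes, edges

nodes, edges = to_graph(record)
print(len(nodes), len(edges))  # → 3 5
```

The ambiguity challenge from the previous slide shows up exactly here: when a DOI, ORCID, or ROR ID is missing, the node keys above have to be replaced by computed fingerprints that later deduplication must reconcile.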
The difference here is essentially in what drives the validation process: if for OpenAIRE compatible sources we have the full support of the OpenAIRE guidelines, we cannot say the same for the non-compatible sources. It's not yet 100% clear how to express this validation; there are ongoing discussions at the European Commission level, and in this sense the European Open Science Cloud will draw some lines on the requirements for validation. But essentially this is the main difference: for OpenAIRE compatible sources the guidelines draw a line on how to interpret the content and what can be considered in and what is not; for the rest, it's a blurrier area. Then, as I was mentioning before, the takeaway message and some numbers. The content aggregation today counts roughly 2,000 active data sources from which OpenAIRE collects regularly. These activities amount to roughly 8,000 weekly aggregation workflows, divided between metadata collection and metadata transformation, resulting in failures and successes; the tooling I described before allows the aggregation operators to understand the causes of the failures and keep track, of course, of the successful executions. Then, as the graph gets published roughly once per month, the largest data sources are not updated that frequently, I mean, not on a weekly basis, but more in line with the graph releases: Crossref, for example, is updated once per month, just like the graph. The metadata formats for which parsers are implemented include, for the most part, Dublin Core and DataCite, exposed by repositories as the oai_dc and oai_datacite formats. There are many other formats involved, luckily still enumerable, but this is indeed a challenge for the sustainability of the processes.
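Keeping many metadata formats manageable usually comes down to a parser-per-format registry. This is a hypothetical sketch with invented record fields, not the actual D-NET implementation; the format names follow the OAI-PMH metadataPrefix convention.

```python
# Hypothetical registry mapping each metadata format to its parser;
# unknown formats fail loudly instead of silently producing bad records.
def parse_oai_dc(raw):
    return {"title": raw["dc:title"]}

def parse_oai_datacite(raw):
    return {"title": raw["titles"][0]}

PARSERS = {
    "oai_dc": parse_oai_dc,
    "oai_datacite": parse_oai_datacite,
}

def harmonize(fmt, raw):
    if fmt not in PARSERS:
        raise ValueError(f"no parser registered for format {fmt!r}")
    return PARSERS[fmt](raw)

print(harmonize("oai_dc", {"dc:title": "A"}))        # → {'title': 'A'}
print(harmonize("oai_datacite", {"titles": ["B"]}))  # → {'title': 'B'}
```

The sustainability cost mentioned above is visible even here: every additional format means one more parser to write, test, and keep aligned with the internal model.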
Of course, the more formats the system must handle, the greater the intrinsic complexity. So, ongoing and future work: what is happening behind the curtains? The relationship with the European Open Science Cloud is essentially driving some requirements that we foresee will imply stricter content acquisition policies. What we foresee is that the repositories still marked as DRIVER compatible, the original compatibility level devised many years ago, even before OpenAIRE started as a pilot, will only be aggregated through BASE, the aggregator operated by Bielefeld University. In this sense, the requirements are being inspired by what we are getting from EOSC. Another important aspect is to strengthen the data flow and content monitoring, to better support the aggregation operators in running quality assurance at every single stage of the data processing pipeline, starting from the early stages of content acquisition. Over the years, experience with these activities has shown that playing the role of an aggregator is not an easy task, especially with so many providers involved. Providers have their own autonomy in running updates of their platforms, or in terms of service availability, and the spectrum of cases we have had to face over the years is quite wide. It has happened that contents from repositories undergoing maintenance, or affected by any possible technical cause, disappeared from OpenAIRE as well, just to give you one of the most significant events that can occur. So it's important to strengthen these quality assurance activities, to be notified at a very early stage of events that are significant for the graph construction pipeline, and also to improve the timeliness of the delivery of the contents. This closes my presentation; I hope I did not take more time than I was assigned.
Let me recall the resources: the information you will find on the new OpenAIRE Graph website, and the forum where you can start discussions to get in touch with the OpenAIRE Graph team, to discuss vision, troubleshooting, exchange ideas, any sort of discussion that you believe could be fruitful for others as well. Feel free to initiate it there. And again, thank you.

Thank you very much, Claudio. We've started having quite a few questions from the audience; I've seen there's a lot of activity in the shared document. So I suggest we start with the first one, and I'll give the floor to the audience. If you want to raise your hand... Yannick Viard, you asked two questions in the chat, but if you also want to speak up, you're welcome to open your mic. Otherwise, I can read the question. Okay, no problem.

Yes, Beatriz Coppola, and I'm from Finland, from Aalto University, raising my hand, like opening my mic, for a direct question to the OpenAIRE team. Are you involved, or are you aware, or can this OpenAIRE forum or team support me with a question regarding our Finnish national open science portal data integration into OpenAIRE? We had a call last week with CSC, who confirmed that the Finnish research outputs will be transferred to OpenAIRE via an integration during this year. Will that data go through the same validation process that Claudio presented today? Or does the validation process you presented refer only to the content collected from the different data sources, like open data sources?

Let me try to answer this. Well, I'm not directly involved in the discussion with you or in the creation of a Finnish monitoring dashboard, but from the experience I've had in this regard, and following the various discussion threads internal to OpenAIRE, I think there is an important aspect to underline here.
As the material of interest may vary from country to country, different countries could, fully legitimately, have their own autonomy in defining, just to give you an example, which products should be taken into consideration for research assessment. I think this discussion is also larger than OpenAIRE, and of course a common, maybe cross-country, understanding of what a set of basic inclusion rules could be is something worth discussing. I know there are already country-based monitoring dashboards; we have one we are working on for Italy, and another one for Ireland is also being released. We see it's crucial to confront with other local initiatives to understand the differences, because these differences might be due to various reasons, but everything starts from which products are of interest for the given country. So I suggest you also get in touch with Giulia Malaguarnera; I know she's following these discussions closely, and we can surely have a follow-up on this.

Thank you. That would be really great, because this is the reason we had that call with our collaborators and colleagues at CSC.

From my perspective, from the technical team, I don't know to what extent OpenAIRE is in the position, or has the possibility, to adjust the inclusion rules on a country basis, because the same product, considering the multidisciplinarity of science today, could be of interest for more than one country. So we are not talking just about an in-or-out rule for a given research product.

Yes, and not only multidisciplinarity but also collaborative research: when you collaborate, there are research outputs from collaborations with different countries and universities. So how about those cases? Exactly. Yes, thank you.
Can you put Giulia's contact in the chat, or send it to Beatriz, so she has the contact name? Of course. Thank you.

Okay, so I can read Yannick's questions. The first one is: are data articles harvested as well, like those published by Data in Brief? And they provide a link to an Elsevier journal called Data in Brief. So, articles about data, are they harvested?

Not that I know. I know that ScienceDirect is not a direct data source by itself. We might have some of its contents via Crossref, maybe, if they are open, and it seems to be the case. But I can check in more detail. If an article is in Crossref it should also be in OpenAIRE, just like if it's in DataCite. I don't know this particular journal, the contents of this particular journal, but I can check.

Okay, and I guess a similar answer applies to the other question, about datasets available under open licenses in institutions' open repositories; the example from France is a French institutional repository.

Yes, it's the same answer. I don't know for sure by heart, but that's an investigation I can do.

Okay. Just a message that is valid for all the questions: for the particular cases, we will follow up in the shared document. So if Claudio can check this link and the journals that have been mentioned directly, then we can give a more precise answer in the notes that we will share with all the participants. Okay, anybody else from the audience who wants to ask a question directly?

I can give it a try, if you want. Oh, sure. Thank you. Go ahead. You're welcome.

Alban Thomas from INRAE. First, I can add to the previous question, because we also use Recherche Data Gouv, and it is available in OpenAIRE. But my question is about a set of data inside these repositories: I can see datasets from Recherche Data Gouv, but I cannot see our datasets, our set of data, inside this repository.
I mean, I cannot see only our datasets, the ones from our organization, inside it. Am I clear enough?

Let me see if I understood your concern. You are expecting to find a given number of datasets that you know are available in HAL, but you are not finding them in OpenAIRE?

I can find them, but not only them: I don't find a way to get only the items from our organization.

Okay, so your concern is about how to put them under a given umbrella, that of your organization?

My question was more about whether I have to ensure that this collection, this set of data, complies with the guidelines, or whether it is the repository manager who ensures the set of data complies with them.

Well, yes. If they are aggregated directly from HAL, then it should be the source, so HAL itself, that complies with the OpenAIRE guidelines. However, so far OpenAIRE has been quite, let's say, permissive in the degree of compliance; that's why at the closing of the presentation I mentioned that, for EOSC, we do envision the implementation of stricter acquisition rules. An alternative is to have the same contents available in another repository, maybe directly managed by your institution, where you may have more freedom to adjust the contents to the OpenAIRE guidelines. Otherwise, I mean...

I don't think I've been clear enough, sorry for that. I mean, our organization has a set inside the repository, meaning it belongs to us, and I cannot access only this particular set of items.

Yes, but this is exposed by HAL as a repository, and I don't know if in HAL it's possible to curate how the data in a given set, in a particular set, is exposed to the world. Okay. It may be a matter of data quality. In general, yes. It depends on the repository platform's specific implementation. I admit I don't know if there is a possibility to run curation, or to expose the contents of a particular set in a particular way, different from other sets. Okay, I hadn't thought about this.
And just to finish, should I check these matters in OpenAIRE PROVIDE? Is that my charge, or should I ask other managers to check?

I suggest you first do some searches on OpenAIRE EXPLORE and see if you can find the contents of your interest, and maybe get in touch with a manager of HAL to check if it's already registered. It's an ongoing process. Thank you. Okay. Well, thank you. Thank you.

We have Inge next. Inge, would you like to speak?

Yes, I can. Thanks a lot for all the information, I find it very valuable, as always. I just have a few questions about the things you said about the compatibility of repositories. First of all, you said that for quality assurance, DRIVER repositories would only be considered through BASE. What does that mean for the repositories at the various DRIVER and guideline compliance levels? How do we interpret that information? And the second thing, maybe connected to that: the things you said about EOSC make it more and more important, increasingly important, to be OpenAIRE compliant, if I'm correct; could you expand on that?

Yes, thanks, Inge, for raising this point. I realize that maybe I didn't present it from the right angle: I mentioned the stricter rules and the acquisition only through BASE of the DRIVER compatible data sources, but the higher compliance levels, starting from OpenAIRE 2.0 onwards and up to the most recent version 4 of the guidelines, will still be considered compatible and will therefore still be supported in PROVIDE. The change will only affect repositories that are still, let's say, only DRIVER compliant; those records will make their way to OpenAIRE through BASE. This is the idea, and we are now at the stage of understanding the coverage and the gaps with respect to the sources that are currently proactively aggregated by OpenAIRE and that are DRIVER compliant.
Regarding what the EC is doing, with regard to what the EC will provide as an indication of what we consider as non-compatible sources, my understanding is, to be honest, a bit limited too. I see this as a moving part, and it also relates to settling which requirements are considered by the various stakeholders. I think it's not an easy task, and probably it's a discussion for a higher level. Yeah, that's why I asked it; it was like, where do I raise it? But then I know enough, thank you for this. While I'm here, may I ask another question? Sure. Now, I hear a lot about signposting and event notification as exchange formats for interoperability. In how far is this on the roadmap to integrate, or how do we deal with signposting and event notification? I'm not a technical person, so I don't know that much about it, but I hear our development team talk about it a lot, and I just wanted to know how, at this point in time, this is included or how it's dealt with. Can you repeat the core of your question, sorry? Signposting and event notification, as a way of exchanging information. Okay, signposting, okay. I know this protocol is being taken into account in some projects that OpenAIRE is involved with, and I know that the idea is to support it so as to let, let's say, a given repository push metadata about a given research output directly to OpenAIRE, basically bypassing the traditional metadata harvesting that today leverages OAI-PMH. So in theory this could allow for a more timely delivery of contents and a more up-to-date synchronization between OpenAIRE and the repositories that implement this protocol. There are implications, though, on the general design, because the aggregation process today gets snapshotted every month: we take a picture of the current state at a given moment in time to spot the duplicates and to run the analytics that calculate the statistics.
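For context on the "traditional metadata harvesting" mentioned here: with OAI-PMH the aggregator periodically pulls records from a repository's endpoint, optionally restricted to a date range or a set. A minimal sketch of what such a pull looks like — the endpoint URL and set name below are made up purely for illustration, not real OpenAIRE sources:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def build_listrecords_url(base_url, metadata_prefix="oai_dc",
                          from_date=None, set_spec=None):
    """Build an OAI-PMH ListRecords request URL (the pull-harvesting model)."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date   # incremental: only records changed since this date
    if set_spec:
        params["set"] = set_spec     # selective: restrict to one set/collection
    return f"{base_url}?{urlencode(params)}"

def record_identifiers(response_xml):
    """Extract the record identifiers from a ListRecords XML response."""
    root = ET.fromstring(response_xml)
    return [header.findtext(f"{OAI_NS}identifier")
            for header in root.iter(f"{OAI_NS}header")]

# hypothetical endpoint and set name, for illustration only
url = build_listrecords_url("https://example.org/oai",
                            from_date="2024-01-01",
                            set_spec="collection:mylab")
```

The `set` parameter is also what makes the earlier HAL question tricky: selective harvesting of one institution's set only works if the repository exposes such a set.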
So these are batch processes that are very resource consuming, and of course if we have, orthogonally, other records, other information coming in a, let's say, real-time fashion, these records could not benefit from the deduplication, for example, or could not be taken into account for the calculation of statistics. So on one end it would be good, I think, to have more up-to-date and fresh information, but probably the services will need to be adapted to differentiate the different functionalities. So we need to carefully evaluate the possibilities here. Okay, thanks a lot. Thank you. Okay, we have another question from Aurelie about a detailed list of the compatible sources already harvested. I think, yes, we can redirect you to the documentation page on the website. No, I think this is a good question, because it is in the plan to expose a comprehensive list, and this is not yet visible. I mean, there are some sources that are mentioned there, but I think we mentioned this sometimes in the past: again, for the sake of transparency, the graph portal should expose a complete list of sources indicating the name of the source, maybe the PIDs, the OpenDOAR or re3data identifiers, and the last update date of the contents, because on PROVIDE you have access to an individual repository only provided that you are registered. Instead, on the graph website we should have, in a dedicated page, a long table listing all the sources that OpenAIRE collects directly. It's in the plan. Okay, okay, next one, from Pascaline: how to improve metadata concerning organization affiliation in data already collected in OpenAIRE? For example, there are a lot of variants for my organization, and if we go to OpenAIRE EXPLORE and check for the specific lab, there are a lot, yeah, there are maybe, I mean, there are three pages of organizations.
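To make the batch-versus-streaming tension described above concrete: deduplication runs over a complete snapshot, grouping candidate duplicates by a shared key, so a record arriving between snapshots simply isn't in the group when the merge happens. A toy sketch of the batch shape — the field names and the DOI-only matching are illustrative assumptions, not OpenAIRE's actual schema or similarity logic:

```python
from collections import defaultdict

def dedupe_snapshot(records):
    """Merge duplicate records across a full snapshot, keyed on normalized DOI.

    Illustrative only: real graph deduplication uses much richer similarity
    matching, but the shape is the same -- it needs the whole snapshot at once.
    """
    groups = defaultdict(list)
    for rec in records:
        groups[rec["doi"].strip().lower()].append(rec)
    # keep one representative per group, remembering how many sources had it
    return [{"doi": doi, "title": group[0]["title"], "n_sources": len(group)}
            for doi, group in groups.items()]

# two sources expose the same work with differently-cased DOIs
snapshot = [
    {"doi": "10.1000/X1", "title": "A Study"},
    {"doi": "10.1000/x1", "title": "A Study"},
    {"doi": "10.1000/y2", "title": "Another Study"},
]
merged = dedupe_snapshot(snapshot)
```

A record pushed in after `dedupe_snapshot` has run would sit unmerged until the next monthly pass, which is exactly the design implication raised in the answer.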
I'll copy the link in the chat if you want to have... Yes, yes, I'm seeing that, so thanks Pascaline for this question. Well, this is indeed a good example of the issue I mentioned during the presentation: the precise identification of the records that we want to take into account in doing our job. OpenAIRE tackles this issue with another dedicated service, which started to be operated a couple of years ago and is named OpenOrgs, and it essentially allows us to delegate the curation of these ambiguities in the identification of the research-performing organizations to metadata curators. I can point you to a new website that is being evaluated now towards a beta, or I will pass you the links; I don't have them ready in my browser, but if you want to know how OpenAIRE tackles this problem of the variants of a given organization, you will find answers there, and you can get in touch with us if you're interested in collaborating with that initiative. Thank you. Yes, the website is in a beta version, so maybe we can give updates on its status in our next calls, but we can definitely link resources to explain what this initiative is about, also because we need curators, right, for curating organizations in each country and making sure that the affiliations are listed correctly. So, oh yes, I see that someone is typing and will give the link, thank you. Yeah, I found the link. Yeah, great, great. We don't have questions left and we are also at the end of the meeting time, so I think, Athena, do you have closing remarks? Thanks a lot for joining, everyone; I think we can wrap this up. I've shared the link to the slides in the chat and also the link for registration for our next call, which is on Wednesday, the 20th of March, at the same time, 11 am CET, so we're looking forward to seeing you there as well. We will follow up with the notes, the slides, the recording and all useful information, and, well, thank you once more. I think that's it; see you at the next one.
Thanks a lot everyone goodbye. Thank you. Thank you also from my side. Have a nice day. Have a nice day.