Thank you for joining one more call. Today's call is dedicated to data repositories and research data: how we aggregate content providers that supply research data records, and how we get this research data into our research graph, into our infrastructure. This is a community call to clarify some issues regarding the content acquisition policy, and what we need to change in our guidelines to address some challenges around research data, because OpenAIRE is just starting a campaign to have more data archives registered in the OpenAIRE infrastructure. So this is a way to start that campaign by involving the community, involving those that are in fact providers to OpenAIRE.

As usual, we start with some updates — we like to share news or highlights regarding the PROVIDE service. Then our focus will be the research graph and data repositories, the content acquisition policy, the changes and updates we are making to the guidelines for data archives, and how you can contribute feedback during the public consultation period that we are running right now. So first the news and updates; then we have Paolo Manghi, the OpenAIRE technical director, to present the OpenAIRE Research Graph and discuss the role of data repositories and the expansion we are doing within our content acquisition policy; and then we discuss the updates together.

Okay, thank you for joining this call. Let's start with some news. The first one you already know, but it's always important to highlight: we did a redesign of the PROVIDE service. It has been available in production since April, and we are still making slight changes.
In May and June we ran several user tests with ten repository managers. From those tests we are selecting the issues that we need to change or improve in the layout of the PROVIDE service. User tests are always important for any service, but they were particularly important for us: we did the whole redesign of the service based on input from workshops and surveys we ran last year and early this year, but we realized we still needed to change and update some things in the layout to address difficulties that repository managers and other users of the service encountered during the tests. Still, I think we are quite happy with the service we have running now. The changes that would really be a big change in the layout we will postpone to a second version of this layout; the other changes we are doing now.

Broker events: we already highlighted this, not in the last newsletter but in the previous one, and as we didn't have a community call in June due to the Open Repositories conference, I just want to note that in May we generated updates in the broker events. They are available, and you can check them in PROVIDE. It's also important to say that, partly as a result of user tests, we received some complaints about the notifications via email. In fact, we had some issues with sending notifications via email and with making the information from the broker events available in the notification area of our dashboard. The technical team has already identified and solved those issues, and we will put the fix into production as soon as possible. What we are going to do is clean the notifications for all users and ask you to generate new subscriptions.
Then we will start generating these notifications properly, and sending them properly via email to those that have asked to receive notifications that way. So this is important — your feedback mattered, and I think the problems are more or less solved. We will put the fix in production as soon as possible; I still need to check with some colleagues, and if they are here they can tell us later when we will be able to do this. But I can already say this matters to you: some of those participating in this call have in fact reported this error.

I will ask my colleague Andrea to share the link for the next news item. The Canadian Association of Research Libraries, together with OpenAIRE and with the participation of the company 4Science, has developed plugins for DSpace versions 5 and 6 to comply with version 4 of the OpenAIRE literature guidelines, which is good. OpenAIRE is quite well aligned with the release of new DSpace versions: version 7 will be closely aligned with the last version of our guidelines, but we are aware of some limitations, because the previous versions are not compatible by default. So we are all working together to promote this global interoperability, and it was great to have this contribution from the Canadian Association of Research Libraries to sponsor and support these developments specifically for DSpace. We wanted to highlight these developments in this call; if you are not aware, we also published this yesterday in the newsletter, and Andrea will share the link to the news item here.

The last novelty, before we hand over to Paolo Manghi, is about the usage statistics. You are aware of some issues with the usage statistics service and the availability of correct figures in the PROVIDE service. It's important to highlight two things.
The real statistics numbers are already available in EXPLORE. On the repository page in EXPLORE — on the providers page — you can see the numbers there; they are correct, as they are in the other places in EXPLORE where information related to each repository provider appears. The statistics information is not yet fixed in the PROVIDE service. We also have some colleagues here responsible for the usage statistics service, so if you have questions, please ask; we can reply in the chat or later in this call. So feel free to ask about this — I will not reply now, but later we can answer or make comments, and if you find any issue, just share it with us. For the usage statistics specifically, we are gradually performing the corrections repository by repository, for those repositories that have already enabled the usage statistics service. For new repositories there is no problem; it is only a problem for those that enabled the service this year, last year, or two years ago. We will fix this, but for new registrations there is no issue.

Okay, these are the highlights. There are other interesting things as well. In June we invited you all to participate in a session during the Open Repositories conference, where Paolo Manghi presented the COVID-19 gateway, an open science gateway that we have in OpenAIRE. It's another interesting novelty — a different service, not the PROVIDE service — but I think you need to be aware of it, because the content that you provide to OpenAIRE is also available in that gateway. There are other things that are also new, but I just wanted to highlight these specific items.
So let's move to the next topic. I'll invite Paolo, the technical director of OpenAIRE, to join me and give this presentation, so that we are aware of the role of data repositories in our infrastructure, what we want to change, what is new, and what we want to promote in the coming months. Thank you, Paolo, for your availability.

Thank you all. Let me try to show my slides first. Can you see them now? Yes, but not the right one. Okay, wait a second — it's always an issue selecting the proper screen. Okay, so let me go quickly through this presentation. You already know a little bit about OpenAIRE, but what we're trying to do at the technical and networking level is to facilitate this process of bridging the work done by scientists and the world where the results are published — making these two worlds as connected as possible, possibly transparently. What we would ideally like to happen is that scientists perform science and all their results are published automatically on their behalf, with prior authorization, by the machinery they are using. Exceptions made, of course, for the narrative part, which the scientist is in charge of; but we can surely dream of a world where this process, especially facing the needs of open science, would be dealt with mainly by machines. Open science requires us to publish the whole process, and this requires a lot of manual effort — unbearable if we compare it to the traditional procedures. It's one of the barriers we know. So OpenAIRE is working in that direction, globally aligning all the activities which are necessary for this paradigm of open science to be implemented. For that, technically speaking — I'm skipping the networking part — we provide a number of services.
The layer at the bottom, as you can see, shows a number of services in support of publishing science: data management plans with Argos, Amnesia for anonymization of data, Zenodo for storage, persistence and long-term preservation of data, as well as other services that allow us to bridge — again, to automatically publish our scientific results from thematic services to scholarly communication data sources. On top of that, we build another set of services which are there basically to monitor quality and track how science is doing overall. For example, in ScholeXplorer we build a collection of links between publications and datasets, and we develop the interoperability guidelines, which you've heard of several times, together with the communities. This is the most important thing, I think: these are not top-down, but the result of a bottom-up process which involves the communities, as Pedro stressed before. We provide usage analytics and brokering. And on top of that we build the added-value services that we can obtain by working on the content that we're collecting — coming from the services below, but also from the institutional repositories and from the data repositories, which are the subject of today's talk.

What we are building at the core is what we call the OpenAIRE Research Graph. We call it a graph because it's a metadata collection of entities of different types: in the same data collection you have objects which describe different kinds of things — for example publications, data, software, projects, authors, organizations. All of these are of different kinds and have different properties, but at the same time they are connected by semantic relationships — semantic links which determine the kind of relationship between two objects.
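The graph idea just described — typed entities joined by named semantic links — can be pictured with a minimal sketch. Everything here (entity kinds, relation names, helper functions) is illustrative and not the actual OpenAIRE data model:

```python
# Minimal sketch of a graph of typed entities and semantic links.
# Entity kinds and relation names are illustrative only.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    id: str
    kind: str      # e.g. "publication", "dataset", "project", "organization"
    title: str

@dataclass
class Graph:
    entities: dict = field(default_factory=dict)
    links: list = field(default_factory=list)   # (source_id, relation, target_id)

    def add(self, e: Entity):
        self.entities[e.id] = e

    def relate(self, src: str, relation: str, dst: str):
        self.links.append((src, relation, dst))

    def related(self, src: str, relation: str):
        # follow all links of a given semantic type from one entity
        return [self.entities[d] for s, r, d in self.links
                if s == src and r == relation]

g = Graph()
g.add(Entity("pub1", "publication", "A study"))
g.add(Entity("data1", "dataset", "Supporting data"))
g.add(Entity("proj1", "project", "An EC grant"))
g.relate("pub1", "isSupplementedBy", "data1")
g.relate("proj1", "funds", "pub1")

print([e.id for e in g.related("pub1", "isSupplementedBy")])  # ['data1']
```

The point of the structure is that objects of different kinds live in one collection while the links carry the semantics — exactly the "this publication was supplemented by this dataset" relationships described above.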
For example, this object is hosted by this repository, or this object was funded by this project, or this publication was supplemented by this dataset — this is why we call it a graph. In order to build this graph, we have to collect metadata from several data sources, which, we stress, should be as aligned as possible in terms of the metadata they expose and how they describe the entities they host. As you can see, we collect from a variety of sources: we don't focus on one kind, but we try to collect from all sources which we believe are trusted by scientists. By trusted we mean sources that scientists typically use in their daily activities, ranging from community-specific to cross-community ones, to scholarly communication services like ORCID, Crossref and DataCite, to thematic publishers and thematic repositories, but also to aggregations of those, which in several cases provide us with a lot of added value — Microsoft Academic is one example, as is OpenCitations. And we range out to other kinds of products, like GitHub for software and research software, etc.
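Most of the repository sources just mentioned expose their metadata for harvesting, typically over OAI-PMH. As a minimal illustration of what that collection step involves, here is a sketch that parses a toy ListRecords response with the Python standard library — the sample XML is invented for illustration, and real endpoints return many records plus a resumptionToken for paging:

```python
# Parse a (toy) OAI-PMH ListRecords response and extract title and type.
import xml.etree.ElementTree as ET

SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Ocean temperature measurements</dc:title>
          <dc:type>dataset</dc:type>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def extract_records(xml_text):
    """Return a list of {'title', 'type'} dicts from a ListRecords payload."""
    root = ET.fromstring(xml_text)
    out = []
    for rec in root.findall(".//oai:record", NS):
        title = rec.findtext(".//dc:title", default="", namespaces=NS)
        rtype = rec.findtext(".//dc:type", default="", namespaces=NS)
        out.append({"title": title, "type": rtype})
    return out

print(extract_records(SAMPLE))
```

The `dc:type` value pulled out here is the field the guidelines care most about, since — as discussed later in the call — it is what gets mapped onto OpenAIRE's classifications.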
We include funders and, as a specific case, research infrastructures. We are trying to open up and give the dignity of research results to products — research outcomes — which are typically hidden by the services we know. Only recently we made a step towards research data: we were used only to publications in OpenAIRE, and we're trying to go beyond that, so we include other kinds of products in the research life cycle, for evaluation and assessment.

How do we build the graph? We collect from roughly 12,000 data sources and build the raw graph. Then we deduplicate the content of the graph, because the same records can be collected from different sources. After this step we enrich the graph by full-text mining: we have about 15 million PDFs on top of which we perform mining in order to infer links between entities — typically, this publication has been funded by this project, or this publication is linked to these datasets or to this piece of software — and we create the corresponding relationships in the graph. So we build a graph that is enormously richer than the one you could build just by collecting the metadata.
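The deduplication step can be pictured with a deliberately naive sketch: group records by DOI when one is present, otherwise by a crudely normalized title. The real OpenAIRE deduplication uses far richer similarity matching than this; the sketch only illustrates the idea of collapsing the raw graph:

```python
# Naive deduplication: records harvested from different sources are
# grouped by DOI when available, otherwise by a normalized title.
import re
from collections import defaultdict

def key_for(record):
    if record.get("doi"):
        return ("doi", record["doi"].lower())
    # fall back to a crude title normalization
    title = re.sub(r"[^a-z0-9]+", " ", record["title"].lower()).strip()
    return ("title", title)

def deduplicate(records):
    groups = defaultdict(list)
    for r in records:
        groups[key_for(r)].append(r)
    return list(groups.values())

raw = [
    {"title": "Ocean Temperatures!", "doi": "10.1234/abc", "source": "repo A"},
    {"title": "Ocean temperatures",  "doi": "10.1234/ABC", "source": "aggregator B"},
    {"title": "Another record",      "doi": None,          "source": "repo C"},
]

groups = deduplicate(raw)
print(len(groups))  # 2: the two records with the same DOI collapse into one group
```

Each resulting group becomes one merged object in the graph, with its provenance (which sources it came from) preserved.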
On top of this, we propagate content. That's a nice activity, because thanks to the relationships we can propagate the contextual information of one object to another object related to it. For example, if I know that a project has funded a publication, and the publication is supplemented by a dataset, I can easily state that the dataset is also connected with that project. On top of the graph we provide a number of services: we expose the graph through APIs, dumps, etc., and we build services mainly for discovery and monitoring. And as added value, from the graph we can of course redistribute information back to the original sources whenever we are able to enrich their metadata records.

Let's keep going. I'm not going through this slide in detail — of course we include registries, so we rely on existing registries; we are not reinventing PIDs, we are not reinventing the wheel, and these are just some of the ones we use. We count today roughly 30 funders, with 3.5 million projects, and we perform inference at the level of the project, so we can tell with roughly 98.8% accuracy that a publication is related to a project.

PROVIDE was mentioned several times here. One activity you can perform, if you are a data source or content provider, is to register your service in PROVIDE. This means you need to have your data source — a data repository, in this case — registered with re3data. We push for this because we want these registries to become the way we normalize and align information about data sources worldwide. Once it is in, you can point to it from OpenAIRE and start the activity of metadata collection. The first step is of course to make your repository compliant with the guidelines; you can validate this process through
PROVIDE: get feedback on how your metadata complies with the different versions of the guidelines, and get a report on how to improve it. And of course you get other added value, like usage statistics and the enrichment of your metadata through the broker, which you heard about before.

Now let's go through the content acquisition policy — this is an important thing. If you go today to the production portal of OpenAIRE, you will notice that the majority of the objects inside are open access. That's because in the previous era of OpenAIRE we were focusing mainly on open access publications. We kept this rule because the Commission wanted it — it was part of the tender: we had to measure the ratio between open access and non-open-access publications funded by EC projects. That was the deal in the beginning; we needed to somehow measure the uptake of the Commission's open access mandates. Then things changed. What we are currently doing is building a broader graph that contains all possible objects out there, independently of the license, so that we are able to measure trends in Europe and worldwide. In fact, we serve different funders, not only the Commission; we serve research communities, measure research impact, and monitor attitudes. We changed the content acquisition policy for this: we collect everything, as long as it complies with the guidelines — this is the most important thing — and as long as it is material related to research. If you take a look at the guidelines as they are — you will get a deeper look in the next presentation — you will notice they are based on standards. We are not reinventing the wheel; we build application profiles on top of standards, thanks, again, to agreements with the communities. So you should consider the guidelines an evolving product with which you can play; you can give feedback and
together we improve them and make them evolve in the future. Now, the important thing is that when we collect metadata records, ideally we are talking about data repositories versus literature repositories, but we don't really believe in that split. We do see repositories that are, let's say, pure in this sense — they only provide data — but these are mainly specific to particular disciplines. Most of the time we tend to consider data sources as hybrid. For this reason, every time we collect metadata records from a source, we inspect the specific resource type of each object and we try to map it into a meta-category of the graph: publication, data, software, or other research products. Data repositories like Zenodo may contain, for example, research software; it's the same for Figshare, and it's the same for institutional publication repositories, which in several cases contain data and software. So we try to remap the objects to where they belong in the classifications that we have in OpenAIRE. This is a general process you have to take into account: if your repository is not purely data, we don't mind — we include the other material too — but you should make sure this material is properly described in your resource type, so that we can map it to the proper classes.

Added-value services — I'll be quick here, but I wanted to touch on this; what you get out of this is very important. Once your data source is in OpenAIRE, what do you get? Well, of course, more visibility, discovery, etc., but there are other very interesting things. The first one is the broker. When we collect a record from a data repository, we throw it into the graph, we deduplicate it — so we put it together with other similar records, if the dataset has been deposited somewhere else, for example — and we build links around it. So we may find a link between the dataset and a project, or between the dataset and an ORCID
iD you didn't have, for example. We bring them in, and the good news is that we can send this metadata back to you, so you can in principle enrich your collection and make it more complete than it is, thanks to the broker. This is the first benefit. Then, of course, discovery, as I mentioned before: once your dataset is in, it reaches the different communities which we serve today. Through the graph, you may provide your content to specific disciplines, to cross-discipline search, or to funders. Research impact is another important aspect: we provide research impact monitoring, for example for EGI, for the main research infrastructures, for the RDA itself, but also for the Commission. As a result, your datasets will be exposed for monitoring to all scientists who want to report them to the Commission. And this is where I want to show you the integration with third-party services. The EC's SyGMa — the participant portal from the Commission — will show all the datasets potentially related to a project to the project coordinator, or to the person on whose behalf the coordinator is reporting the research outcomes; they will be provided with the list of datasets related to the project and be able to select them. Scopus is the same: if you expose links through your DataCite metadata — links from your research dataset to publications — these links will be exposed to Scopus; they point to our APIs in order to resolve the links. And we will soon be fully integrated with ORCID, so ORCID users will be able to access the integration with OpenAIRE and select all the products which go well beyond publications, and that can include the datasets provided by your repository. All content is available through open APIs and dumps, so you can in any case be sure that your metadata will be made available for science, for research, for
innovation — for anybody who wants to build added value on top of it. I'm done, so if you have any questions I'll be happy to try to answer them.

Many thanks, Paolo. Yes, please feel free — you don't need to use the chat; if you want, just use your microphone and ask questions. I think what Paolo explained in one of the last slides, regarding the integration with third parties, may be a novelty for some of you, and it's quite important: when Paolo mentions that Scopus, SyGMa or ORCID will use, or are using, OpenAIRE, they are in fact using OpenAIRE content and OpenAIRE APIs, and they are linking to your content — the content that OpenAIRE gathers from you and that we have collected in the OpenAIRE infrastructure. So feel free to ask a question, or I will continue with the two or three more slides that I have, just to highlight the changes in the guidelines and how you can participate to align with this new content acquisition policy. I know we have one question in the document that we usually use — let me check — but it's not related to this presentation; I will reply to it at the end. It is a technical issue regarding DSpace repositories, and I will answer it. Okay, if you don't have any questions, that's fine, don't worry — I'm always available for that. Paolo, just stay with us; the call finishes in 23 minutes, and I'm sure there will be questions. Let's see. Okay, thank you, Paolo. I'll just share the screen.

Okay, I will skip this — you can check in the slides that we have already shared the things about the content acquisition policy. So, specifically for data
archive managers, for our data repositories: several of you participating in this call have hybrid repositories in your institutions, as Paolo mentioned — some of you in fact have a repository that provides publications and have registered a different collection of the same repository to also provide data — while other participants have a specific data repository that is, or will be, part of OpenAIRE. Regarding the specific guidelines we have for data archive managers, we are preparing a new version, basically to fully align with our content acquisition policy, which is much more open regarding research data, and to align with the DataCite schema. Our colleagues from Bielefeld University have already prepared a draft, which is publicly available in order to make the feedback process easier, and you can comment on it — it really is a draft, available at this link for you to comment on. What is important to say is that version 3 of the guidelines for data archive managers is aligned with the latest DataCite schema version and also uses additional vocabularies — for example, the COAR vocabularies instead of the old info:eu-repo terms that we had in the literature guidelines. So we also needed to update this version of the guidelines in order to keep the alignment between the different types of guidelines. There are also alignments with the vocabularies that the Confederation of Open Access Repositories (COAR) has made available, regarding access rights for example. And we also want to update the guidelines to align with the FAIR data principles and also
with Plan S. You can check the draft, which is already available at the following link — let me put it here as well — so you can check this information and see how to contribute with comments. For sure we will have another call where we can dedicate some time to the details of this new version of the guidelines, but feel free to visit this page and comment. We also have a Google document where we want to receive your feedback; this document was already available, and we shared it in previous calls and in some workshops we have organized, so feel free to make your contributions there and provide your feedback. But you can also use the OpenAIRE guidelines GitHub repository directly, to create an issue and suggest changes. Basically, this is what we want to highlight: we have this version that is aligned with the new content acquisition policy, which is important in order to facilitate registration, and you can also check this Google document where you can contribute. Maybe Andrea is also sharing all these links in the community call document, just so you are all aware of the registration process.

Paolo also shared a slide — I just put an arrow here — because it's important to say that in these new guidelines we no longer have the concept of a specific OAI set for data. If you can expose the content from your repository based on the specifications we have, you don't need a specific set to comply with, as in the previous version. So just register your repository — this is what I want to highlight. Just register, and we will provide feedback if we find any
issue. In this period, when we don't yet have the new version of the guidelines but we still want your data repository to be registered in OpenAIRE, feel free to register, and then we will check whether you need to change something in your interface. Do register, because in terms of support and the way we disseminate information properly, this is a strange period: there is a misalignment between the last published version of the guidelines, the content acquisition policy, and how we want to proceed. I hope that after the summer we will have the final version of the guidelines, with full alignment, and our validator aligned with the last version of the guidelines as well. In the meantime, because of this misalignment between guidelines, validator and content acquisition policy, feel free to check the draft and to register, and if something is not possible for us, we will provide feedback. You don't need a successful validation against version 2 of the guidelines to be registered in OpenAIRE — we will handle that for you and provide feedback. Somewhere in September or October, I hope we will have the conditions to have all our services aligned, from the guidelines to the validator, in order to provide appropriate support for you. I also want to remind you that you need to be registered in the re3data registry of research data repositories before requesting registration in OpenAIRE. So this is the information I wanted to give after Paolo's presentation. Is there any question?

We have one question here in the chat — actually in the Google document — regarding DSpace values: do the dc:type values in DSpace have to be the same values as required in DRIVER to be compatible with
OpenAIRE? I will reply, but we have other colleagues from OpenAIRE here who are welcome to jump in and answer. In the previous versions of the OpenAIRE guidelines, the dc:type values were fully aligned with DRIVER. Now the dc:type values that we are suggesting are aligned with the COAR resource types vocabulary. So please check that: there are the values from DRIVER, used in previous versions of the OpenAIRE guidelines, and now there is a new list of types that we use, which is global — the result of a working group of the Confederation of Open Access Repositories — and the COAR resource types vocabulary is one of the vocabularies we use in our guidelines. But maybe, if I understood the question well, there is another part to the answer: you can use whatever types you want inside your repository — you can align with something internal, or with DRIVER, or with the COAR resource types. What OpenAIRE asks is that you map your types to those we have in our guidelines. So whatever you use internally, what we want from you is to expose types aligned with ours. You can use different types in your repository for internal reasons, to comply with internal procedures — of course there are types that are fairly universal, like article, book, etc., and from my perspective your internal list should be aligned with standards — but what you need to expose is values aligned with our guidelines. You can transform, map, and expose your types in compliance with our guidelines.

We have another question — and remember, if you want to put your questions using audio, you can do it. "We are installing an institutional repository with InvenioRDM, and as far as we know there is a link to OpenAIRE. Is that only a technical link, or is
there a need to fill in a form, or to apply for the connection to OpenAIRE?" Okay — if you are installing the InvenioRDM solution, you need to register your repository: first registering with re3data, and then registering in OpenAIRE. I would like to have contributions from our colleagues at CERN, who are managing InvenioRDM, to understand this question about the link, because the direct link that we have is from Zenodo, not from a standalone installation — but maybe there is something new that is also new to me, which would be good. If we get any additional comment, we will put it here in this document, but the first answer is yes, you need to register. I will clarify whether there is a direct link from this new InvenioRDM software platform — if there is anyone on this call who can help me with this answer, please jump in.

Brianna asks: is there any improvement in the procedure or workflow for cleaning the data for organizations? Paolo, this is a great question from our colleague Brianna that you may want to answer. I think in one of the previous calls I already introduced a little bit this new service that we want to have in OpenAIRE to curate organizations, but maybe you can present it, because this is something critical for repository managers.

It is critical, it is critical. The good news is that we have developed it, so it's there; the bad news is that we haven't tested it yet, and I think this was something that Natalia wanted to bring in. So we need testers, and we need testers from the NOADs — this would be the best scenario for us, because we believe the NOADs are the ones that can do this job at the national level. The idea of the tool is simple: curate a collection, where the starting collection is the one we obtain by doing what we can in terms of technical effort for deduplication, and
curators can at the national level so we we have for the moment uh this assignment of users to countries at the national level they can curate the result which means uh dismiss the results so say no you're wrong or instead group different collections etc this uh this is say you want to share a bit with our community the service should I share do you want to share this again just seven or eight more if you want to do it I uh no no no just go ahead I mean probably you are more familiar with it than I am so uh I used it when we were designing the the service so we have a sample uh data set in there um so a better service just here really a sample so Paulo you you can you can introduce what is this concept and as we have for a five minutes maybe we can share this so yes so what you have is the service suggesting you which are the new duplicates that have been found that require your validation and which are the ones that may raise conflicts and that you need to resolve you can undo everything decision you make but just keeping mind that whatever you do uh in from time to time is taken and used uh as the course the core organization set into the into the open air so you can see you can see here to the right for example you may say yes this belongs to this group or no that doesn't belong to this group this is a choice that you can undo in the subsequent phases or you can search and select another entry that is not been identified automatically and include it into this group you can edit the name of the group the name of the university so you you have a lot of curation power right the core is taken from grid but we will include Roar because Roar is the next step of the grid uh and again just here the the the different types just for you too yes you see you have different identifiers we collect organizations from so many different places every funder has its own categorization uh in some cases they don't even have identifiers right so you will have the PIC for the 
commission you will have is me you will have grid identifiers and at the national level you may have also identifiers um you can group them so saying this organization is equivalent to this ID this ID this ID this ID and this will help us in producing records that are uniform in that perspective now we need uh the knowledge to become very familiar with these two and to tell us what's wrong what is not intuitive and so on but this is the way to go so collaborative approach because the machine we cannot do it by itself the information available is really too little to make decisions just opening different different examples yeah just for universal maribor university of maribor universal maribor this looks good so this was a good guess from the from the duplication but in some cases as you will see this is not easy at all so we translated from seven or eight I don't remember different languages all the naming of institutions like university department etc and all the cities and country names and also the subjects engineering etc so when we can we translate and we replace the word with the code and we just compare the code and then we leave the rest as as the rest of the text the remaining text as the equivalence relationship that may say yes or no but again it's very complicated because the information is sometimes blurry little and not uniformly specified so the language is just one of the many barriers so we need humans in the loop yes please may I say something okay I think that this summer is a perfect time to do this for sure and I think that there are some volunteers in some of our country who would really like to help you and I think that the knowers in their countries knows all these variations in their institution and especially they are speaking the language in their country so it will be good to speed up this process and just to give us to work on this let's say for my point of view I really want to be part of this and to volunteer to help you with this 
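The matching heuristic described in the call (translate institution keywords, cities, and country names into language-neutral codes, then compare the codes rather than the raw strings) can be sketched roughly as below. This is a minimal illustration under stated assumptions: the dictionaries, function names, and similarity threshold are invented for the example and are not the actual OpenAIRE deduplication implementation.

```python
# Sketch of the keyword-translation matching heuristic discussed above.
# The dictionaries and the 0.6 threshold are illustrative assumptions,
# not the real OpenAIRE deduplication code.

# Multilingual institution keywords mapped to language-neutral codes.
KEYWORD_CODES = {
    "university": "UNIV", "universita": "UNIV", "universidad": "UNIV",
    "universite": "UNIV", "univerza": "UNIV", "universitat": "UNIV",
    "department": "DEPT", "dipartimento": "DEPT", "departamento": "DEPT",
    # Example city translation (Slovenian declined form vs. English form).
    "maribor": "MARIBOR", "mariboru": "MARIBOR",
}
# Connective words that carry no matching signal.
STOPWORDS = {"of", "v", "de", "the", "di"}

def normalize(name: str) -> frozenset:
    """Lowercase, drop stopwords, replace known words with codes,
    and keep any remaining words as plain text tokens."""
    tokens = set()
    for word in name.lower().replace(",", " ").split():
        if word in STOPWORDS:
            continue
        tokens.add(KEYWORD_CODES.get(word, word))
    return frozenset(tokens)

def likely_duplicates(a: str, b: str) -> bool:
    """Two names match when their normalized token sets largely overlap."""
    ta, tb = normalize(a), normalize(b)
    overlap = len(ta & tb) / max(len(ta | tb), 1)
    return overlap >= 0.6  # threshold chosen for illustration only

print(likely_duplicates("Univerza v Mariboru", "University of Maribor"))  # True
```

Comparing token sets rather than full strings makes word order, declension, and language irrelevant, which is why "Univerza v Mariboru" and "University of Maribor" can be grouped automatically while genuinely different institutions still fall below the threshold and are left for a human curator.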
I will even do so myself, and I think that there are other NOADs here who would like to work on this. Thank you, this is very good, and we'll make sure to draw on your enthusiasm as much as possible, because we need it. Now, the thing is, we need to make this progress incremental. The first version of the tool that we will provide you with is there for the sole purpose of checking whether you can use it, whether it's intuitive enough, and what questions you may have; then we'll move it to production. So in the first phase there is no need for you to resolve all possible cases; rather, you learn how to do it, and if you have any hints that may help us improve the tool, they are welcome. Then at some point we'll switch it to production, and that is where you will start doing the effective work, and your work will be precious. We don't want to waste your time in the first phase, and we need three or four NOADs who will take the "train the trainers" role, as usual: the ones who will give us feedback, make the tool good enough, and then explain to others how to use it, using the language of the NOADs rather than our uselessly complex technical language. It's very important that we do this, and we need somebody to take the lead on this process, as the technical team is quite busy with so many things, so we'd really appreciate it if some of you would take this forward.

Thank you, Brianna, and thank you, Paolo. Organizations are critical for all scholarly communication services, and quite important for the provide service. If OpenAIRE can fix this, it will be a benefit for the whole community, so this is something important. Now we are coming to an end; I'm not sure if there are any other questions.

One last thing, very important: this data set will be made open, public, for the whole world to use. It's not just an internal product of OpenAIRE. The whole collection will be published, normalized, so that for every organization it will say which are the corresponding equivalent identifiers. We give it back to ROR, we give it back to GRID, we publish it to the world.

Okay, so we can promise that we will say something as soon as possible. I think what is important, Paolo, is that we establish a process, what to do technically and how to involve people, and then have two or three of us coordinating the process. In terms of involvement of the community, as Brianna said, I think it's easy relying on the NOADs, the National Open Access Desks of OpenAIRE, and on other contributors from the community who are also close to our NOADs; in fact, at the national level they are collaborating with our NOADs in different bodies, formal or informal. From those that I know are participating in this call, there are different people, even if they are not part of OpenAIRE, who can also contribute to this. Good. I don't know when and how, but for sure, if not before via a different channel, I put it as a promise from my side: I will provide some information about this in the next community call. We will put a topic there and report whether we are already doing something, or whether it is expected and when.

I was checking the chat; no questions there. I think I have already replied to the question, so feel free to put comments and questions in the community calls Google document that we have; we will answer during the coming days or during the month. Sometimes people use it for other comments, so feel free to do so. Use and access the provide service; we are open to all feedback, as you know, via our helpdesk.

Please check this Google document and the draft of the guidelines; it's quite important. And be available for the campaign that we want to run to have more data repositories with us. Some of the NOADs from OpenAIRE who are here, we are counting on you for this campaign, and on all the others too. So be ready to register your data sources, and if you have any issue (I know that some of you have already registered), just ask. Always feel free to ask questions, because the people who usually answer the tickets in the helpdesk are here on this call, so you have a direct link. Sometimes in the community calls we have specific requests that are answered; today we didn't have any, but feel free to ask.

Thank you for joining this call. Andrea, I'm not sure if you have already put up the dates for the upcoming calls of the second semester. Not yet, but we will. You have the link on our website where we have the agenda and the plan for the different calls; we will put up, if not today then tomorrow, the dates for all the calls of the second semester, and all the recordings and the slides are there, available. So many thanks for joining, and Paolo, many thanks for your support. It's always great to have your presence and participation in these calls; all our content providers realize the power that OpenAIRE has to produce useful services, not only for them specifically but for Europe and for the world. So many thanks for your presence in this call. Stay safe, everyone, and for those who will have holidays, happy holidays. See you in the next call, on the first Wednesday of September; I don't think we will have one in August, so for sure our next call will be on the first Wednesday of September. Bye bye.