Okay, welcome everyone to this community call. This is the fourth community call we are organizing to present and detail what we are developing regarding the services for content providers — for OpenAIRE content provider managers, targeting mainly repository managers, CRIS system managers, and also editors of open access journals. We want to share some novelties and some of the developments we are doing, and, quite importantly, to collect feedback from you. For these community calls we already have some contributions from the community, but of course we want to hear more. It cannot only be us presenting to you; we also want to hear your use cases, your doubts, or the problems you are facing with some of the OpenAIRE services. That is the main idea of these calls: for you to feel part of OpenAIRE, because you are an important contributor to the infrastructure. Today the main topic is the OpenAIRE aggregation and enrichment processes and how the workflows operate. With us we have Alessia Bardi from CNR in Pisa, Italy, and Andreas from Bielefeld University (UNIBI) in Germany to detail this part. Of course you can put your questions in during the presentations, or at the end we will reserve some time for comments and questions addressing the issues presented by Alessia and Andreas, or about any other issue you have related to the services for content providers.
As you know, all the information — the notes, the agenda, the recordings of these community calls — is available on the webpage for the PROVIDE community calls in the OpenAIRE portal. Before we start with Alessia's and Andreas's presentation, I usually take some time here to highlight novelties about the services and functionalities in the PROVIDE dashboard. We don't have many new things this time, but I want to highlight two items related to the dashboard. The first is that we are working on a redesign of its user interface. The current version has been available since October 2018; that was, let's say, version one of this service, where we gathered all the services for content providers in one single point, the dashboard. That first version was important for us to release the service, but we know there are some issues with the user experience, so we want to improve it. We received comments in several workshops and webinars and we are improving based on your feedback, and we will put the new version in beta. It is not available yet, but within one or, at most, two weeks the new layout of the dashboard and some new functionalities will be available in beta. Regarding your participation: we already have three or four repository managers involved to check the beta version and give us their feedback, but we want more — not a hundred, but ten or twelve repository managers to check the recent developments in PROVIDE. So share your availability with us if you want to participate in this user board, which we want to use to properly evaluate this version before putting it into production, based also on your comments.
If you have some availability and want to participate, just send me an email and I will add you to the user board. We will contact you during this month, March, so you can check the beta version and participate in an online user assessment session. The other piece of information is a highlight: the collection monitor feature has been available in the dashboard for several months now. In the collection monitor page you can check the status of your aggregation; Andreas will detail later what the different elements on that page mean. I just want to highlight that you can follow the aggregation history — when was the last time we aggregated your content — and there is a blue label that lets you easily check when the last indexed version appeared in the OpenAIRE portal, that is, the last date we updated the OpenAIRE EXPLORE service based on the content aggregated from your repository. I would also like to advise you to subscribe to the newsletter — a new OpenAIRE newsletter targeting content provider managers. It is available on the OpenAIRE portal, and my colleague Andre will share the subscription link in the chat. We also have our public Trello roadmap for PROVIDE, where we post the things we are releasing and the new functionalities available, and where you can also provide feedback. It is important for you to know that we have these different means to collect your feedback.
Now we will have Alessia and Andreas detail the OpenAIRE aggregation and enrichment processes. I think it is quite important for you to be more aware of how everything works on the back end of OpenAIRE. We have received several questions via different channels about these processes, so this is a good topic for this community call. So we will start — Alessia, the floor is yours. Thank you, Alessia, for being available to provide these explanations. Thank you for inviting me to join these community calls, which are always very interesting. Can I share my screen? Yes, I will stop sharing and now you can start. Okay, good. As Pedro was saying, this presentation is about the OpenAIRE aggregation workflows and the processes in place to enrich the metadata records we collect from the different data sources in the OpenAIRE network. The first thing to explain is that we have an aggregator technology which collects metadata and full texts from the providers. Thanks to this technology we are able to build what we call the OpenAIRE Research Graph: a graph of open metadata about research products, with access rights information, with links to funding information — projects, funders and funding streams — and links to the research communities and infrastructures that contributed to, or are interested in, these research products. Why a graph? Because, as you can see, all these entities — publications, data, software, projects, organizations — are linked to each other, so a graph model is an option that allows us to represent research products in the open science framework.
These are some numbers you can see in the production and in the beta portal. We have more than 10,000 data sources providing different types of content, ranging from metadata about publications and datasets to metadata about projects and software. You can see a big difference in the number of publications — 37 million in production and more than 100 million in beta — because we are experimenting with new sources of metadata. How do we reach this amount of aggregated content? As I said, we collect metadata, links and full texts from all these sources, not only in Europe but all over the world. In addition to repositories, we collect information from aggregators, and also from other important sources of scholarly communication: Unpaywall, OpenCitations, the Microsoft Academic Graph, Software Heritage, and projects from CORDIS and other national and international funders. We have organizations from GRID.ac, a number of publishers, journals from DOAJ, and specific sources that serve research communities and infrastructures, like the European Grid Infrastructure (EGI), EPOS, DARIAH and ELIXIR-GR — and this is possible thanks to the collaboration we have with these research communities and infrastructures. At a very high level, what happens in the aggregator is that we collect from data sources: registries, repositories, open access journals, aggregators, publishers and CRIS systems. We collect the information these data sources offer — metadata, relationships and full texts — but we also have metadata and relationships that users of the EXPLORE and CONNECT portals can add to enrich the content. In beta we are also experimenting with new sources: one is called ScholeXplorer, and it contains relationships between literature products and datasets.
The other one we call DOIBoost, which is basically the merge of ORCID, Unpaywall, Crossref and the Microsoft Academic Graph: metadata from Crossref is enriched with information available in the other sources. From Unpaywall we are able to get the open access version of a publication. All this content together forms what we call the raw OpenAIRE Research Graph. Why "raw"? Because there is a path this data follows before it can be published and made available in the EXPLORE portal and our APIs. The first step is deduplication. Since we collect from many sources, it is very likely that we collect different metadata records describing the same entity — for example, two metadata records describing the same publication, or the same organization. We find these duplicates and they become one. After deduplication we also process the full texts of the open access publications we were able to collect from the sources. Thanks to these mining techniques, we are able to find links to projects, links to datasets and software, affiliations, subject classification terms, citations, and other information that was not available in the metadata we collected — so we can enrich it. This can go even further, because we can apply a deduction technique that allows us to understand whether a research product is relevant for a research community or infrastructure. This is done by considering some of the information available in the record itself, for example the provenance of the record or its keywords. We have some discipline-specific communities, like Digital Humanities and Cultural Heritage, so we know that when a record contains a keyword such as "art", it is relevant for that community. The final step is propagation.
Thanks to propagation, we are able to pass information from one object to another that is connected to it by a strong relationship. For example, if a publication is supplemented by a dataset, the abstract of the publication can be propagated to the dataset so that the dataset becomes more easily discoverable. At this point we have the complete OpenAIRE Research Graph, which is used to calculate statistics — you can see them in OpenAIRE MONITOR — and which is visible and searchable via the EXPLORE portal, the community gateways available under CONNECT, and our APIs. This is very important because the European Commission, for its participant portal, is one of the clients of our APIs: if a publication has a link to a project, the coordinator of that project will be notified about the publication in the participant portal and will be able to accept or discard the suggestion when performing the continuous reporting for the European Commission. There are two main integration scenarios for data sources. One is direct harvesting, which is what we do for repositories and journals. The other is indirect harvesting, which is what we do via aggregators and publishers. The idea is that from an aggregator we can collect metadata records that are in fact hosted by another source. In most cases we are able to understand which is the original source, thanks to information in the records themselves: the aggregator often includes information about the original repository — OpenDOAR or re3data identifiers, or something that can be resolved to an OpenDOAR or re3data identifier — while for journals we have the ISSN numbers. But this is not always the case, and this is why, in the portal, you will sometimes see the "Unknown Repository".
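To make the source-resolution step concrete, here is a rough Python sketch of the logic just described. The registry tables, identifiers and record fields are invented stand-ins for illustration, not OpenAIRE's real data structures:

```python
# Sketch of resolving the original source of a record harvested indirectly
# via an aggregator. The lookup tables below are hypothetical examples.

OPENDOAR = {"opendoar:1234": "Example Institutional Repository"}
RE3DATA = {"r3d:100005": "Example Data Archive"}
ISSN_JOURNALS = {"1234-5678": "Example OA Journal"}

def resolve_original_source(record: dict) -> str:
    """Map an aggregator record back to its original repository or journal."""
    source_id = record.get("original_source_id", "")
    if source_id in OPENDOAR:
        return OPENDOAR[source_id]
    if source_id in RE3DATA:
        return RE3DATA[source_id]
    issn = record.get("issn", "")
    if issn in ISSN_JOURNALS:
        return ISSN_JOURNALS[issn]
    # nothing resolved: the record stays attributed to "Unknown Repository"
    return "Unknown Repository"
```

The final fallback mirrors what the portal shows when no registry identifier or ISSN could be resolved.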
Basically, that is what happens when we were not able to resolve the original repository: we collected from the aggregator, and we don't know who the maintainer of the record is, let's say. If you want to know which sources we collect via aggregators and publishers, you can search the data sources and look for the label "collected from a compatible aggregator": the sources tagged with this label are those harvested thanks to an aggregator. In this case the metadata is provided in a format compliant with the guidelines — and I think you know these very well. We have guidelines for the different providers: for publication repositories and journals, for data archives, for CRIS platforms, and for software repositories and other research products repositories. The guidelines for software and other research products are new, and we are experimenting with some of them. So how do you join the aggregation — how do you join the infrastructure and get aggregated by OpenAIRE? The first step is to validate: you check that the records you expose are compliant with the guidelines, and then you register. Once you register, you enter the OpenAIRE aggregation. Here you can see what happens: for each data source we create an aggregation workflow composed of two stages. The first is collection — the harvesting — and then we have the transformation. All these workflows are autonomous, and each of them can be configured to execute automatically, in a scheduled way. We can say, for example, "for this repository, run the aggregation workflow once a week; for that repository, run it once a month". So this is highly configurable at the level of the single source, the single repository. In addition to the aggregation workflow, you can also see the full-text collection.
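The per-source, two-stage workflow just described could be sketched roughly like this; the class, field and method names are illustrative assumptions, not OpenAIRE's actual implementation:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class AggregationWorkflow:
    """One autonomous workflow per data source: collect, then transform."""
    source_id: str
    interval_days: int              # e.g. 7 for weekly, 30 for monthly
    last_run: Optional[date] = None

    def due(self, today: date) -> bool:
        # a workflow is due on its first run, or when its interval has elapsed
        return self.last_run is None or \
            today >= self.last_run + timedelta(days=self.interval_days)

    def run(self, today: date) -> dict:
        records = self.collect()               # stage 1: harvest from the source
        transformed = self.transform(records)  # stage 2: map to the internal format
        self.last_run = today
        return {"collected": len(records), "transformed": len(transformed)}

    def collect(self) -> list:
        # placeholder: a real workflow would harvest via OAI-PMH or an API
        return [{"id": "rec-1"}, {"id": "rec-2"}, {"title-only": True}]

    def transform(self, records: list) -> list:
        # placeholder: records failing the guidelines are discarded here
        return [r for r in records if "id" in r]
```

Each data source would get its own instance, so one repository can run weekly while another runs monthly, matching the per-source scheduling described above.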
This means that, whenever we can, we will try to collect the full texts of the open access publications. The texts are stored inside the OpenAIRE aggregator, and these files are used by our information inference system to perform full-text mining on top of them. Going back to the metadata aggregated with the aggregation workflows: at some point all these workflows are stopped and we take a snapshot — it is as if we take a picture of what has been collected so far — and we build the OpenAIRE Research Graph starting from that snapshot. Of course, none of these processes could work without the Bielefeld team, which is the OpenAIRE aggregation team. This team performs a lot of activities, like the activation of the aggregation workflow for each source. They check that the data is supplied in the proper way — that your OAI-PMH endpoint is working and that the format is the one we expect — and they also configure the transformation step. The transformation step is needed because we want to assign the proper typologies to the records: we want to be sure that records fall in the proper place in the portal, so we give them labels that tell whether a record is a literature product, a dataset, software, or something else. In addition, thanks to the transformation step, the aggregation team can address some metadata quality imperfections, making the metadata a little better than the version that was collected. Some imperfections can be solved during the transformation step; in other cases we need action on the source, on the repository side. This is why, in some cases, the OpenAIRE aggregation team contacts you and suggests improvements or corrections, or asks you for permission to download the open access full texts.
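A minimal sketch of what such a transformation step might look like — the type vocabulary and clean-up rules here are assumptions for illustration, not OpenAIRE's actual crosswalks:

```python
from typing import Optional

# Hypothetical mapping from source type values to internal typologies.
TYPE_MAP = {
    "info:eu-repo/semantics/article": "literature",
    "info:eu-repo/semantics/doctoralThesis": "literature",
    "dataset": "dataset",
    "software": "software",
}

def transform_record(record: dict) -> Optional[dict]:
    """Return a cleaned record with a typology, or None to discard it."""
    title = record.get("dc:title", "").strip()
    if not title:
        return None  # e.g. a record without a title is discarded
    raw_type = record.get("dc:type", "").strip()
    return {
        "title": title,                               # whitespace cleaned up
        "typology": TYPE_MAP.get(raw_type, "other"),  # fall back to "other"
        "source": record.get("source"),
    }
```

Discarding here is what makes the "transformed" count on the collection monitor smaller than the "collected" count.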
In this diagram you can see what I mentioned before about our attempts at putting the right labels on the records. For example, from publication repositories — from institutional repositories — we collect publications, but we also collect datasets, software and other research products, and there are similar cases for the other types of sources we have. If you have questions on this, Andreas can address them. Regarding the open access full texts, I want to highlight that OpenAIRE collects the open access full texts hosted at the repositories, but OpenAIRE does not redistribute them. This means that the link you find to access the full text via the OpenAIRE EXPLORE portal is not an OpenAIRE URL: it is exactly the URL to your repository. From the repository's point of view this is a benefit, because the EXPLORE portal can be an additional entry point for users to your content. And if you enable the usage statistics — one of the options available in the PROVIDE dashboard — you will also get usage statistics that include the views on OpenAIRE and the full-text downloads. The other important thing we can do, if you provide us with the full texts, is run mining algorithms and find information that is not available in the metadata; you will be able to get this new information back via the notification broker. This is also the way for other repositories to learn about your open access version of a paper: if one repository has the metadata record of a publication but only a closed full text deposited by the researcher, another repository may have the open access version. This allows the exchange and dissemination of open access versions of full texts between repositories, again via the broker.
Finally, as I mentioned before, thanks to the connection with the EC participant portal, if OpenAIRE knows that there is an open access version of a paper, it will be easier for the project coordinator to report it to the Commission in the participant portal. Now, how do you monitor your aggregation workflows? If you go to PROVIDE and, under "Compatibility", you select the menu entry "Collection monitor" and click on it, you will see a timeline — a line with circles and rectangles — with information about what happened to the data we collected from your source. There is a lot of information on this page. First of all, each box is an aggregation stage; remember what I said before, we have two stages, collection and transformation, and both are visible on this page of PROVIDE. You also have, of course, the date when each aggregation stage happened. For the collection stage you can also see whether OpenAIRE collected everything from your repository or only a part of it, in incremental mode. If you see "incremental mode", it means we collected only the records that were updated since the previous collection — so, in this example, if you see "incremental" here, you know we collected the records updated since the 14th of February. The other piece of information is the number of records that were collected. Since we are talking about stages of the same pipeline, you can assume that a collection is always followed by a transformation. "But wait," you might say, "it is the same pipeline, yet the number of records is different." That is a good observation, and there are many reasons why it can happen. In some cases, some records have to be discarded by the aggregation team in the transformation stage.
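Incremental collection maps directly onto the standard OAI-PMH `from` argument of the `ListRecords` verb: the server then returns only records created or changed since that date. A small sketch of building such a request (the endpoint URL is hypothetical):

```python
from urllib.parse import urlencode
from typing import Optional

def listrecords_url(base_url: str, metadata_prefix: str,
                    last_harvest: Optional[str] = None) -> str:
    """Build an OAI-PMH ListRecords request URL. With the optional `from`
    argument, only records updated since that date are returned, which is
    what an incremental harvest uses."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if last_harvest:
        params["from"] = last_harvest  # e.g. the date of the previous collection
    return base_url + "?" + urlencode(params)
```

Without `from`, the same request performs a full harvest of the repository.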
One reason could be that those records are not compliant with the guidelines. As a suggestion, when you see this situation, you can validate your repository again and check the records: this could give you a hint about the issue in the data you are exposing. Then, if you look for the OpenAIRE logo, you will find out which version of the records is visible in the OpenAIRE EXPLORE portal; you can also see that thanks to the blue badge that says "indexed version". So, looking at the slide, the version of the records visible in the portal is the one from the 14th of February — Valentine's Day — but OpenAIRE collected again on the 22nd, and that newer version of the records is not yet visible in the EXPLORE portal. Remember also that some of the records were discarded: in the graph generation pipeline, only the number of records visible in the "transform" box ended up in the OpenAIRE Research Graph. Are there any questions so far? I can see some comments in the chat, but no questions yet. Okay, we have one question now in the chat: "Is it possible to have the results of the collection monitor for several repositories in a table or CSV file? I have to check these collections for 15 repositories, so I need to click 15 times." Yes, I understand the issue. This is technically possible if your account is associated with all 15 repositories: if you have visibility of all 15 repositories with the same account, then it is possible; if instead you log in every time with a different account, it is not. Ana — who is the contact person for the repository managers in Serbia — is confirming that her account is connected with all of them. This is new for me.
So yes, I think this is something that can be done in OpenAIRE. Okay, now the enrichment processes. As I said before, we infer information by performing full-text and data mining, but we also do more, for different reasons. We want to foster PIDs, so we try to propagate ORCID iDs whenever we find it reasonable: when we are able to understand that the author of a publication, for example, is the same author of a dataset, we want to propagate the ORCID iD — from publication to dataset, or from publication to another publication — whenever we understand the author is the same. We want to enrich the graph to improve discovery, so we try to propagate abstracts from articles to datasets and software, because the metadata of literature products is usually much richer than the metadata we have for data and software. We want to improve monitoring — and if you think about monitoring at the institutional level, it is very important for us to add information about the relevant organizations, the affiliations, of the products. What we are currently working on, in addition, is identifying links between articles and the related presentations, and identifying "hidden" research software: currently we are able to find links between literature and software whenever the software is available in public software repositories like GitHub, or in Software Heritage, but in some cases researchers just upload a zip file or another type of archive on the web, and we would like to find links to that software as well. Okay, here we see again the supply chain from the raw OpenAIRE Research Graph to the final OpenAIRE Research Graph. The first step is deduplication: as I said at the beginning, whenever we find metadata records describing the same object, we put them together.
We could discuss for days, I think, about what it means for two records to describe the same object. In the context of OpenAIRE, since we want to provide statistics and monitoring, we consider the preprint, the postprint and the published version as the same object. In some cases they may not be, but in OpenAIRE they are. This is why in the portal, when you open the page of a publication, you sometimes see on the right that there is a version coming from the publisher and a version coming from an institutional repository; in some cases the publisher's version is closed, and the repository instead provides an open access version, which can be the preprint or the postprint — this of course depends on the policies in place for the specific journal where the article was published. You can also see some numbers relative to the beta infrastructure, where we also have Crossref: we harvested 160 million metadata records about publications, and we ended up with 110 million. We also perform deduplication on organizations, using the GRID.ac identifier as a pivot — the piece of information that allows us to group together metadata records describing the same organization. Then, there are three main processes for enriching the metadata in the OpenAIRE Research Graph. The first one is inference, which is applied to the metadata records in the graph — because it uses the information available in the abstract metadata field — and to the full texts. In beta we have 10 million open access full texts, and we are able to mine 130 million links: links to projects, software, datasets, research communities and infrastructures, and similarities between publications. We also mine for specific properties — not links between entities, but properties like abstracts, subject classification terms and citations.
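A toy illustration of the grouping idea behind deduplication — group by a persistent identifier when available, otherwise by a normalized title, then merge each group into one representative. The real OpenAIRE deduplication uses much richer similarity criteria than this sketch:

```python
import re
from collections import defaultdict

def dedup_key(record: dict) -> str:
    """Group by DOI when present, otherwise by a normalized title."""
    if record.get("doi"):
        return "doi:" + record["doi"].lower()
    title = re.sub(r"[^a-z0-9]+", " ", record.get("title", "").lower()).strip()
    return "title:" + title

def deduplicate(records: list) -> list:
    groups = defaultdict(list)
    for rec in records:
        groups[dedup_key(rec)].append(rec)
    merged = []
    for dups in groups.values():
        representative = dict(dups[0])  # keep the first record as representative
        # remember every source that contributed a version of this object
        representative["sources"] = sorted({d["source"] for d in dups})
        merged.append(representative)
    return merged
```

Keeping all contributing sources on the merged record is what lets the portal show both the publisher version and the repository version side by side.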
What we are going to integrate in beta very soon are the links to patents. For this, our team worked together with the European Patent Office, so we are going to have links between publications and patents, available from the PATSTAT database. For the deduction of information, the idea is that, based on what we know about a research product — its provenance, its subject terms — we can decide whether it is relevant for a community or infrastructure. In the example here, you can see we have a result with a subject S, and the community C told us that it works on different topics, one of which is S; so what we can do is add a relationship between the research product and the community. A similar approach is taken by the information propagation technique, but here the information we need is not only in the record we are looking at, but in its surroundings. We are able to propagate abstracts, links to projects, countries, communities and infrastructures, and ORCID iDs. In the example, you see that a result is collected from an institutional repository, which belongs to an organization that is based in Italy, let's say; therefore we can tell that the result can be assigned to the Italian country. As an ongoing activity, we are thinking about propagating organizations from one product to another when they are linked by strong relationships like "is supplemented by" or "supplements". To give you an example to make it clear: if I have a publication, one of its authors is affiliated with CNR, the publication is linked to a dataset, and the dataset has the same author, then I can say that the dataset is also connected to CNR, because they have the same author.
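The two rules in this example — subject-based community tagging (deduction) and country assignment from the collecting repository (propagation) — can be sketched as follows; all mappings and field names below are invented for illustration:

```python
# Hypothetical configuration: community -> topics, repository -> country.
COMMUNITY_TOPICS = {"dh-ch": {"art", "archaeology"}}
REPO_COUNTRY = {"cnr-repo": "IT"}

def enrich(product: dict) -> dict:
    enriched = dict(product)
    # deduction: a subject matching a community's topics adds a community link
    enriched["communities"] = sorted(
        community for community, topics in COMMUNITY_TOPICS.items()
        if topics & set(product.get("subjects", []))
    )
    # propagation: assign the country of the repository the record came from
    country = REPO_COUNTRY.get(product.get("collected_from", ""))
    if country:
        enriched["country"] = country
    return enriched
```

The first rule only looks inside the record; the second needs the record's surroundings (the repository it was collected from), which is exactly the distinction drawn above between deduction and propagation.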
This is a little bit tricky, and we have to study it with some real use cases in order to come up with specific criteria to apply. But the idea is to understand, more and more, which organizations are, let's say, responsible for the research products, because this will allow us to build the institutional dashboards with more precision and fuller coverage. And this was my last slide.

Maybe you can keep your last slide up for the discussion, because we have a question here from Roland — and you are all free to use your microphones to put your questions or share your thoughts. Roland's question is about version 4 — version 4, not 5 — of the OpenAIRE guidelines: "Do I rightly assume that the enrichment of publications with links to data can only occur with the OpenAIRE 4.0 guidelines?" I think here we need to explain the difference between the enrichments coming from inference and the enrichments coming from the links already available in the metadata records. Alessia, please explain.

Yes. The inference is based only on the metadata and full texts we have; we do not take into consideration the level of the guidelines. Once the records are added to the OpenAIRE Research Graph, we no longer care about the guidelines level of the sources from which we collected, so we are able to add links also for publications that were originally only DRIVER-compliant. There is no relationship between the guidelines level of the provider and the enrichment opportunities.

Right — and if a repository is already exposing relations within its metadata, for example relations between publications and other publications, or between publications and datasets, then we already have that information, and it is important for repositories to expose it. The enrichments based on inference, instead, are independent of the metadata we receive from a specific repository: we offer inferred links between publications and datasets also to repositories that are compliant with guidelines versions 1, 2 or 3, the previous versions. But of course we can do much more with repositories that expose rich metadata via the OpenAIRE 4.0 guidelines. Thank you.

So, please share your thoughts or your questions. Meanwhile, I want to highlight two things that I think are quite important. One is about the collection of the full texts and the advantages Alessia highlighted: if we have the full texts of the content from your repository, we can of course provide much richer, added-value services. But it is important to stress that OpenAIRE only uses the full texts for the purposes of inference and enriching the graph and the links that are part of it. We do not use the full texts stored in our back-end services to link the records in OpenAIRE to full texts hosted by OpenAIRE: we always link to the original data source.
So from all our services, from Explore and the rest, we only use the PDFs that we hold for inference purposes, to generate those added-value services. This is important because it differentiates OpenAIRE from some other similar services. The other important point is about organizations, which came up in one of the slides just before this last one. Maybe we should provide some more information about the work we are doing on the deduplication and merging of organizations. Yesterday I was discussing the upcoming topics for community calls with my colleague Andre, and this work around organizations is quite important, because it is critical for infrastructures like ours and for all content provider managers. So maybe you can share a little bit more about what we are experimenting with in order to enrich our graph in terms of organizations; and later on, close to the summer or just after, we should have one call dedicated to the organizations in the OpenAIRE graph. Yes. So, okay: finding duplicate organizations is not easy, because the metadata available to describe an organization is just a few fields: the name, the acronym, sometimes the country, sometimes the website. And these can differ even when the organization is the same.
Yes, just before you continue, just to state the problem in ten seconds: OpenAIRE is aggregating from many sources, so we also collect information about organizations from many sources: from OpenDOAR, from the organizations behind each repository, for example, and from the funders. There are also affiliations within the publications, and so on, so at some point it becomes a mess to manage, and this is why we have a plan to better organize this mess. Thank you, Pedro. Yeah, so this is a hard task. We have an automatic process that runs to identify duplicates, but we know we can do better, because often we cannot confidently merge two organizations which are in fact the same, while in other cases two organizations that are clearly different to a human looking at them are hard for a machine to tell apart. Basically, we have reached a stage where we cannot postpone this anymore, so we are working on a tool that will enable a trusted team of persons, composed basically of the OpenAIRE National Open Access Desks, so that each NOAD will curate the organizations of its own country. This will be very useful for addressing the cases that cannot be captured automatically, and for fixing the issue. Then, of course, as Pedro said, we have affiliations inside the metadata records, but we could have more, because, as I said at the beginning of my presentation, we are also including metadata records coming from Crossref and the Microsoft Academic Graph, and we are not yet taking all the affiliations they provide. So the number of organizations and the number of affiliations available in the graph will further increase in the next months.
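To make concrete why sparse organization metadata defeats purely automatic matching, here is a tiny illustrative sketch, not OpenAIRE's actual deduplication pipeline, of a heuristic that compares organizations on country plus fuzzy name similarity; all names, fields, and the threshold are made up for the example:

```python
# Illustrative sketch only: shows why name/country metadata alone makes
# organization deduplication hard. Not OpenAIRE's real algorithm.
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    """Lowercase and drop punctuation so 'Univ. of Pisa' ~ 'university of pisa'."""
    return "".join(c for c in name.lower() if c.isalnum() or c.isspace()).strip()

def likely_duplicates(org_a: dict, org_b: dict, threshold: float = 0.8) -> bool:
    """Heuristic: same country (when both are known) and very similar names."""
    if org_a.get("country") and org_b.get("country"):
        if org_a["country"] != org_b["country"]:
            return False
    score = SequenceMatcher(
        None, normalize(org_a["name"]), normalize(org_b["name"])
    ).ratio()
    return score >= threshold

a = {"name": "University of Pisa", "country": "IT"}
b = {"name": "Univ. of Pisa", "country": "IT"}
c = {"name": "University of Vienna", "country": "AT"}

print(likely_duplicates(a, b))  # True: same country, very similar names
print(likely_duplicates(a, c))  # False: different countries
```

The failure modes mentioned in the call follow directly: two records for the same organization with very different name spellings fall below any reasonable threshold (missed merge), while two genuinely different organizations with similar names in the same country can score above it (wrong merge), which is why human curation by the NOADs is needed on top.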
And so, yes, we are going to put more and more information into the graph. So, we are coming to the end, but Alexander also asks: "Do you mean OpenDOAR or GRID?" We collect information about organizations primarily from OpenDOAR, from re3data, from DOAJ, which provides information about the journals, and then from the funders, because they give us information about the organizations participating in the projects they fund. Talking about the funders, I forgot: Brianna sent me an alert because, in addition to her congratulations on your presentation, Alessia, she put another question. It is about the link between the index update and the information available in Monitor, which is important. We try to have updated content in the index every two weeks. Sometimes we cannot really make it, because the size of the graph is big, and sometimes issues arise and we have to start again from scratch. The issues become even worse when we come to the statistical analysis of the graph: often we are able to update the index but not the statistics, which is why the numbers in Monitor are often not the same. Whenever we perform an update, however, we update a page on the OpenAIRE portal, which you can reach via the "last index information" link. It redirects to a page with an overview of the aggregation and content provision workflows, and at the end there is a table stating when the index was updated and when the statistics were updated. I am very sorry for the delay with the statistics; our team is working on a new technology that should be ready soon, so this gap should be filled in the next period. Okay, thank you.
We are coming to the end; if you want to put more questions you can, and you can also enable your microphone and simply talk. And there is a hand raised, do we have a new question? "Can I ask you something that unfortunately is a bit out of topic?" Please, and can you present yourself? "Yes, I am Alexander Bianchi from the EPFM. We updated our OAI-PMH interface to version 4 of the guidelines and we still cannot check it: we appear neither in Explore production nor in the beta." And this was recent? "No, this work began in November; we were validated and registered in November. Then I know that some index problems happened, as you said, and this points to my case. I would like to know whether a new index update is needed in order to see our transformed metadata in the new version, because it seems we are still harvested as simple DRIVER, in both production and beta. I was also told that we are harvested in version 4 but what is exposed is still DRIVER, and I cannot understand this. I am really sorry, perhaps it is only me; if it is of no interest, I can stop." Sorry, just to reply, we need some more information; just write in the chat the exact name of the repository, of the data source. Emily is also on the call, so maybe, Emily, you remember this case, I suppose? "You hear me? Yes? Okay, yes, that is why I decided to chime in. Alexander and I talked about the case. The problem is that the beta index has not been updated in such a long time, and I do not know either when it will be ready; maybe you can shed some light on it, because in the beta system the version 4 metadata is aggregated, but because of the index delay it is not visible on the beta portal."
Okay, so I have quite good news on this, because my colleague tells me he is now performing some quality checks on the new content for the beta, so we think we will be able to make the new content publicly available in the next days. "So first in beta, in order to check it, and not yet in production?" Yes, yes, because in production since November we have already done four or five updates of the portal. So let us exchange some information offline to give a concrete reply to Alexander, and also a timeline for having the information properly available in production. Thank you. Alexander, it is your case, but it is also the case of other institutions, so I think it is important: what you did to comply with the new version of the guidelines is what others are doing too. So, you can put other questions, but, Alessia, if I can take over as presenter, I have only two slides, just to make two highlights at the end; if you can move to the next slide. One thing is about the upcoming calls: be aware that we have this page where we put all the information about the previous calls and the future ones, so you will find the recordings, the slides, and also some notes. What is important is that you put the coming calls in your calendar; we have already scheduled all the calls until the summer. For the call in April the main topic will be the DSpace-CRIS use case, an implementation of the CRIS guidelines, with some use cases to present. I think it is important for those participating in the CRIS ecosystem, and for repositories that have some connections with CRIS systems. We will have other topics as well.
The novelties, and the information we can provide to you, will be the main topic of the call on the first of April. The other slide is just, once again, to highlight the newsletter: now that we have it, we want to use this new channel to share relevant information with you about services, developments, training and support materials, etc. We are coming to the end, but we have one more question here from Michaela Hubert: "Okay, sorry, I did not hear all of it: what do the NOADs have to manually curate?" It was about the organizations: we will share with all the OpenAIRE NOADs and with the community the work we are doing around organizations, to better present and organize the organizations and the affiliations of the authors in the OpenAIRE infrastructure. That task is, let us say, in a testing phase. I must confess that, personally, what I saw when the colleagues from CNR presented it is great. It is a great way to have a single entry for each organization, manually curated or automatically merged, and I think this will really increase the quality of OpenAIRE; then we can also use it to think about added-value services for the community and for the content providers, repositories, etc. What we promise is that we will do one community call about this, to show what we are doing and, as always, involve the community. Okay, thank you all. For those that arrived a little bit late: you can check the recordings later, we will publish them this afternoon.